
ENHANCING DOWNLINK PERFORMANCE IN WIRELESS

NETWORKS BY SIMULTANEOUS MULTIPLE PACKET

TRANSMISSION

Project Report Submitted to the

Faculty of Computer Applications of

Jawaharlal Nehru Technological University, Hyderabad

in Partial Fulfillment of the requirements

for the award of degree

Master of Computer Applications

By

MADHUKAR KUNDAPURAPU

(08871F0014)

Under the Guidance of

Mr.K.SHIVA KUMAR

Associate Professor

DEPARTMENT OF COMPUTER APPLICATIONS

RAMAPPA ENGINEERING COLLEGE,

SHAYAMPET JAGIR, WARANGAL – 506001.

2008 - 2011


DEPARTMENT OF COMPUTER APPLICATIONS

RAMAPPA ENGINEERING COLLEGE

SHAYAMPET JAGIR, WARANGAL

C E R T I F I C A T E

This is to certify that Mr. MADHUKAR KUNDARAPU with H.T. No.

08871F0014 of Master of Computer Applications has successfully completed the

dissertation work entitled “ENHANCING DOWNLINK PERFORMANCE IN

WIRELESS NETWORKS BY SIMULTANEOUS MULTIPLE PACKET

TRANSMISSION” in partial fulfillment of the requirements for the award of the MCA

degree during this academic year 2010-2011.

Guide Head of the Department PRINCIPAL

(K.Shiva Kumar) (Mr. S.Narasimha Rao) (Dr. V. Janaki)

EXTERNAL EXAMINER


DECLARATION

I do hereby solemnly declare that the project entitled “ENHANCING DOWNLINK

PERFORMANCE IN WIRELESS NETWORKS BY SIMULTANEOUS MULTIPLE

PACKET TRANSMISSION” is done by me as a partial fulfillment for the award of the

degree MCA (Master of Computer Applications) during the academic year 2010-2011

and that no part of this work, in part or in full, has been submitted to any other University.

I further declare that this dissertation has not been submitted for any previous

course.

MADHUKAR KUNDARAPU

HT.No:08871F0014


ACKNOWLEDGEMENT

I wish to place on record my deep sense of gratitude to Dr. V. JANAKI,

Principal, RAMAPPA ENGINEERING COLLEGE, Warangal for her valuable

suggestions, advice and corrections during this work and throughout the course.

I express my deep sense of gratitude to my Supervisor, Mr.S. NARASIMHA

RAO, Associate Professor and Head, Department of Computer Applications for his

valuable guidance. I consider myself extremely fortunate to have a chance to work under

his supervision. In spite of his hectic schedule, he was always approachable to give his

valuable advice. His keen interest and constant inspiration have been of great help for me

throughout the course of this project work.

I express my deep sense of gratitude to my Guide, Mr. K. SHIVA KUMAR,

Department of Computer Applications, for his valuable guidance. I consider

myself extremely fortunate to have a chance to work under his supervision.

I thank all the faculty members of the Department of Computer Applications for

sharing their valuable knowledge with me. I extend my thanks to the technical staff of

the department for their valuable suggestions to technical problems.

Finally, I express my gratitude and thanks to the Almighty, my parents, and all my

family members and friends; without their support, I could not have completed this

course (Master of Computer Applications) and this project.

MADHUKAR KUNDARAPU

Ht No: 08871F0014


TABLE OF CONTENTS

S.NO TITLE PG.NO

CHAPTER 1. INTRODUCTION

1.1. INTRODUCTION TO PROJECT 10

1.2. ORGANIZATION PROFILE 11

1.3. PURPOSE OF THE PROJECT 12

1.4. PROBLEMS IN EXISTING SYSTEM 12

1.5. SOLUTION OF THESE PROBLEMS 12

CHAPTER 2. LITERATURE SURVEY

2.1 INTRODUCTION 14

CHAPTER 3. SYSTEM ANALYSIS

3.1. INTRODUCTION 17

3.2. STUDY OF THE SYSTEM 20

3.3. HARDWARE AND SOFTWARE REQUIREMENTS 20

3.4. PROPOSED SYSTEM 21

3.5. INPUT AND OUTPUT 21

3.6. PROCESS MODULES USED WITH JUSTIFICATION 21

3.7. FEASIBILITY REPORT 22

3.7.1. TECHNICAL FEASIBILITY 24

3.7.2. OPERATIONAL FEASIBILITY 25

3.7.3. ECONOMICAL FEASIBILITY 26

3.8 SOFTWARE REQUIREMENT SPECIFICATIONS 26

3.8.1. FUNCTIONAL REQUIREMENTS 27

3.8.2. PERFORMANCE REQUIREMENTS 28

CHAPTER 4. SYSTEM DESIGN

4.1. INTRODUCTION 29

4.2 SYSTEM WORK FLOW 33

4.3 UML DIAGRAMS 37

4.4. OUTPUT SCREENS 46

CHAPTER 5. IMPLEMENTATION

5.1. INTRODUCTION TO .NET FRAMEWORK 48

5.2 VC#.NET 52

CHAPTER 6. TESTING

6.1. STRATEGIC APPROACH OF SOFTWARE TESTING 65

6.2 UNIT TESTING 66

6.3. SYSTEM SECURITY 68

6.3.1 INTRODUCTION 69

6.3.2 SECURITY IN SOFTWARE 69

6.4 TEST CASES 70

CHAPTER 7. SCREEN SHOTS 72

CHAPTER 8. SCOPE FOR FUTURE ENHANCEMENT 79

8.1 WEBSITES 81


TABLE OF FIGURE CONTENTS

S.NO TITLE PG.NO

3.1.1 SOFTWARE DEVELOPMENT LIFE CYCLE 17

3.1.2 V-MODEL DIAGRAM 19

4.2.1 CONTEXT LEVEL DFD DIAGRAM 34

4.2.2 LOGIN DFD DIAGRAM 35

4.2.3 IP TRACK DFD DIAGRAM 35

4.2.4 RECEIVER IP TRACK DFD DIAGRAM 36

4.4.1 SERVER USE CASE DIAGRAM 38

4.4.2 CLIENT USE CASE DIAGRAM 39

4.4.3 LOGIN SEQUENCE DIAGRAM 40

4.4.4 RECEIVER SEQUENCE DIAGRAM 41

4.4.5 SENDER SEQUENCE DIAGRAM 42

4.4.6 LOGIN COLLABORATION DIAGRAM 43

4.4.7 RECEIVER COLLABORATION DIAGRAM 44

4.4.8 SENDER COLLABORATION DIAGRAM 45

5.1.1 ARCHITECTURE OF CLI DIAGRAM 46

5.1.2 VERSIONS OF .NET FRAMEWORK DIAGRAM 53

5.1.3 SERVER SIDE MANAGED CODE DIAGRAM 54

5.2.1 COMMON TYPE SYSTEM DIAGRAM 59

5.2.2 THE ROLE OF .NET ENTERPRISE ARCHITECTURE 63

6.2.1 UNIT TESTING DIAGRAM 66

7.1 SCREEN SHOTS 72


ABSTRACT

In this paper we consider using simultaneous multiple packet

transmission (MPT) to improve the downlink performance of wireless

networks. With MPT, the sender can send two compatible packets

simultaneously to two distinct receivers and can double the throughput in the

ideal case. We formalize the problem of finding a schedule to send out

buffered packets in minimum time as the problem of finding a maximum

matching in a graph.
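The matching formulation above can be made concrete with a small sketch: buffered packets are vertices, an edge joins two compatible packets (ones the AP could send in the same time slot), and a maximum matching yields the minimum number of slots. The following Python sketch is only an illustration under assumed names and a toy compatibility list, not the project's implementation; it uses exhaustive search, whereas a real buffer would need an efficient matching algorithm.

```python
def max_matching(edges):
    """Maximum matching of an undirected compatibility graph by
    exhaustive search over edges (practical only for tiny buffers)."""
    edges = list(edges)

    def best(i, used):
        if i == len(edges):
            return []
        u, v = edges[i]
        skip = best(i + 1, used)          # leave edge i out
        if u not in used and v not in used:
            take = [(u, v)] + best(i + 1, used | {u, v})
            if len(take) > len(skip):
                return take
        return skip

    return best(0, frozenset())

# Packets 0..4; an edge means the two packets are compatible,
# i.e., they can be transmitted simultaneously with MPT.
compatible = [(0, 1), (0, 2), (1, 3), (2, 4)]
schedule = max_matching(compatible)       # finds two compatible pairs
# Each matched pair shares one slot; unmatched packets need one each.
slots = len(schedule) + (5 - 2 * len(schedule))
```

With the toy list above, two pairs can share slots, so the five packets go out in three slots instead of five.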


CHAPTER 1

INTRODUCTION


1.1 INTRODUCTION TO PROJECT

WIRELESS access networks have been more and more widely used in recent

years, since compared to the wired networks, wireless networks are easier to install and

use. Due to the tremendous practical interests, much research effort has been devoted to

wireless access networks and great improvements have been achieved in the physical

layer by adopting newer and faster signal processing techniques, for example, the data

rate in 802.11 wireless Local Area Networks (LANs) has increased from 1 Mbps in the early version of 802.11b to 54 Mbps in 802.11a. We have noted

that in addition to increasing the point-to-point capacity, new signal processing techniques

have also made other novel transmission schemes possible, which can greatly improve the

performance of wireless networks. In this paper, we study a novel Multiple-Input,

Multiple-Output (MIMO) technique called Multiple Packet Transmission (MPT), with

which the sender can send more than one packet to distinct users simultaneously.

Traditionally, in wireless networks, it is assumed that one device can send to only

one other device at a time. However, this restriction is no longer true if the sender has

more than one antenna. By processing the data according to the channel state, the sender

can make the data for one user appear as zero at other users such that it can send distinct

packets to distinct users simultaneously. We call it MPT and will explain the details of it

in Section 2. For now, we want to point out the profound impact of MPT technique on

wireless LANs. A wireless LAN is usually composed of an Access Point (AP), which is

connected to the wired network, and several users, which communicate with the AP

through wireless channels. In wireless LANs, the most common type of traffic is the

downlink traffic, i.e., from the AP to the users when the users are browsing the Internet

and downloading data. In today’s wireless LAN, the AP can send one packet to one user

at a time. However, if the AP has two antennas and if MPT is used, the AP can send two

packets to two users whenever possible, thus doubling the throughput of the downlink in

the ideal case.
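The nulling step described above (precoding so that one user's data appears as zero at the other user) can be illustrated with a toy two-antenna, two-user zero-forcing model. This is a hedged sketch under idealized assumptions: a perfectly known, noiseless 2x2 channel, with function and variable names invented for illustration rather than taken from the paper.

```python
def zero_forcing_transmit(h, s):
    """Send symbols s = (s1, s2) to two users through a known 2x2
    channel matrix h by precoding with the channel inverse, so each
    user receives only its own symbol (ideal, noiseless model)."""
    (a, b), (c, d) = h
    det = a * d - b * c
    if abs(det) < 1e-12:
        # Channel not invertible: these two users are not compatible,
        # so the AP would fall back to one packet at a time.
        raise ValueError("users not compatible for MPT")
    # Precoded antenna signals x = H^{-1} s
    x1 = (d * s[0] - b * s[1]) / det
    x2 = (-c * s[0] + a * s[1]) / det
    # What each user observes: y = H x; the cross terms cancel exactly.
    y1 = a * x1 + b * x2   # user 1 sees s[0]
    y2 = c * x1 + d * x2   # user 2 sees s[1]
    return y1, y2

y1, y2 = zero_forcing_transmit(((1.0, 0.5), (0.3, 1.2)), (2.0, -1.0))
```

Because y = H(H^{-1}s) = s, each receiver recovers its own symbol with zero interference from the other packet, which is what makes the two packets "compatible" in the MPT sense.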


1.2 ORGANIZATION PROFILE

Software Solutions is an IT solution provider for a dynamic environment where

business and technology strategies converge. Their approach focuses on new ways of

business combining IT innovation and adoption while also leveraging an organization’s

current IT assets. They work with large global corporations and new products or services

to implement prudent business and technology strategies in today’s environment.

Effectively address the business issues our customers face today.

Generate new opportunities that will help them stay ahead in the future.

THIS APPROACH RESTS ON:

A strategy where we architect, integrate and manage technology services and

solutions - we call it AIM for success.

A robust offshore development methodology and reduced demand on customer

resources.

A focus on the use of reusable frameworks to provide cost and time

benefits.

They combine the best people, processes and technology to achieve excellent

results - consistently. We offer customers the advantages of:

SPEED:

They understand the importance of timing, of getting there before the competition.

A rich portfolio of reusable, modular frameworks helps jump-start projects. Tried and

tested methodology ensures that we follow a predictable, low-risk path to achieve

results. Our track record is testimony to complex projects delivered within and even

before schedule.

EXPERTISE:

Our teams combine cutting edge technology skills with rich domain expertise.

What’s equally important - they share a strong customer orientation that means they

actually start by listening to the customer. They’re focused on coming up with solutions

that serve customer requirements today and anticipate future needs.


A FULL SERVICE PORTFOLIO:

They offer customers the advantage of being able to architect, integrate and

manage technology services. This means that they can rely on one, fully accountable

source instead of trying to integrate disparate multi-vendor solutions.

1.3 PURPOSE OF THE PROJECT

This project proposes MPT, that is, Multiple Packet Transmission: sending

packets to multiple systems simultaneously. The access point can send two packets to two

users whenever possible, thus doubling the throughput of the downlink in the ideal case.

The project considers the case when the data rates of the users are the same. When the

data rates are the same, all data packets take the same amount of time to

transmit, which will be referred to as a time slot. We make no assumption about the

compatibilities of users and treat them as arbitrary. The project gives analytical bounds for

the maximum allowable arrival rate, which measures the speedup of the downlink.

1.4 PROBLEMS IN EXISTING SYSTEM

The existing system was based on parallel computing, which does not reduce the

downloading time. Downloading or broadcasting a file takes considerable time. Each and

every system has a different response time, and therefore it is difficult to predict the

download time. The existing method sends files in a store-and-forward multicasting format

from one system to another. This type is also not an efficient one.

Disadvantages:

1. Takes a lot of time to send a single file.

2. Ineffective over wireless connections.

3. Does not consider user convenience.


CHAPTER 2

LITERATURE

SURVEY


2.1 LITERATURE SURVEY

WIRELESS access networks have been more and more widely used in recent

years, since compared to the wired networks, wireless networks are easier to install and

use. Due to the tremendous practical interests, much research effort has been devoted to

wireless access networks and great improvements have been achieved in the physical

layer by adopting newer and faster signal processing techniques, for example, the data

rate in 802.11 wireless Local Area Network (LAN) has increased from 1 Mbps in the

early version of 802.11b to 54 Mbps in 802.11a [8]. We have noted that in addition to

increasing the point-to-point capacity, new signal processing techniques have also made

other novel transmission schemes possible, which can greatly improve the performance of

wireless networks. In this paper, we study a novel Multiple-Input, Multiple-Output

(MIMO) technique called Multiple Packet Transmission (MPT) [1], with which the sender

can send more than one packet to distinct users simultaneously. Traditionally, in wireless

networks, it is assumed that one device can send to only one other device at a

time. However, this restriction is no longer true if the sender has more than one antenna.

By processing the data according to the channel state, the sender can make the data for

one user appear as zero at other users such that it can send distinct packets to distinct

users simultaneously.

We call it MPT and will explain the details of it in Section 2. For now, we want to

point out the profound impact of MPT technique on wireless LANs. A wireless LAN is

usually composed of an Access Point (AP), which is connected to the wired network, and

several users, which communicate with the AP through wireless channels. In wireless

LANs, the most common type of traffic is the downlink traffic, i.e., from the AP to the

users when the users are browsing the Internet and downloading data. In today’s wireless

LAN, the AP can send one packet to one user at a time. However, if the AP has two

antennas and if MPT is used, the AP can send two packets to two users whenever

possible, thus doubling the throughput of the downlink in the ideal case.

MPT is feasible for the downlink because it is not difficult to equip the AP with

two antennas, in fact, many wireless routers today have two antennas. Another advantage


of MPT that makes it very commercially appealing is that although MPT needs new

hardware at the sender, it does not need any new hardware at the receiver. This means

that to use MPT in a wireless LAN, we can simply replace the AP and upgrade software

protocols in the user devices without having to change their wireless cards and, thus,

incurring minimum cost. In this paper, we study problems related to MPT and provide our

solutions. We formalize the problem of sending out buffered packets in minimum time as

finding a maximum matching in a graph. Since maximum matching algorithms are

relatively complex and may not meet the speed of real-time applications, we consider

using approximation algorithms and present an algorithm that finds a matching with size

at least 3/4 of the size of the maximum matching in O(|E|) time, where |E| is the number

of edges in the graph. (The underlying paper is by Z. Zhang, Y. Yang, and M. Zhao,

IEEE Transactions on Computers, vol. 58, no. 5, May 2009, DOI 10.1109/TC.2008.191.)

We then study the performance of a wireless LAN enhanced with MPT and give

analytical bounds for maximum allowable arrival rate.

We also use an analytical model and simulations to study the average packet

delay.
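A simple baseline for the approximation idea mentioned above is greedy maximal matching: scan the edges once and keep any edge whose endpoints are both still free. A maximal matching built this way is guaranteed to be at least half the size of a maximum matching; the paper's O(|E|) algorithm improves the guarantee to 3/4 and is not reproduced here. The Python sketch below shows only the greedy baseline, with an assumed edge-list representation.

```python
def greedy_maximal_matching(edges):
    """One O(|E|) pass over the edges: keep an edge if both of its
    endpoints are still unmatched. The result is a maximal matching,
    guaranteed to be at least 1/2 the size of a maximum matching."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path graph 0-1-2-3-4: greedy picks (0, 1), skips (1, 2), picks
# (2, 3), skips (3, 4) -- here matching the maximum size of 2.
pairs = greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (3, 4)])
```

The single pass makes this fast enough for real-time scheduling; the price is the weaker 1/2 approximation guarantee compared with the paper's algorithm.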

Enhancing wireless LANs with MPT requires the Media Access Control (MAC)

layer to have more knowledge about the states of the physical layer and is therefore a

form of cross-layer design. In recent years, cross-layer design in wireless

networks has attracted much attention because of the great benefits in breaking the

layer boundary. For example, Liu et al. [5] and Kawadia and Kumar [6] considered

packet

scheduling and transmission power control in cross-layer wireless networks.

However, to the best of our knowledge, packet scheduling in wireless networks in the

context of MPT has not been studied before. Lang et al. [3] and Dimic et al. [4] have

considered Multiple Packet Reception (MPR), which means the receiver can receive more


than one packet from distinct users simultaneously. A later section gives performance

analysis, and Section 6 concludes this paper. Finally, the Appendix contains some mathematical derivations and

discussions on the user compatibility probability in wireless LANs.

CHAPTER 3

System Analysis


3.1 INTRODUCTION

3.1 Software Development Life Cycle:

There are various software development approaches defined and designed

which are used/employed during the development process of software; these approaches are

also referred to as "Software Development Process Models". Each process model follows a

particular life cycle in order to ensure success in process of software development.

Fig. 1 Software Development Life Cycle

Requirements

Business requirements are gathered in this phase. This phase is the main focus of

the project managers and stakeholders. Meetings with managers, stakeholders and users

are held in order to determine the requirements. Who is going to use the system? How

will they use the system?  What data should be input into the system?  What data should

be output by the system?  These are general questions that get answered during a

requirements gathering phase.  This produces a nice big list of functionality that the

system should provide, which describes functions the system should perform, business


logic that processes data, what data is stored and used by the system, and how the user

interface should work.  The overall result is the system as a whole and how it performs,

not how it is actually going to do it.

Design

The software system design is produced from the results of the requirements

phase.  Architects have the ball in their court during this phase and this is the phase in

which their focus lies.  This is where the details on how the system will work is

produced.  Architecture, including hardware and software, communication, software

design (UML is produced here) are all part of the deliverables of a design phase.

Implementation

Code is produced from the deliverables of the design phase during

implementation, and this is the longest phase of the software development life cycle. 

For a developer, this is the main focus of the life cycle because this is where the code

is produced. Implementation may overlap with both the design and testing phases. Many

tools exist (CASE tools) to automate the production of code using information

gathered and produced during the design phase.

Testing

During testing, the implementation is tested against the requirements to make sure

that the product is actually solving the needs addressed and gathered during the

requirements phase.  Unit tests and system/acceptance tests are done during this phase. 

Unit tests act on a specific component of the system, while system tests act on the system

as a whole.

So in a nutshell, that is a very basic overview of the general software development

life cycle model. Now let's delve into some of the traditional and widely used variations.

V-Model

An SDLC (System Development Life Cycle) approach that is sometimes referred to as

the ‘V-Model’. Here all levels of requirements and specifications, be they the

development of core software, customized software or implementation of configurable

items, follow the model shown in the following illustration.


This approach associates various levels of requirements, specifications and

development items to be properly documented and the response to each specification

level is an equivalent testing (validation) and documentation of testing to ensure the

quality of the software that is either developed or implemented.

Verification Phases

1. Requirements analysis:

In this phase, the requirements of the proposed system are collected by analyzing

the needs of the user(s). This phase is concerned about establishing what the ideal system

has to perform. The user requirements document will typically describe the system’s

functional, physical, interface, performance, data, security requirements etc as expected

by the user. It is one which the business analysts use to communicate their understanding

of the system back to the users. The users carefully review this document as this

document would serve as the guideline for the system.

2. System Design:

System engineers analyze and understand the business of the proposed system by

studying the user requirements document. They figure out possibilities and techniques by

which the user requirements can be implemented. If any of the requirements are not

feasible, the user is informed of the issue. A resolution is found and the user requirement

document is edited accordingly. The software specification document which serves as a

blueprint for the development phase is generated. This document contains the general

system organization, menu structures, data structures etc. It may also hold example

business scenarios, sample windows, reports for the better understanding.

Other technical documentation like entity diagrams and the data dictionary will also

be produced in this phase. The document for system testing is prepared in this phase.

3. Architecture Design:

This phase can also be called high-level design. The baseline in selecting the

architecture is that it should realize all the requirements; the design typically consists of the list of modules,

brief functionality of each module, their interface relationships, dependencies, database

tables, architecture diagrams, technology details etc. The integration testing design is

carried out in this phase.


4. Module Design:

This phase can also be called low-level design. The designed system is broken

up in to smaller units or modules and each of them is explained so that the programmer

can start coding directly. The low level design document or program specifications will

contain a detailed functional logic of the module, in pseudo code - database tables, with

all elements, including their type and size - all interface details with complete API

references- all dependency issues- error message listings- complete input and outputs for

a module. The unit test design is developed in this stage.

3.2 STUDY OF THE SYSTEM

For flexibility of use, the interface has been developed with graphics concepts in

mind, accessed through a browser interface. The GUIs at the top level have been

categorized as follows:

1. Administrative User Interface Design

2. The Operational and Generic User Interface Design

The administrative user interface concentrates on the consistent information that is

practically, part of the organizational activities and which needs proper authentication for

the data collection. The Interface helps the administration with all the transactional

states like data insertion, data deletion, and data updating along with executive data

search capabilities.

The operational and generic user interface helps the users of the system in

transactions through the existing data and required services. The operational user

interface also helps the ordinary users in managing their own information in a

customized manner as per the assisted flexibilities.

Software Requirements

WINDOWS OS (XP)

Visual Studio .Net 2005 Enterprise Edition

Internet Information Server 6.0 (IIS)

Visual Studio .Net Framework (Minimal for Deployment) version 3.5

Microsoft Visual C# .Net

3.3 HARDWARE SPECIFICATIONS

Hardware Requirements:


PIV 2.8 GHz Processor and Above

RAM 1 GB and Above

HDD 40 GB Hard Disk Space and Above

3.4 PROPOSED SYSTEM

This project proposes MPT, that is, Multiple Packet Transmission: sending

packets to multiple systems simultaneously. The access point can send two packets to two

users whenever possible, thus doubling the throughput of the downlink in the ideal case.

The project considers the case when the data rates of the users are the same. When the data

rates are the same, all data packets take the same amount of time to

transmit, which will be referred to as a time slot. We make no assumption about the

compatibilities of users and treat them as arbitrary. The project gives analytical bounds for

the maximum allowable arrival rate, which measures the speedup of the downlink.

Advantages:

1. By reducing actual file transfer time, the download time for each user can be minimized.

2. Packet delay can be greatly reduced even with a very small compatibility probability.

3. The maximum arrival rate increases significantly.

3.5 INPUT AND OUTPUT

The major inputs and outputs and major functions of the system are as follows:

Module

Server Module

Given Input - Work group name

Expected Output - Available systems in the network

Client Module

Given Input – Select the file receiving path

Expected Output – The users can get the respective source files from server.

3.6 PROCESS MODEL USED WITH JUSTIFICATION

ACCESS CONTROL FOR DATA WHICH REQUIRE USER AUTHENTICATION

The following commands specify access control identifiers and they are typically

used to authorize and authenticate the user (command codes are shown in parentheses)


USER NAME (USER)

The user identification is that which is required by the server for access to its file

system. This command will normally be the first command transmitted by the user after

the control connections are made (some servers may require this).

PASSWORD (PASS)

This command must be immediately preceded by the user name command, and,

for some sites, completes the user's identification for access control. Since password

information is quite sensitive, it is desirable in general to "mask" it or suppress type out.
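The USER/PASS exchange described above can be sketched as a small command handler. This is a hedged illustration of the control-connection dialogue: the credential table and function are stand-ins, not the project's actual authentication code, though the 331/230/530 reply codes follow FTP convention.

```python
CREDENTIALS = {"madhukar": "secret"}   # illustrative account only

def handle_login(lines):
    """Process USER/PASS command lines from a control connection
    and return the numeric replies a server might send."""
    pending_user, replies = None, []
    for line in lines:
        verb, _, arg = line.partition(" ")
        if verb == "USER":
            pending_user = arg
            replies.append("331 User name okay, need password.")
        elif verb == "PASS":
            # PASS must immediately follow a USER command.
            if pending_user and CREDENTIALS.get(pending_user) == arg:
                replies.append("230 User logged in.")
            else:
                replies.append("530 Not logged in.")
            pending_user = None
    return replies

ok = handle_login(["USER madhukar", "PASS secret"])
bad = handle_login(["USER madhukar", "PASS wrong"])
```

Resetting `pending_user` after every PASS attempt enforces the "must be immediately preceded by the user name command" rule from the text.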

3.7 FEASIBILITY REPORT

Preliminary investigation examines project feasibility, the likelihood that the system

will be useful to the organization. The main objective of the feasibility study is to test the

Technical, Operational and Economical feasibility of adding new modules and

debugging the old running system. Any system is feasible if there are unlimited resources and

infinite time. There are three aspects in the feasibility study portion of the preliminary

investigation:

Technical Feasibility

Operational Feasibility

Economical Feasibility

3.7.1 TECHNICAL FEASIBILITY

The technical issue usually raised during the feasibility stage of the investigation

includes the following:

Does the necessary technology exist to do what is suggested?

Does the proposed equipment have the technical capacity to hold the data required to

use the new system?

Will the proposed system provide adequate response to inquiries, regardless of the

number or location of users?

Can the system be upgraded if developed?

Are there technical guarantees of accuracy, reliability, ease of access and data

security?

Earlier no system existed to cater to the needs of ‘Secure Infrastructure

Implementation System’. The current system developed is technically feasible. It is a web

based user interface for audit workflow at NIC-CSD. Thus it provides an easy access to


the users. The database’s purpose is to create, establish and maintain a workflow among

various entities in order to facilitate all concerned users in their various capacities or

roles. Permission to the users would be granted based on the roles specified.

Therefore, it provides the technical guarantee of accuracy, reliability and security.

The software and hardware requirements for the development of this project are not many and

are already available in-house at NIC or are available free as open source. The work

for the project is done with the current equipment and existing software technology.

Necessary bandwidth exists for providing a fast feedback to the users irrespective of the

number of users using the system.

3.7.2. OPERATIONAL FEASIBILITY

Proposed projects are beneficial only if they can be turned into information

systems that will meet the organization’s operating requirements. Operational feasibility

aspects of the project are to be taken as an important part of the project implementation.

Some of the important issues raised to test the operational feasibility of a project

include the following:

Is there sufficient support for the management from the users?

Will the system be used and work properly if it is being developed and implemented?

Will there be any resistance from the user that will undermine the possible application

benefits?

This system is targeted to be in accordance with the above-mentioned issues.

Beforehand, the management issues and user requirements have been taken into

consideration. So there is no question of resistance from the users that can undermine the

possible application benefits. The well-planned design would ensure the optimal

utilization of the computer resources and would help in the improvement of performance

status.

3.7.3 ECONOMICAL FEASIBILITY

A system that can be developed technically and that will be used if installed must still

be a good investment for the organization. In economical feasibility, the development

cost of creating the system is evaluated against the ultimate benefit derived from the new

system. Financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional hardware or

software. Since the interface for this system is developed using the existing resources and


technologies available at NIC, there is nominal expenditure, and economical feasibility

is certain.

3.8. SOFTWARE REQUIREMENT SPECIFICATIONS

INTRODUCTION

Purpose: The main purpose for preparing this document is to give a general insight into

the analysis and requirements of the existing system or situation and for determining the

operating characteristics of the system.

Scope: This Document plays a vital role in the development life cycle (SDLC) and it

describes the complete requirements of the system. It is meant for use by the developers

and will be the basis during the testing phase. Any changes made to the requirements in the

future will have to go through a formal change approval process.

DEVELOPERS RESPONSIBILITIES OVERVIEW:

The developer is responsible for:

Developing the system, which meets the SRS and solves all the requirements of the

system.

Demonstrating the system and installing the system at client's location after the

acceptance testing is successful.

Submitting the required user manual describing the system interfaces to work on it

and also the documents of the system.

Conducting any user training that might be needed for using the system.

Maintaining the system for a period of one year after installation.

3.8.1 FUNCTIONAL REQUIREMENTS

OUTPUT DESIGN

Outputs from computer systems are required primarily to communicate the results of processing to users. They are also used to provide a permanent copy of the results for later consultation. The various types of outputs in general are:

External outputs, whose destination is outside the organization.

Internal outputs, whose destination is within the organization and which are the user's main interface with the computer.


Operational outputs, whose use is purely within the computer department.

Interactive outputs, which involve the user in communicating directly with the system.

OUTPUT DEFINITION

The outputs should be defined in terms of the following points:

Type of the output

Content of the output

Format of the output

Location of the output

Frequency of the output

Volume of the output

Sequence of the output

It is not always desirable to print or display data exactly as it is held on a computer. It should be decided which form of output is the most suitable.

INPUT DESIGN

Input design is a part of overall system design. The main objective during the

input design is as given below:

To produce a cost-effective method of input.

To achieve the highest possible level of accuracy.

To ensure that the input is acceptable and understood by the user.

INPUT STAGES:

The main input stages can be listed as below:

Data recording

Data transcription

Data conversion

Data verification

Data control

Data transmission

Data validation

Data correction

INPUT TYPES:


It is necessary to determine the various types of inputs. Inputs can be categorized as follows:

External inputs, which are prime inputs for the system.

Internal inputs, which are user communications with the system.

Operational inputs, which are the computer department's communications to the system.

INPUT MEDIA:

At this stage a choice has to be made about the input media. To decide on the input media, consideration has to be given to:

Type of input

Flexibility of format

Speed

Accuracy

Verification methods

Rejection rates

Ease of correction

Storage and handling requirements

Security

Easy to use

Portability

Keeping in view the above description of the input types and input media, it can be said that most of the inputs are internal and interactive. As input data is keyed in directly by the user, the keyboard can be considered the most suitable input device.

ERROR AVOIDANCE

At this stage, care is taken to ensure that the input data remains accurate from the stage at which it is recorded up to the stage at which it is accepted by the system. This can be achieved only by means of careful control each time the data is handled.

ERROR DETECTION

Even though every effort is made to avoid the occurrence of errors, a small proportion of errors is still likely to occur. These errors can be discovered by using validations to check the input data.

DATA VALIDATION


Procedures are designed to detect errors in data at a lower level of detail. Data validations have been included in the system in almost every area where there is a possibility for the user to commit errors. The system will not accept invalid data: whenever invalid data is keyed in, the system immediately prompts the user, who has to key in the data again, and the system accepts the data only if it is correct. Validations have been included wherever necessary.

The system is designed to be user-friendly; in other words, it has been designed to communicate effectively with the user, using popup menus.
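As a rough illustration of this kind of field-level validation, the following C# sketch rejects invalid data and prompts the user to key it in again. The user-id rule (3-16 alphanumeric characters) and the method names are assumptions made for illustration, not taken from the project's code.

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical field validator illustrating the validation approach
// described above: invalid data is rejected and the user is re-prompted.
static class InputValidator
{
    // Assumed rule for illustration: a user id is 3-16 alphanumerics.
    public static bool IsValidUserId(string input)
    {
        return !string.IsNullOrEmpty(input) &&
               Regex.IsMatch(input, "^[A-Za-z0-9]{3,16}$");
    }

    // Re-prompts until valid data is keyed in; the system accepts the
    // data only if it is correct.
    public static string PromptUntilValid(Func<string> readLine)
    {
        while (true)
        {
            string value = readLine();
            if (IsValidUserId(value)) return value;
            Console.WriteLine("Invalid input, please re-enter.");
        }
    }
}

class ValidationDemo
{
    static void Main()
    {
        Console.WriteLine(InputValidator.IsValidUserId("user01")); // prints: True
    }
}
```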

3.8.2. PERFORMANCE REQUIREMENTS

Performance is measured in terms of the output provided by the application. Requirement specification plays an important part in the analysis of a system: only when the requirement specifications are properly given is it possible to design a system that will fit into the required environment. It rests largely with the users of the existing system to give the requirement specifications, because they are the people who will finally use the system. The requirements have to be known during the initial stages so that the system can be designed according to them. It is very difficult to change a system once it has been designed; on the other hand, designing a system that does not cater to the requirements of the users is of no use.

The requirement specification for any system can be broadly stated as given below:

The system should be able to interface with the existing system

The system should be accurate

The system should be better than the existing system


CHAPTER 4

System Design


4.1 INTRODUCTION

Software design sits at the technical kernel of the software engineering process and is applied regardless of the development paradigm and area of application. Design is the first step in the development phase for any engineered product or system. The designer's goal is to produce a model or representation of an entity that will later be built. Once the system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code, and test) required to build and verify software.

The importance of design can be stated with a single word: quality. Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality, and it is the only way we can accurately translate a customer's view into a finished software product or system. Software design serves as the foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system, one that will be difficult to test and whose quality cannot be assessed until the last stage. During design, progressive refinements of data structure, program structure, and procedural detail are developed, reviewed, and documented. System design can be viewed from either a technical or a project-management perspective. From the technical point of view, design comprises four activities: architectural design, data structure design, interface design, and procedural design.

4.2 SYSTEM WORK FLOW

NORMALIZATION

It is a process of converting a relation to a standard form. The process is used to handle the problems that can arise due to data redundancy (i.e., repetition of data in the database), to maintain data integrity, and to handle the problems that can arise from insertion, update, and deletion anomalies. Decomposition is the process of splitting relations into multiple relations to eliminate anomalies and maintain data integrity. To do this we use normal forms, or rules for structuring relations.

Insertion anomaly: Inability to add data to the database due to absence of other data.

Deletion anomaly: Unintended loss of data due to deletion of other data.

Update anomaly: Data inconsistency resulting from data redundancy and partial update.

Normal Forms: These are the rules for structuring relations that eliminate anomalies.

FIRST NORMAL FORM:

A relation is said to be in first normal form if the values in the relation are atomic

for every attribute in the relation. By this we mean simply that no attribute value can be a

set of values or, as it is sometimes expressed, a repeating group.

SECOND NORMAL FORM:

A relation is said to be in second normal form if it is in first normal form and it satisfies any one of the following rules:

1) The primary key is not a composite primary key.

2) No non-key attributes are present.

3) Every non-key attribute is fully functionally dependent on the full set of primary key attributes.

THIRD NORMAL FORM:

A relation is said to be in third normal form if there exist no transitive dependencies.

Transitive Dependency: If two non-key attributes depend on each other as well as on the primary key, then they are said to be transitively dependent.

The above normalization principles were applied to decompose the data into multiple tables, thereby keeping the data in a consistent state.
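As a small illustration of the decomposition described above, removing a transitive dependency lets an update touch a single row. The Order/Customer relations below are invented examples for illustration; they are not the project's actual tables.

```csharp
using System;
using System.Collections.Generic;

// Invented example (not the project's schema): a flat relation
// Order(OrderId, CustomerId, City) has the transitive dependency
// OrderId -> CustomerId -> City. Decomposing it stores the city once.
class Customer
{
    public int CustomerId;
    public string City;
    public Customer(int id, string city) { CustomerId = id; City = city; }
}

class Order
{
    public int OrderId;
    public int CustomerId;   // holds only the key of Customer, never the city
    public Order(int id, int custId) { OrderId = id; CustomerId = custId; }
}

class NormalizationDemo
{
    static void Main()
    {
        var customers = new Dictionary<int, Customer> { { 1, new Customer(1, "Warangal") } };
        var orders = new List<Order> { new Order(10, 1), new Order(11, 1) };

        // Updating the city touches one row, however many orders exist,
        // so the update anomaly is eliminated:
        customers[1].City = "Hyderabad";
        Console.WriteLine(customers[orders[0].CustomerId].City); // prints: Hyderabad
    }
}
```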

E – R DIAGRAMS

The relations within the system are structured through a conceptual E-R diagram, which specifies not only the entities that exist but also the standard relationships through which the system operates and the cardinalities necessary for the system state to continue.

The entity-relationship diagram (ERD) depicts the relationships between the data objects. The ERD is the notation used to conduct the data-modeling activity; the attributes of each data object noted in the ERD can be described using data object descriptions.

The primary components identified by the ERD are data objects, relationships, attributes, and various types of indicators. The primary purpose of the ERD is to represent data objects and their relationships.

DATA FLOW DIAGRAMS

A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. Physical data flow diagrams show the actual implementation and movement of data between people, departments, and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon, or Gane and Sarson. Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number used for identification purposes. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes in the first-level DFD.

The idea behind exploding a process into more processes is that the understanding at one level of detail is expanded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process. A DFD, also known as a "bubble chart", has the purpose of clarifying system requirements and identifying the major transformations that will become programs in system design, so it is the starting point of design down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.

DFD SYMBOLS:


In the DFD, there are four symbols:

1. A square defines a source (originator) or destination of system data.

2. An arrow identifies data flow; it is the pipeline through which information flows.

3. A circle, or "bubble", represents a process that transforms incoming data flows into outgoing data flows.

4. An open rectangle is a data store: data at rest, or a temporary repository of data.

CONSTRUCTING A DFD:

Several rules of thumb are used in drawing DFDs:

1. Processes should be named and numbered for easy reference. Each name should be representative of the process.

2. The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source; an alternative is to repeat the source symbol as a destination. Since it is used more than once in the DFD, it is marked with a short diagonal.

3. When a process is exploded into lower-level details, the resulting processes are numbered.

4. The names of data stores and destinations are written in capital letters. Process and data-flow names have the first letter of each word capitalized.


SALIENT FEATURES OF DFDs

1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.

2. The DFD does not indicate the time factor involved in any process, i.e. whether the data flow takes place daily, weekly, monthly, or yearly.

3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS

1. Current Physical

2. Current Logical

3. New Logical

4. New Physical

CURRENT PHYSICAL:

In the current physical DFD, process labels include the names of people or their positions, or the names of computer systems, that might provide some of the overall system processing; the label also identifies the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which data are stored, such as file folders, computer files, business forms, or computer tapes.

CURRENT LOGICAL:

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of their actual physical form.

NEW LOGICAL:

This is exactly like the current logical model if the user is completely happy with the functionality of the current system but has problems with how it is implemented. Typically, though, the new logical model will differ from the current logical model by having additional functions, absolute functions removed, and inefficient flows reorganized.

NEW PHYSICAL:


The new physical represents only the physical implementation of the new system.

DATA FLOW

1) A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; the latter is usually indicated, however, by two separate arrows, since the two happen at different times.

2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores, or sinks to a common location.

3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data flow to the beginning process.

4) A data flow to a data store means update (delete or change).

5) A data flow from a data store means retrieve or use.

DFD Diagrams:

Context Level (0th Level) DFD:

Fig. 1. Context-level DFD diagram

Login DFD:

Fig. 2. Login DFD diagram

Sender IP Track DFD:

Fig. 3. Sender IP Track DFD diagram

Receiver IP Track DFD:

Fig. 4. Receiver IP Track DFD diagram

4.3 UML DIAGRAMS

The Unified Modeling Language (UML) is used to specify, visualize, modify, construct, and document the artifacts of an object-oriented, software-intensive system under development. The UML uses mostly graphical notations to express the design of software projects. UML offers a standard way to visualize a system's architectural blueprints, including elements such as:

actors

business processes

(logical) components

activities

programming language statements

database schemas, and

reusable software components.


Use Case Diagrams:

Server Use Case Diagram:

Fig. 1. Server use case diagram (actor: Server; use cases: Login, Sender Home Page, Display the Peer List, Send Data, Logout; Send Data is extended by Sequence File Transfer and Simultaneous File Transfer)

Client Use Case Diagram:

Fig. 2. Client use case diagram (actor: Client; use cases: Login, Client Home Page, Start Server, Connect Server, Receive the Files, Logout)

Sequence Diagrams:

Login Sequence Diagram:

Fig. 3. Login sequence diagram (participants: User, Login Form, Business Logic Layer, Data Access Layer, Database; messages: 1 Open Form, 2 Enter Username & Pwd, 3 Connection Class, 4 Request, 5 Response, 6 Result, 7 Show Result)
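The three-tier login flow shown in the login sequence diagram above (form, business logic layer, data access layer, database) can be sketched roughly as follows. This is a simplified illustration: the in-memory dictionary stands in for the real database and data access layer, and the class names, user name, and password are invented.

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of the three-tier login flow: form -> business
// logic layer -> data access layer -> database. The dictionary stands
// in for the real database; credentials here are invented.
class DataAccessLayer
{
    private readonly Dictionary<string, string> users =
        new Dictionary<string, string> { { "madhukar", "secret" } };

    public bool CredentialsMatch(string uid, string pwd)
    {
        string stored;
        return users.TryGetValue(uid, out stored) && stored == pwd;
    }
}

class BusinessLogicLayer
{
    private readonly DataAccessLayer dal = new DataAccessLayer();

    // Corresponds to messages 3-6 of the diagram: connect, request,
    // response, result.
    public string Login(string uid, string pwd)
    {
        return dal.CredentialsMatch(uid, pwd) ? "Login successful" : "Invalid";
    }
}

class LoginForm
{
    static void Main()
    {
        var bll = new BusinessLogicLayer();
        Console.WriteLine(bll.Login("madhukar", "secret")); // prints: Login successful
    }
}
```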

Receiver Sequence Diagram:

Fig. 4. Receiver sequence diagram (lifelines: sender, login, Receive Secret Key, workgroup, Send Data, exit; messages: 1 enter uid, pwd; 2 invalid; 3 get secret key; 4 display the active peer list; 5 send the data based on that peer list; 6 exit)

Sender Sequence Diagram:

Fig. 5. Sender sequence diagram (lifelines: IPTraceBack, login, Send Secret Key, workgroup, Receive Data, exit; messages: 1 enter uid, pwd; 2 invalid; 3 send secret key; 4 display the active peer list; 5 receive the data based on that peer list; 6 exit)

DATA DICTIONARY

After carefully understanding the requirements of the client, the entire data storage requirement is divided into tables. The tables below are normalized to avoid any anomalies during the course of data entry.

Database tables:

Collaboration Diagrams:

Login Collaboration Diagram:

Fig. 6. Login collaboration diagram (participants: User, Login Form, Business Logic Layer, Data Access Layer, Database; messages: 1 Open Form, 2 Enter Username & Pwd, 3 Connection Class, 4 Request, 5 Response, 6 Result, 7 Show Result)

Receiver Collaboration Diagram:

Fig. 7. Receiver collaboration diagram (participants: sender, login, Receive Secret Key, workgroup, Send Data, exit; messages: 1 enter uid, pwd; 2 invalid; 3 get secret key; 4 display the active peer list; 5 send the data based on that peer list; 6 exit)

Sender Collaboration Diagram:

Fig. 8. Sender collaboration diagram (participants: IPTraceBack, login, Send Secret Key, workgroup, Receive Data, exit; messages: 1 enter uid, pwd; 2 invalid; 3 send secret key; 4 display the active peer list; 5 receive the data based on that peer list; 6 exit)

CHAPTER 5

Implementation

5.1 INTRODUCTION TO .NET FRAMEWORK

The Microsoft .NET Framework is a software technology that is available

with several Microsoft Windows operating systems. It includes a large library of pre-

coded solutions to common programming problems and a virtual machine that manages


the execution of programs written specifically for the framework. The .NET Framework

is a key Microsoft offering and is intended to be used by most new applications created

for the Windows platform.

The pre-coded solutions that form the framework's Base Class Library cover a large

range of programming needs in a number of areas, including user interface, data access,

database connectivity, cryptography, web application development, numeric algorithms,

and network communications. The class library is used by programmers, who combine it

with their own code to produce applications.

Programs written for the .NET Framework execute in a software environment that

manages the program's runtime requirements. Also part of the .NET Framework, this

runtime environment is known as the Common Language Runtime (CLR). The CLR

provides the appearance of an application virtual machine so that programmers need not

consider the capabilities of the specific CPU that will execute the program. The CLR also

provides other important services such as security, memory management, and exception

handling. The class library and the CLR together compose the .NET Framework.

PRINCIPAL DESIGN FEATURES

Interoperability

Because interaction between new and older applications is commonly required,

the .NET Framework provides means to access functionality that is implemented in

programs that execute outside the .NET environment. Access to COM components is

provided in the System.Runtime.InteropServices and System.EnterpriseServices

namespaces of the framework; access to other functionality is provided using the

P/Invoke feature.
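As a minimal, Windows-only sketch of the P/Invoke feature mentioned above, the DllImport attribute from System.Runtime.InteropServices marshals a call out to unmanaged code in kernel32.dll, outside the .NET environment:

```csharp
using System;
using System.Runtime.InteropServices;

// Minimal P/Invoke sketch (Windows-only): DllImport declares a managed
// entry point for an unmanaged function exported by kernel32.dll.
class InteropDemo
{
    [DllImport("kernel32.dll")]
    public static extern uint GetCurrentProcessId();

    static void Main()
    {
        Console.WriteLine("Managed code got pid " + GetCurrentProcessId()
                          + " from unmanaged kernel32.dll");
    }
}
```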

Common Runtime Engine 

The Common Language Runtime (CLR) is the virtual machine component of

the .NET framework. All .NET programs execute under the supervision of the CLR,

guaranteeing certain properties and behaviors in the areas of memory management,

security, and exception handling.

Base Class Library 

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a

library of functionality available to all languages using the .NET Framework. The BCL

provides classes which encapsulate a number of common functions, including file reading

and writing, graphic rendering, database interaction and XML document manipulation.


Simplified Deployment 

Installation of computer software must be carefully managed to ensure that it does

not interfere with previously installed software, and that it conforms to security

requirements. The .NET framework includes design features and tools that help address

these requirements.

Security

The design is meant to address some of the vulnerabilities, such as buffer

overflows, that have been exploited by malicious software. Additionally, .NET provides a

common security model for all applications.

Portability 

The design of the .NET Framework allows it to theoretically be platform agnostic,

and thus cross-platform compatible. That is, a program written to use the framework

should run without change on any type of system for which the framework is

implemented. Microsoft's commercial implementations of the framework cover

Windows, Windows CE, and the Xbox 360. In addition, Microsoft submits the

specifications for the Common Language Infrastructure (which includes the core class

libraries, Common Type System, and the Common Intermediate Language), the C#

language, and the C++/CLI language to both ECMA and the ISO, making them available

as open standards. This makes it possible for third parties to create compatible

implementations of the framework and its languages on other platforms.


Architecture

Fig. 1. Visual overview of the Common Language Infrastructure (CLI)

The core aspects of the .NET framework lie within the Common Language

Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral platform

for application development and execution, including functions for exception handling,

garbage collection, security, and interoperability. Microsoft's implementation of the CLI

is called the Common Language Runtime or CLR.


Assemblies

The intermediate CIL code is housed in .NET assemblies. As mandated by

specification, assemblies are stored in the Portable Executable (PE) format, common on

the Windows platform for all DLL and EXE files. The assembly consists of one or more

files, one of which must contain the manifest, which has the metadata for the assembly.

The complete name of an assembly (not to be confused with the filename on disk)

contains its simple text name, version number, culture, and public key token. The public

key token is a unique hash generated when the assembly is compiled, thus two assemblies

with the same public key token are guaranteed to be identical from the point of view of

the framework. A private key can also be specified known only to the creator of the

assembly and can be used for strong naming and to guarantee that the assembly is from

the same author when a new version of the assembly is compiled (required to add an

assembly to the Global Assembly Cache).

Metadata

All CIL is self-describing through .NET metadata. The CLR checks the metadata

to ensure that the correct method is called. Metadata is usually generated by language

compilers but developers can create their own metadata through custom attributes.

Metadata contains information about the assembly, and is also used to implement the

reflective programming capabilities of .NET Framework.
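A brief sketch of such a custom attribute follows; the attribute name, class name, and author value are invented for illustration. Metadata attached at compile time is read back at runtime through reflection:

```csharp
using System;

// Hypothetical custom attribute: extra metadata attached to a class at
// compile time and recovered at runtime via the reflection API.
[AttributeUsage(AttributeTargets.Class)]
class AuthorAttribute : Attribute
{
    private readonly string name;
    public AuthorAttribute(string name) { this.name = name; }
    public string Name { get { return name; } }
}

[Author("Madhukar")]
class PacketSender { }   // invented class name, for illustration only

class MetadataDemo
{
    static void Main()
    {
        var attr = (AuthorAttribute)Attribute.GetCustomAttribute(
            typeof(PacketSender), typeof(AuthorAttribute));
        Console.WriteLine(attr.Name); // prints: Madhukar
    }
}
```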

Security

.NET has its own security mechanism with two general features: Code Access

Security (CAS), and validation and verification. Code Access Security is based on

evidence that is associated with a specific assembly. Typically the evidence is the source

of the assembly (whether it is installed on the local machine or has been downloaded

from the intranet or Internet). Code Access Security uses evidence to determine the

permissions granted to the code. Other code can demand that calling code is granted a

specified permission. The demand causes the CLR to perform a call stack walk: every

assembly of each method in the call stack is checked for the required permission; if any

assembly is not granted the permission a security exception is thrown.

When an assembly is loaded the CLR performs various tests. Two such tests are

validation and verification. During validation the CLR checks that the assembly contains

valid metadata and CIL, and whether the internal tables are correct. Verification is not so

exact. The verification mechanism checks to see if the code does anything that is 'unsafe'.

The algorithm used is quite conservative; hence occasionally code that is 'safe' does not


pass. Unsafe code will only be executed if the assembly has the 'skip verification'

permission, which generally means code that is installed on the local machine.

.NET Framework uses appdomains as a mechanism for isolating code running in a

process. Appdomains can be created and code loaded into or unloaded from them

independent of other appdomains. This helps increase the fault tolerance of the

application, as faults or crashes in one appdomain do not affect the rest of the application.

Appdomains can also be configured independently with different security privileges. This

can help increase the security of the application by isolating potentially unsafe code. The

developer, however, has to split the application into sub domains; it is not done by the

CLR.
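A minimal sketch of working with appdomains is shown below. Note that AppDomain.CreateDomain belongs to the classic .NET Framework targeted by this project; it is not supported on later .NET Core runtimes.

```csharp
using System;

// Minimal appdomain sketch: a second domain is created, used, and
// unloaded independently of the default domain, so faults inside it
// cannot crash the main domain.
class AppDomainDemo
{
    static void Main()
    {
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        Console.WriteLine(sandbox.FriendlyName); // prints: Sandbox
        AppDomain.Unload(sandbox);
    }
}
```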

Class library

Namespaces in the BCL:

System
System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions

Microsoft .NET Framework includes a set of standard class libraries. The class

library is organized in a hierarchy of namespaces. Most of the built in APIs are part of

either System.* or Microsoft.* namespaces. It encapsulates a large number of common

functions, such as file reading and writing, graphic rendering, database interaction, and

XML document manipulation, among others. The .NET class libraries are available to

all .NET languages. The .NET Framework class library is divided into two parts: the

Base Class Library and the Framework Class Library.
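As a small example of the file-handling functionality listed above, the System.IO classes wrap common file reading and writing in single calls (the file name below is chosen arbitrarily for illustration):

```csharp
using System;
using System.IO;

// BCL file I/O sketch: File.WriteAllText and File.ReadAllText wrap a
// complete write and read of a text file in one call each.
class BclDemo
{
    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "bcl-demo.txt");
        File.WriteAllText(path, "multiple packet transmission");
        Console.WriteLine(File.ReadAllText(path)); // prints the line back
        File.Delete(path);                         // clean up the temp file
    }
}
```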

Versions


Microsoft started development on the .NET Framework in the late 1990s

originally under the name of Next Generation Windows Services (NGWS). By late 2000

the first beta versions of .NET 1.0 were released.

Fig. 2. The .NET Framework stack

Version Version Number Release Date

1.0 1.0.3705.0 2002-01-05

1.1 1.1.4322.573 2003-04-01

2.0 2.0.50727.42 2005-11-07

3.0 3.0.4506.30 2006-11-06

3.5 3.5.21022.8 2007-11-09

4.0 4.0.30319.1 2010-04-12

SERVER APPLICATION DEVELOPMENT


Server-side applications in the managed world are implemented

through runtime hosts. Unmanaged applications host the common language runtime,

which allows your custom managed code to control the behavior of the server.

This model provides you with all the features of the common language

runtime and class library while gaining the performance and scalability of the host server.

The following illustration shows a basic network schema with managed

code running in different server environments. Servers such as IIS and SQL Server can

perform standard operations while your application logic executes through the managed

code.

Fig. 3. Server-side managed code

ASP.NET is the hosting environment that enables developers to use the .NET Framework

to target Web-based applications. However, ASP.NET is more than just a runtime host; it

is a complete architecture for developing Web sites and Internet-distributed objects using

managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the

publishing mechanism for applications, and both have a collection of supporting classes

in the .NET Framework.

5.2 C#.NET

The Relationship of C# to .NET

C# is a new programming language, and is significant in two respects:

It is specifically designed and targeted for use with

Microsoft's .NET Framework (a feature rich platform for the development,

deployment, and execution of distributed applications).

It is a language based upon the modern object-oriented design

methodology, and when designing it Microsoft has been able to learn from the


experience of all the other similar languages that have been around over the 20 years or so since object-oriented principles came to prominence.

One important thing to make clear is that C# is a language in its own right. Although it is designed to generate code that targets the .NET environment, it is not itself part of .NET. There are some features that are supported by .NET but not by C#, and you might be surprised to learn that there are also features of the C# language that are not supported by .NET, such as some instances of operator overloading.

However, since the C# language is intended for use with .NET, it is important for us to have an understanding of this Framework if we wish to develop applications in C# effectively; hence this chapter begins with an overview of it.

The Common Language Runtime :

Central to the .NET framework is its run-time execution environment,

known as the Common Language Runtime (CLR) or the .NET runtime. Code running

under the control of the CLR is often termed managed code.

However, before it can be executed by the CLR, any source code that we develop

(in C# or some other language) needs to be compiled. Compilation occurs in two steps

in .NET:

1. Compilation of source code to Microsoft Intermediate Language

(MS-IL)

2. Compilation of IL to platform-specific code by the CLR

At first sight this might seem a rather long-winded compilation process. Actually,

this two-stage compilation process is very important, because the existence of the

Microsoft Intermediate Language (managed code) is the key to providing many of the

benefits of .NET. Let's see why.

Advantages of Managed Code


Microsoft Intermediate Language (often shortened to "Intermediate

Language", or "IL") shares with Java byte code the idea that it is a low-level language

with a simple syntax (based on numeric codes rather than text), which can be very quickly

translated into native machine code. Having this well-defined, universal syntax for code has significant advantages.

Platform Independence

First, it means that the same file containing byte code instructions can be

placed on any platform; at runtime the final stage of compilation can then be easily

accomplished so that the code will run on that particular platform. In other words, by

compiling to Intermediate Language we obtain platform independence for .NET, in much

the same way as compiling to Java byte code gives Java platform independence.

You should note that the platform independence of .NET is only theoretical at

present because, at the time of writing, .NET is only available for Windows. However,

porting .NET to other platforms is being explored (see for example the Mono project, an

effort to create an open source implementation of .NET, at http://www.go-mono.com/).

Performance Improvement

Although we previously made comparisons with Java, IL is actually a bit

more ambitious than Java byte code. Significantly, IL is always Just-In-Time compiled,

whereas Java byte code was often interpreted. One of the disadvantages of Java was that,

on execution, the process of translating from Java byte code to native executable resulted

in a loss of performance (except in more recent cases, where Java is JIT-compiled on certain platforms).

Instead of compiling the entire application in one go (which could lead to a slow

start-up time), the JIT compiler simply compiles each portion of code as it is called (just-

in-time). When code has been compiled once, the resultant native executable is stored

until the application exits, so that it does not need to be recompiled the next time that

portion of code is run. Microsoft argues that this process is more efficient than compiling

the entire application code at the start, because of the likelihood those large portions of

57

Page 58: Multiple Packet Transmission

any application code will not actually be executed in any given run. Using the JIT

compiler, such code will never get compiled.

This explains why we can expect that execution of managed IL code will be

almost as fast as executing native machine code. What it doesn't explain is why Microsoft

expects that we will get a performance improvement. The reason given for this is that,

since the final stage of compilation takes place at run time, the JIT compiler will know

exactly what processor type the program will run on. This means that it can optimize the

final executable code to take advantage of any features or particular machine code

instructions offered by that particular processor.

Language Interoperability

We have seen how the use of IL enables platform independence, and how JIT compilation should improve performance. However, IL also facilitates language interoperability. Simply put, you can compile to IL from one language, and this

compiled code should then be interoperable with code that has been compiled to IL from

another language.

Intermediate Language

From what we learned in the previous section, Intermediate Language

obviously plays a fundamental role in the .NET Framework. As C# developers, we now

understand that our C# code will be compiled into Intermediate Language before it is

executed (indeed, the C# compiler only compiles to managed code). It makes sense, then,

that we should now take a closer look at the main characteristics of IL, since any

language that targets .NET would logically need to support the main characteristics of IL

too.

Support of Object Orientation and Interfaces

The language independence of .NET does have some practical limits. In

particular, IL, however it is designed, is inevitably going to implement some particular

programming methodology, which means that languages targeting it are going to have to

be compatible with that methodology. The particular route that Microsoft has chosen to

follow for IL is that of classic object-oriented programming, with single implementation

inheritance of classes.

Besides classic object-oriented programming, Intermediate Language also brings

in the idea of interfaces, which saw their first implementation under Windows with COM.

.NET interfaces are not the same as COM interfaces; they do not need to support any of the COM infrastructure (for example, they are not derived from IUnknown, and they do not have associated GUIDs). However, they do share with

COM interfaces the idea that they provide a contract, and classes that implement a

given interface must provide implementations of the methods and properties specified by

that interface.
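As a minimal sketch of this contract idea (the names ILogger and ConsoleLogger are hypothetical illustrations, not part of this project's code), a class implementing an interface must supply every member the interface declares:

```csharp
using System;

// Hypothetical interface: a contract listing the members an implementer must supply.
public interface ILogger
{
    void Log(string message);
}

// Any class implementing ILogger must provide Log, or it will not compile.
public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine("LOG: " + message);
}

public class Program
{
    public static void Main()
    {
        ILogger logger = new ConsoleLogger(); // callers depend only on the contract
        logger.Log("hello");                  // prints "LOG: hello"
    }
}
```

Callers that hold an `ILogger` reference need not know which class provides the implementation.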

Object Orientation and Language Interoperability

Working with .NET means compiling to the Intermediate Language, and

that in turn means that you will need to be programming using traditional object-oriented

methodologies. That alone is not, however, sufficient to give us language interoperability.

After all, C++ and Java both use the same object-oriented paradigms, but they are still not

regarded as interoperable. We need to look a little more closely at the concept of language

interoperability.

An associated problem was that, when debugging, you would still have to

independently debug components written in different languages. It was not possible to

step between languages in the debugger. So what we really mean by language

interoperability is that classes written in one language should be able to talk directly to

classes written in another language. In particular:

A class written in one language can inherit from a class written in another language.

The class can contain an instance of another class, no matter what the languages of the two classes are.

An object can directly call methods against another object written in another language.

Objects (or references to objects) can be passed around between methods.

When calling methods between languages, we can step between the method calls in the debugger, even where this means stepping between source code written in different languages.

This is all quite an ambitious aim, but amazingly, .NET and the Intermediate

Language have achieved it. For the case of stepping between methods in the debugger,

this facility is really offered by the Visual Studio .NET IDE rather than from the CLR

itself.

Common Type System (CTS)

Language interoperability raises an obvious question of data types, since different languages have traditionally defined their own. This data type problem is solved in .NET through the use of the Common Type System (CTS). The CTS defines the predefined data types that are available in IL, so that

all languages that target the .NET framework will produce compiled code that is

ultimately based on these types.

The CTS doesn't merely specify primitive data types, but a rich hierarchy of types,

which includes well-defined points in the hierarchy at which code is permitted to define

its own types. The hierarchical structure of the Common Type System reflects the single-

inheritance object-oriented methodology of IL, and looks like this:

Fig.1. Common Type System Diagram
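A short sketch of what this type unification means in practice: the C# keyword int is simply an alias for the CTS type System.Int32, and every type ultimately derives from System.Object at the root of the hierarchy (the types used here are standard .NET types, not project code):

```csharp
using System;

public class Program
{
    public static void Main()
    {
        int i = 42;

        // The C# alias int and the CTS type System.Int32 are the same type.
        Console.WriteLine(typeof(int) == typeof(Int32)); // True

        // Value types can be boxed and treated as System.Object,
        // the root of the CTS hierarchy.
        object boxed = i;
        Console.WriteLine(boxed.GetType().FullName);     // System.Int32
    }
}
```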

Common Language Specification (CLS)

The Common Language Specification works with the Common Type

System to ensure language interoperability. The CLS is a set of minimum standards that

all compilers targeting .NET must support. Since IL is a very rich language, writers of

most compilers will prefer to restrict the capabilities of a given compiler to only support a

subset of the facilities offered by IL and the CTS. That is fine, as long as the compiler

supports everything that is defined in the CLS.
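As a sketch of how this surfaces to a C# developer (the Calculator class is an invented example; the CLSCompliant attribute is the standard .NET mechanism), a compiler can be asked to warn when a public member falls outside the CLS subset:

```csharp
using System;

// Ask the compiler to check public members against the CLS rules.
[assembly: CLSCompliant(true)]

public class Calculator
{
    // int (System.Int32) is CLS-compliant, so every .NET language can call this.
    public int Add(int a, int b) => a + b;

    // public uint Sum(uint a, uint b) => a + b;
    // Uncommenting the line above would trigger warning CS3001,
    // because unsigned integers are not part of the CLS.
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(new Calculator().Add(2, 3)); // 5
    }
}
```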

Garbage Collection

The garbage collector is .NET's answer to memory management, and in

particular to the question of what to do about reclaiming memory that running

applications ask for. Up until now, there have been two techniques used on the Windows platform for deallocating memory that processes have dynamically requested from the system:

Make the application code do it all manually

Make objects maintain reference counts

The .NET runtime relies on the garbage collector instead. This is a program whose

purpose is to clean up memory. The idea is that all dynamically requested memory is

allocated on the heap (that is true for all languages, although in the case of .NET, the CLR

maintains its own managed heap for .NET applications to use). Every so often,

when .NET detects that the managed heap for a given process is becoming full and

therefore needs tidying up, it calls the garbage collector. The garbage collector runs

through variables currently in scope in your code, examining references to objects stored

on the heap to identify which ones are accessible from your code – that is to say which

objects have references that refer to them. Any objects that are not referred to are deemed

to be no longer accessible from your code and can therefore be removed. Java uses a

similar system of garbage collection to this.
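The reachability idea can be observed with a WeakReference, which does not keep its target alive; the following is an illustrative sketch only, since exact collection timing can vary by runtime and build configuration:

```csharp
using System;
using System.Runtime.CompilerServices;

public class Program
{
    // NoInlining keeps the strong reference confined to this method's frame.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static WeakReference Allocate() => new WeakReference(new byte[1024]);

    public static void Main()
    {
        WeakReference weak = Allocate(); // no strong reference survives the call
        GC.Collect();                    // force a collection (for demonstration only)
        GC.WaitForPendingFinalizers();
        GC.Collect();
        // Usually False here: the array became unreachable, so it was reclaimed.
        Console.WriteLine(weak.IsAlive);
    }
}
```

In normal applications you would never call GC.Collect() yourself; the runtime decides when to collect.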

Security

.NET can really excel in terms of complementing the security mechanisms

provided by Windows because it can offer code-based security, whereas Windows only

really offers role-based security.

Role-based security is based on the identity of the account under which the process is

running, in other words, who owns and is running the process. Code-based security on the

other hand is based on what the code actually does and on how much the code is trusted.

Thanks to the strong type safety of IL, the CLR is able to inspect code before running it in

order to determine required security permissions. .NET also offers a mechanism by which

code can indicate in advance what security permissions it will require to run.

.NET Framework Classes

The .NET base classes are a massive collection of managed code classes

that have been written by Microsoft, and which allow you to do almost any of the tasks

that were previously available through the Windows API. These classes follow the same

object model as used by IL, based on single inheritance. This means that you can either

instantiate objects of whichever .NET base class is appropriate, or you can derive your

own classes from them.

The great thing about the .NET base classes is that they have been designed to be

very intuitive and easy to use. For example, to start a thread, you call the Start() method of the Thread class. To disable a TextBox, you set the Enabled property of a TextBox object to false. This approach will be familiar to Visual Basic and Java developers, whose respective libraries are just as easy to use. It may however come as a great relief to C++ developers, who for years have had to cope with such API functions as GetDIBits(), RegisterWndClassEx(), and IsEqualIID(), as well as a whole plethora of functions that required Windows handles to be passed around.
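The Thread.Start() call mentioned above really is this direct; a minimal sketch (the worker delegate is a made-up example):

```csharp
using System;
using System.Threading;

public class Program
{
    public static void Main()
    {
        // Create a thread from a delegate and start it with a single method call.
        var worker = new Thread(() => Console.WriteLine("worker running"));
        worker.Start(); // the Start() method from the text
        worker.Join();  // wait for the worker to finish before continuing
        Console.WriteLine("main done");
    }
}
```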

Namespaces

Namespaces are the way that .NET avoids name clashes between classes.

They are designed, for example, to avoid the situation in which you define a class to

represent a customer, name your class Customer, and then someone else does the same thing (quite a likely scenario, since the proportion of businesses that have customers seems to be quite high).

A namespace is no more than a grouping of data types, but it has the effect that the

names of all data types within a namespace automatically get prefixed with the name of

the namespace. It is also possible to nest namespaces within each other. For example,

most of the general-purpose .NET base classes are in a namespace called System. The

base class Array is in this namespace, so its full name is System.Array.

If a namespace is not explicitly supplied, the type will be added to a nameless global namespace.
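A minimal sketch of namespace prefixing and nesting (Acme, Billing, and Customer are invented names for illustration):

```csharp
using System;

namespace Acme.Billing
{
    // The full name of this type is Acme.Billing.Customer,
    // so it cannot clash with anyone else's Customer class.
    public class Customer
    {
        public string Name = "Example Ltd";
    }
}

public class Program
{
    public static void Main()
    {
        var c = new Acme.Billing.Customer();
        Console.WriteLine(typeof(Acme.Billing.Customer).FullName); // Acme.Billing.Customer
        Console.WriteLine(c.Name);
    }
}
```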

The Role of C# in the .NET Enterprise Architecture

C# requires the presence of the .NET runtime, and it will probably be a

few years before most clients – particularly most home machines – have .NET installed.

In the meantime, installing a C# application is likely to mean also installing the .NET

redistributable components. Because of that, it is likely that the first place we will see

many C# applications is in the enterprise environment. Indeed, C# arguably presents an

outstanding opportunity for organizations that are interested in building robust, n-tiered

client-server applications.

Fig.2. The Role of C# in the .NET Enterprise Architecture

CHAPTER 6

Testing

6. SYSTEM TESTING AND IMPLEMENTATION

Software testing is a critical element of software quality assurance and represents

the ultimate review of specification, design and coding. In fact, testing is the one step in

the software engineering process that could be viewed as destructive rather than

constructive.

A strategy for software testing integrates software test case design methods into a

well-planned series of steps that result in the successful construction of software. Testing

is the set of activities that can be planned in advance and conducted systematically. The

underlying motivation of program testing is to affirm software quality with methods that can be economically and effectively applied to both large and small-scale systems.

6.1. STRATEGIC APPROACH TO SOFTWARE TESTING

The software engineering process can be viewed as a spiral. Initially system

engineering defines the role of software and leads to software requirement analysis where

the information domain, functions, behavior, performance, constraints and validation

criteria for software are established. Moving inward along the spiral, we come to design

and finally to coding. To develop computer software we spiral in along streamlines that

decrease the level of abstraction on each turn.

A strategy for software testing may also be viewed in the context of the spiral.

Unit testing begins at the vertex of the spiral and concentrates on each unit of the software

as implemented in source code. Testing progress by moving outward along the spiral to

integration testing, where the focus is on the design and the construction of the software

architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are

validated against the software that has been constructed. Finally we arrive at system

testing, where the software and other system elements are tested as a whole.

Fig.1. Unit Testing Diagram

6.2. UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software design: the module. Our unit testing is white-box oriented, and for some modules the steps were conducted in parallel.

1. WHITE BOX TESTING

This type of testing ensures that

All independent paths have been exercised at least once

All logical decisions have been exercised on their true and false sides

All loops are executed at their boundaries and within their operational bounds

All internal data structures have been exercised to assure their validity.

[Figure: levels of testing, from unit testing through module testing, sub-system testing, and system testing to acceptance testing; these correspond to component testing, integration testing, and user testing.]

To follow the concept of white-box testing, we tested each form we created independently, to verify that data flow is correct, all conditions are exercised to check their validity, and all loops are executed at their boundaries.

2. BASIC PATH TESTING

The established technique of a flow graph with cyclomatic complexity was used to derive test cases for all the functions. The main steps in deriving test cases were:

Use the design of the code to draw the corresponding flow graph.

Determine the cyclomatic complexity of the resultant flow graph, using one of the formulas:

V(G) = E - N + 2, or

V(G) = P + 1, or

V(G) = number of regions,

where V(G) is the cyclomatic complexity, E is the number of edges, N is the number of flow graph nodes, and P is the number of predicate nodes.

Determine the basis set of linearly independent paths.
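A worked example of the formulas above, for a hypothetical flow graph with 7 edges, 6 nodes, and 2 predicate nodes (for instance, an if nested inside a while); both formulas agree:

```csharp
using System;

public class Program
{
    // V(G) = E - N + 2, from the text above.
    public static int CyclomaticComplexity(int edges, int nodes) => edges - nodes + 2;

    public static void Main()
    {
        int e = 7, n = 6, p = 2; // hypothetical flow graph
        Console.WriteLine(CyclomaticComplexity(e, n)); // 3
        Console.WriteLine(p + 1);                      // V(G) = P + 1 also gives 3
    }
}
```

A basis set for this graph would therefore contain 3 linearly independent paths.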

3. CONDITIONAL TESTING

In this part of the testing, each condition was tested in both its true and false aspects, and all the resulting paths were tested, so that each path that may be generated by a particular condition was traced to uncover any possible errors.

4. DATA FLOW TESTING

This type of testing selects paths of the program according to the locations of definitions and uses of variables. This kind of testing was used only where local variables were declared. The definition-use chain method was used in this type of testing, and was particularly useful in nested statements.

5. LOOP TESTING

In this type of testing, all the loops were tested at all possible limits. The following exercise was adopted for all loops:

All the loops were tested at their limits, just above them and just below them.

All the loops were skipped at least once.

For nested loops, the innermost loop was tested first, working outwards.

For concatenated loops, the values of dependent loops were set with the help of the connected loop.

Unstructured loops were resolved into nested loops or concatenated loops and tested

as above.

6.3. SYSTEM SECURITY

6.3.1. INTRODUCTION

The protection of computer-based resources (hardware, software, data, procedures, and people) against unauthorized use or natural disaster is known as System Security.

System Security can be divided into four related issues:

Security

Integrity

Privacy

Confidentiality

SYSTEM SECURITY refers to the technical innovations and procedures applied to the

hardware and operation systems to protect against deliberate or accidental damage from a

defined threat.

DATA SECURITY is the protection of data from loss, disclosure, modification and

destruction.

SYSTEM INTEGRITY refers to the proper functioning of hardware and programs,

appropriate physical security and safety against external threats such as eavesdropping

and wiretapping.

PRIVACY defines the rights of the user or organizations to determine what information

they are willing to share with or accept from others and how the organization can be

protected against unwelcome, unfair or excessive dissemination of information about it.

CONFIDENTIALITY is a special status given to sensitive information in a database to

minimize the possible invasion of privacy. It is an attribute of information that

characterizes its need for protection.

6.3.2. SECURITY SOFTWARE

Steganography is a technique used for covert communication. It transfers a message secretly by embedding it into a cover medium with the use of information-hiding techniques. It is one of the conventional techniques capable of hiding a large secret message in a cover image without introducing many perceptible distortions.

.NET has two kinds of security:

Role Based Security

Code Access Security

The Common Language Runtime (CLR) allows code to perform only those

operations that the code has permission to perform. So CAS is the CLR's security system

that enforces security policies by preventing unauthorized access to protected resources

and operations. Using the Code Access Security, you can do the following:

Restrict what your code can do

Restrict which code can call your code

Identify code

6.4. TEST CASES

ENHANCING DOWNLINK PERFORMANCE IN WIRELESS NETWORKS BY SIMULTANEOUS MULTIPLE PACKET TRANSMISSION

Screen Name:

Module Names: authentication, encryption, decryption

| Test Case Number | Designer Name | Tester Name | Particulars | Expected Output | Actual Output | Result | Remarks |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1_server_page | madhukar | madhukar | Click on "click here to scan" | Must display system IP address | Displayed computer name & computer IP | Pass | |
| 2_server_page | madhukar | madhukar | Click on clear button | Must clear the selected systems | Cleared the selected systems | Pass | |
| 1_client_page | madhukar | madhukar | Click on file path selection | Must display the window to select the region | Displayed the window to select the region | Pass | |
| 2_client_page | madhukar | madhukar | Click on start server button | Must connect to the server | Connected to the server | Pass | |
| 1_simul_filetrnsfr_page | madhukar | madhukar | Click on simultaneous file transfer button | Must display system IP address | Displayed computer name & computer IP | Pass | |
| 2_simul_filetrnsfr_page | madhukar | madhukar | Click on start server button | Must connect to the server | Connected to the server | Pass | |

CHAPTER 7

Screenshots

7.1. SCREEN1

7.2. SCREEN2

7.3. SCREEN3

7.4. SCREEN4

7.5. SCREEN5

CHAPTER 8

CONCLUSION

&

FUTURE WORK

CONCLUSION & FUTURE WORK

In this paper, we have considered using MPT to improve the downlink performance of wireless LANs. With MPT, the AP can send two compatible packets

simultaneously to two distinct users. We have formalized the problem of finding a

minimum time schedule as a matching problem and have given a practical linear time

algorithm that finds a matching of at least 3/4 the size of a maximum matching. We

studied the performance of wireless LAN after it was enhanced with MPT. We gave

analytical bounds for maximum allowable arrival rate, which measures the speedup of the

downlink, and our results show that the maximum arrival rate increases significantly even

with a very small compatibility probability. We also used an approximate analytical

model and simulations to study the average packet delay, and our results show that packet

delay can be greatly reduced even with a very small compatibility probability.

FUTURE IMPROVEMENT

Steganography is used to conceal information, hiding it without introducing any perceptible distortions. In the future, this process can be extended beyond images to video and audio formats.

Bibliography

[1] D. Tse and P. Viswanath, Fundamentals of Wireless Communication.

Cambridge Univ. Press, May 2005.

[2] D.B. West, Introduction to Graph Theory. Prentice-Hall, 1996.

[3] T. Lang, V. Naware, and P. Venkitasubramaniam, “Signal Processing in

Random Access,” IEEE Signal Processing Magazine, vol. 21, no. 5, pp. 29-39, Sept.

2004.

[4] G. Dimic, N.D. Sidiropoulos, and R. Zhang, “Medium Access Control—

Physical Cross-Layer Design,” IEEE Signal Processing Magazine, vol. 21, no. 5, pp. 40-

50, Sept. 2004.

[5] Q. Liu, S. Zhou, and G.B. Giannakis, “Cross-Layer Scheduling with

Prescribed QoS Guarantees in Adaptive Wireless Networks,” IEEE J. Selected Areas in

Comm., vol. 23, no. 5, pp. 1056-1066, May 2005.

[6] V. Kawadia and P.R. Kumar, “Principles and Protocols for Power Control in

Wireless Ad Hoc Networks,” IEEE J. Selected Areas in Comm., vol. 23, no. 1, pp. 76-88,

Jan. 2005.

[7] A. Czygrinow, M. Hanckowiak, and E. Szymanska, “A Fast Distributed

Algorithm for Approximating the Maximum Matching,” Proc. 12th Ann. European Symp.

Algorithms (ESA ’04), pp. 252-263, 2004.

[8] http://grouper.ieee.org/groups/802/11/, 2008.

WEBSITES

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4432627

http://portal.acm.org/citation.cfm?id=1460678

http://aispa.in/image.php

http://www.scitopia.org/topicpages/m/multi-media+mining.html

http://crest.fiu.edu/Publications.php

http://hyderabad.olx.in/users/nkrieee

http://www.visionbib.com/bibliography/motion-f727.html

http://www.docstoc.com/docs/74661735/IEEE-Java-Final-Year-Projects

http://pravalika.com/btech-mca-java-academic-major-projects/

http://www.72formula.com/e-project.html
