Measuring Capacity Bandwidth of Targeted Path Segments

Abstract

Accurate measurement of network bandwidth is important for network management applications as well as flexible Internet applications and protocols which actively manage and dynamically adapt to changing utilization of network resources. Extensive work has focused on two approaches to measuring bandwidth: measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope. In this paper, we develop end-to-end probing methods which can measure bottleneck capacity bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths shared by a set of flows. We evaluate our technique through ns simulations, then provide a comparative Internet performance evaluation against hop-by-hop and end-to-end techniques. We also describe a number of applications which we foresee as standing to benefit from solutions to this problem, ranging from network troubleshooting and capacity provisioning to optimizing the layout of application-level overlay networks, to optimized replica placement.


INTRODUCTION

MEASUREMENT of network bandwidth is important for many Internet applications and protocols, especially those involving the transfer of large files and those involving the delivery of content with real-time QoS constraints, such as streaming media. Some specific examples of applications which can leverage accurate bandwidth estimation include end-system multicast and overlay network configuration protocols, content location and delivery in peer-to-peer (P2P) networks, network-aware cache or replica placement policies, and flow scheduling and admission control policies at massively-accessed content servers. In addition, accurate measurements of network bandwidth are useful to network operators concerned with problems such as capacity provisioning, traffic engineering, network troubleshooting and verification of service level agreements (SLAs).

We propose an efficient end-to-end measurement technique that yields the capacity bandwidth of an arbitrary subpath of a route between a set of end-points. By subpath, we mean a sequence of consecutive network links between any two identifiable nodes on that path. A node on a path between a source and a destination is identifiable if it is possible to coerce a packet injected at the source to exit the path at that node. One can achieve this by: 1) targeting the packet to the node (if its IP address is known), or 2) forcing the packet to stop at the node through the use of the TTL field (if the hop count from the source to the node is known), or 3) targeting the packet to an alternate destination such that the paths from the source to the two destinations are known to diverge at the node. Our methods are much less resource-intensive than existing hop-by-hop methods for estimating bandwidth along a path and much more general than end-to-end methods for measuring capacity bandwidth.

In particular, our method provides the following advantages over existing techniques: 1) it can estimate bandwidth on links not visible at end-to-end scope, and 2) it can measure the bandwidth of fast links following slow links as long as the ratio between the link speeds does not exceed the ratio between the largest and the smallest possible packet sizes that could be transmitted over these links.
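The TTL-based coercion in option 2 can be illustrated in Java, the project's implementation language. This is only a sketch of the idea, not the paper's probing structure: `InetAddress.isReachable` sends a hop-limited probe, and hop counts and replies depend entirely on the local network.

```java
import java.io.IOException;
import java.net.InetAddress;

// Illustrative sketch of TTL-limited probing: a reply to a probe sent with
// TTL h suggests the target is no more than h hops away, which is the same
// mechanism used above to force a packet to stop at a known hop count.
public class TtlProbe {
    // Probe a destination with increasing TTL values; return the smallest
    // TTL at which a reply was seen, or -1 if none within maxTtl hops.
    public static int minReachableTtl(InetAddress dst, int maxTtl, int timeoutMs)
            throws IOException {
        for (int ttl = 1; ttl <= maxTtl; ttl++) {
            if (dst.isReachable(null, ttl, timeoutMs)) {
                return ttl;
            }
        }
        return -1; // no reply within maxTtl hops
    }

    public static void main(String[] args) throws IOException {
        InetAddress loopback = InetAddress.getLoopbackAddress();
        int hops = minReachableTtl(loopback, 4, 500);
        System.out.println("loopback reachable at TTL: " + hops);
    }
}
```

Whether a reply arrives at all depends on ICMP permissions and firewalls, so the sketch only demonstrates the coercion mechanism, not a reliable measurement.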


Existing System:

Our methods are much less resource-intensive than existing hop-by-hop methods for estimating bandwidth along a path and much more general than end-to-end methods for measuring capacity bandwidth.

We present results of Internet experiments conducted to validate our capacity bandwidth measurement techniques and to compare their performance and efficiency against existing hop-by-hop techniques.

Our method provides the following advantages over existing techniques:

1) it can estimate bandwidth on links not visible at end-to-end scope, and

2) it can measure the bandwidth of fast links following slow links as long as the ratio between the link speeds does not exceed the ratio between the largest and the smallest possible packet sizes that could be transmitted over these links.
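As a quick illustration of condition 2, the check below compares the two ratios for hypothetical link speeds and probe packet sizes (the 40-byte minimum and 1500-byte maximum packets are assumptions for the example, not values from the paper):

```java
// Minimal check of the measurability condition in advantage 2: a fast link
// following a slow link is measurable only if the ratio of link speeds does
// not exceed the ratio of the largest to the smallest probe packet size.
public class MeasurabilityCheck {
    public static boolean measurable(double slowLinkBps, double fastLinkBps,
                                     int minPacketBytes, int maxPacketBytes) {
        double speedRatio = fastLinkBps / slowLinkBps;
        double sizeRatio = (double) maxPacketBytes / minPacketBytes;
        return speedRatio <= sizeRatio;
    }

    public static void main(String[] args) {
        // 10 Mb/s slow link followed by a 100 Mb/s fast link, probes of
        // 40 to 1500 bytes: speed ratio 10 <= size ratio 37.5, measurable.
        System.out.println(measurable(10e6, 100e6, 40, 1500));
        // 10 Mb/s followed by 1 Gb/s: speed ratio 100 > 37.5, not measurable.
        System.out.println(measurable(10e6, 1e9, 40, 1500));
    }
}
```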


Proposed System:

While collecting measurements only at the endpoints, our proposed measurement techniques are able to provide bandwidth estimates along arbitrary segments of a path, which is inherently different from other techniques.

The probing techniques we will propose can be classified as packet-bunch probes with non-uniform packet sizes.

We propose an efficient end-to-end measurement technique that yields the capacity bandwidth of an arbitrary subpath of a route between a set of end-points.


Our proposed techniques do not fall into the packet-pair techniques category and the impact of layer-2 headers on our techniques can be contained by appropriate sizing of our probing structures.


System Requirements:

Hardware Requirements:

• System : Pentium IV 2.4 GHz.

• Hard Disk : 40 GB.

• Floppy Drive : 1.44 MB.

• Monitor : 15" VGA Colour.

• Mouse : Logitech.

• RAM : 256 MB.

Software Requirements:

• Operating system : - Windows XP Professional.

• Coding Language : - Java.

• Tool Used : - Eclipse.


SDLC METHODOLOGIES

This document plays a vital role in the software development life cycle (SDLC) as it describes the complete requirements of the system. It is meant for use by developers and will be the basis during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.

The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, "A Spiral Model of Software Development and Enhancement." This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The steps of the spiral model can be generalized as follows:

The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.

A preliminary design is created for the new system.

A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product.

A second prototype is evolved by a fourfold procedure:


1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.

The existing prototype is evaluated in the same manner as the previous prototype, and if necessary, another prototype is developed from it according to the fourfold procedure outlined above.

The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.

The final system is constructed, based on the refined prototype.

The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

The following diagram shows how the spiral model works:

Fig 1.0-Spiral Model

ADVANTAGES:

Estimates (i.e. budget, schedule, etc.) become more realistic as work progresses, because important issues are discovered earlier.

It is better able to cope with the changes that software development generally entails.

Software engineers can get their hands in and start working on the core of a project earlier.

APPLICATION DEVELOPMENT

N-TIER APPLICATIONS

N-Tier Applications can easily implement the concepts of Distributed Application Design and Architecture. The N-Tier Applications provide strategic benefits to Enterprise Solutions. While 2-tier, client-server applications can help us create quick and easy solutions and may be used for Rapid Prototyping, they can easily become a maintenance and security nightmare.

The N-tier Applications provide specific advantages that are vital to the business continuity of the enterprise. Typical features of a real-life n-tier application may include the following:

Security

Availability and Scalability

Manageability

Easy Maintenance

Data Abstraction

The above-mentioned points are some of the key design goals of a successful n-tier application that intends to provide a good Business Solution.

DEFINITION

Simply stated, an n-tier application helps us distribute the overall functionality into various tiers or layers:

Presentation Layer

Business Rules Layer

Data Access Layer

Database/Data Store

Each layer can be developed independently of the others provided that it adheres to the standards and communicates with the other layers as per the specifications. This is one of the biggest advantages of the n-tier application. Each layer can potentially treat the other layers as a 'black box'. In other words, each layer does not care how another layer processes the data as long as it sends the right data in a correct format.


Fig 1.1-N-Tier Architecture

1. THE PRESENTATION LAYER

Also called the client layer, this comprises components that are dedicated to presenting the data to the user. For example: Windows/Web Forms and buttons, edit boxes, text boxes, labels, grids, etc.

2. THE BUSINESS RULES LAYER

This layer encapsulates the business rules or the business logic of the application. Having a separate layer for business logic is a great advantage, because any changes in business rules can be easily handled in this layer. As long as the interface between the layers remains the same, any changes to the functionality/processing logic in this layer can be made without impacting the others. A lot of client-server applications failed to implement successfully because changing the business logic was a painful process.

3. THE DATA ACCESS LAYER

This layer comprises components that help in accessing the database. If used in the right way, this layer provides a level of abstraction for the database structures. Simply put, changes made to the database, tables, etc. do not affect the rest of the application because of the Data Access layer. The different application layers send data requests to this layer and receive the responses from this layer.

4. THE DATABASE LAYER

This layer comprises the database components such as DB files, tables, views, etc. The actual database could be created using SQL Server, Oracle, flat files, etc.

In an n-tier application, the entire application can be implemented in such a way that it is independent of the actual database. For instance, you could change the database location with minimal changes to the Data Access Layer. The rest of the application should remain unaffected.


FEASIBILITY STUDY

TECHNICAL FEASIBILITY :

Evaluating the technical feasibility is the trickiest part of a feasibility study. This is because, at this point in time, there is not much detailed design of the system, making it difficult to assess issues like performance, costs (on account of the kind of technology to be deployed), etc. A number of issues have to be considered while doing a technical analysis:

i) Understand the different technologies involved in the proposed system:

Before commencing the project, we have to be very clear about the technologies that are required for the development of the new system.

ii) Find out whether the organization currently possesses the required technologies:

o Is the required technology available with the organization?

o If so, is the capacity sufficient?

For instance: "Will the current printer be able to handle the new reports and forms required for the new system?"


ECONOMIC FEASIBILITY :

Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This feasibility study gives the top management the economic justification for the new system.

A simple economic analysis that gives the actual comparison of costs and benefits is much more meaningful in this case. In addition, it proves to be a useful point of reference to compare actual costs as the project progresses. There could be various types of intangible benefits on account of automation. These could include increased customer satisfaction, improvement in product quality, better decision making, timeliness of information, expediting activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale.

OPERATIONAL FEASIBILITY :

Proposed projects are beneficial only if they can be turned into information systems that will meet the organization's operating requirements. Simply stated, this test of feasibility asks if the system will work when it is developed and installed. Are there major barriers to implementation? Here are questions that will help test the operational feasibility of a project:


Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people will not be able to see reasons for change, there may be resistance.

Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system.

Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system in general and increases the likelihood of a successful project.

Since the proposed system was to help reduce the hardships encountered in the existing manual system, the new system was considered to be operationally feasible.

Modules:

Bandwidth Measurement

Catalyst Applications

Active technique

Passive technique

Modules Description:

Bandwidth Measurement:

Two different measures used in end-to-end network bandwidth estimation are capacity bandwidth, or the maximum transmission rate that could be achieved between two hosts at the endpoints of a given path in the absence of any competing traffic, and available bandwidth, the portion of the capacity bandwidth along a path that could be acquired by a given flow at a given instant in time. Both of these measures are important, and each captures different relevant properties of the network. Capacity bandwidth is a static baseline measure that applies over long time-scales (up to the time-scale at which network paths change), and is independent of the particular traffic dynamics at a time instant. Available bandwidth provides a dynamic measure of the load on a path, or more precisely, the residual capacity of a path. Additional application-specific information must then be applied before making meaningful use of either measure. While measures of available bandwidth are certainly more useful for control or optimization of processes operating at short time scales, processes operating at longer time scales (e.g., server selection or admission control) will find estimates of both measures to be helpful. On the other hand, many network management applications (e.g., capacity provisioning) are concerned primarily with capacity bandwidth.

Catalyst Applications:

Two scenarios exemplify the type of applications that can be leveraged by the identification of shared capacity bandwidth (or more generally, the capacity bandwidth of an arbitrary, targeted subpath). In the first scenario, a client must select two out of three sources to use to download data in parallel. This scenario may arise when downloading content in parallel from a subset of mirror sites or multicast sources, or from a subset of peer nodes in P2P environments. In the second scenario, an overlay network must be set up between a single source and two destinations. This scenario may arise in ad-hoc networks and end-system multicast systems.

Active technique: These techniques, comprising most of the work in the literature, send probes for the sole purpose of bandwidth measurement.

Passive technique: These techniques rely on data packets for probing; for example, a packet-pair technique at the transport level can passively estimate capacity link bandwidth.
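As a sketch of the packet-pair idea that such passive techniques build on (the numbers below are hypothetical examples, and this is not the probing structure proposed in this paper): two back-to-back packets of size L leave the bottleneck link spaced by the time that link needs to serialize one packet, so the receiver can estimate capacity as C = L / dispersion.

```java
// Sketch of the classic packet-pair capacity estimate used by passive
// techniques: the dispersion (inter-arrival gap) of two back-to-back
// packets at the receiver reveals the bottleneck's serialization time.
public class PacketPair {
    // packetBytes: probe packet size; dispersionSec: receiver-side gap
    // between the pair. Returns the capacity estimate in bits per second.
    public static double capacityBps(int packetBytes, double dispersionSec) {
        return (packetBytes * 8.0) / dispersionSec;
    }

    public static void main(String[] args) {
        // 1500-byte packets arriving 1.2 ms apart imply a 10 Mb/s bottleneck.
        double c = capacityBps(1500, 0.0012);
        System.out.printf("estimated capacity: %.0f bit/s%n", c);
    }
}
```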


Software Environment

Java Technology

Java technology is both a programming language and a platform.

The Java Programming Language

The Java programming language is a high-level language that can be characterized by all of the following buzzwords:

Simple

Architecture neutral

Object oriented

Portable

Distributed

High performance

Interpreted

Multithreaded

Robust

Dynamic

Secure

With most programming languages, you either compile or interpret a program so that you can run it on your computer. The Java programming language is unusual in that a program is both compiled and interpreted. With the compiler, first you translate a program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java byte codes help make "write once, run anywhere" possible. You can compile your program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or on an iMac.


The Java Platform

A platform is the hardware or software environment in which a program runs. We've already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it's a software-only platform that runs on top of other hardware-based platforms.

The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You've already been introduced to the Java VM. It's the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, What Can Java Technology Do?, highlights what functionality some of the packages in the Java API provide.

The following figure depicts a program that's running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.

Native code is code that, after you compile it, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring performance close to that of native code without threatening portability.

What Can Java Technology Do?

The most common types of programs written in the Java programming language are applets and applications. If you've surfed the Web, you're probably already familiar with applets. An applet is a program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets for the Web. The general-purpose, high-level Java programming language is also a powerful software platform. Using the generous API, you can write many types of programs.


An application is a standalone program that runs directly on the Java platform. A special kind of application known as a server serves and supports clients on a network. Examples of servers are Web servers, proxy servers, mail servers, and print servers. Another specialized program is a servlet. A servlet can almost be thought of as an applet that runs on the server side. Java Servlets are a popular choice for building interactive web applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are runtime extensions of applications. Instead of working in browsers, though, servlets run within Java Web servers, configuring or tailoring the server.

How does the API support all these kinds of programs? It does so with packages of software components that provide a wide range of functionality. Every full implementation of the Java platform gives you the following features:

The essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.

Applets: The set of conventions used by applets.

Networking: URLs, TCP (Transmission Control Protocol), UDP (User Datagram Protocol) sockets, and IP (Internet Protocol) addresses.

Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.

Security: Both low level and high level, including electronic signatures, public and private key management, access control, and certificates.

Software components: Known as JavaBeans™, these can plug into existing component architectures.

Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).

Java Database Connectivity (JDBC™): Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration, telephony, speech, animation, and more. The following figure depicts what is included in the Java 2 SDK.

How Will Java Technology Change My Life?

We can't promise you fame, fortune, or even a job if you learn the Java programming language. Still, it is likely to make your programs better and require less effort than other languages. We believe that Java technology will help you do the following:


Get started quickly: Although the Java programming language is a powerful object-oriented language, it's easy to learn, especially for programmers already familiar with C or C++.

Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in the Java programming language can be four times smaller than the same program in C++.

Write better code: The Java programming language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Its object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people's tested code and introduce fewer bugs.

Develop programs more quickly: Your development time may be as much as twice as fast as writing the same program in C++. Why? You write fewer lines of code, and it is a simpler programming language than C++.

Avoid platform dependencies with 100% Pure Java: You can keep your program portable by avoiding the use of libraries written in other languages. The 100% Pure Java™ Product Certification Program has a repository of historical process manuals, white papers, brochures, and similar materials online.

Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.

Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the feature allowing new classes to be loaded "on the fly," without recompiling the entire program.

Finally, we decided to proceed with the implementation using Java Networking. For dynamically updating the cache table, we use an MS Access database.

Java has two things: a programming language and a platform.

Java is a high-level programming language that is all of the following:

Simple
Architecture-neutral
Object-oriented
Portable
Distributed
High-performance
Interpreted
Multithreaded
Robust
Dynamic
Secure

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes; the platform-independent code instructions are passed to and run on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The figure illustrates how this works.


You can think of Java byte codes as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.

Java byte codes help make "write once, run anywhere" possible. You can compile your Java program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.

[Figure: Java Program → Compiler → Interpreter → My Program]


Networking

TCP/IP stack

The TCP/IP stack is shorter than the OSI one:

TCP is a connection-oriented protocol; UDP (User Datagram Protocol)

is a connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the contents of the datagram and port numbers. These are used to give a client/server model - see later.

TCP

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a virtual circuit that two processes can use to communicate.

Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme for machines so that they can be located. The address is a 32 bit integer which gives the IP address. This encodes a network ID and more addressing. The network ID falls into various classes according to the size of the network address.

Network address

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16 bit network addressing. Class C uses 24 bit network addressing, and class D uses all 32.


Subnet address

Internally, the UNIX network is divided into sub networks. Building 11 is currently on one sub network and uses 10-bit addressing, allowing 1024 different hosts.

Host address

8 bits are finally used for host addresses within our subnet. This places a limit of 256 machines that can be on the subnet.

Total address

The 32 bit address is usually written as 4 integers separated by dots.
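The addressing rules above can be sketched in Java; the address in `main` is a hypothetical example, not one from this project.

```java
// Illustrates the scheme above: a 32-bit address rendered as four
// dot-separated integers, and its classful category determined from
// the value of the first octet.
public class Ipv4Address {
    // Render the 32-bit address as dotted-quad notation.
    public static String toDotted(long addr) {
        return ((addr >> 24) & 0xFF) + "." + ((addr >> 16) & 0xFF) + "."
             + ((addr >> 8) & 0xFF) + "." + (addr & 0xFF);
    }

    // Class A: first octet 0-127, B: 128-191, C: 192-223, D: 224-239.
    public static char addressClass(long addr) {
        int first = (int) ((addr >> 24) & 0xFF);
        if (first < 128) return 'A';
        if (first < 192) return 'B';
        if (first < 224) return 'C';
        return 'D';
    }

    public static void main(String[] args) {
        long addr = 0x80020304L; // hypothetical address 128.2.3.4
        System.out.println(toDotted(addr) + " is class " + addressClass(addr));
        // prints: 128.2.3.4 is class B
    }
}
```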

Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a message to a server, you send it to the port for that service of the host that it is running on. This is not location transparency! Certain of these ports are "well known".


Sockets

A socket is a data structure maintained by the system to handle network connections. A socket is created using the call socket. It returns an integer that is like a file descriptor. In fact, under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

int socket(int family, int type, int protocol);

Here "family" will be AF_INET for IP communications, protocol will be zero, and type will be SOCK_STREAM for TCP or SOCK_DGRAM for UDP. Two processes wishing to communicate over a network create a socket each. These are similar to the two ends of a pipe - but the actual pipe does not yet exist.
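The same two-ends-of-a-pipe picture can be sketched in Java, whose Socket and ServerSocket classes wrap the underlying socket call. This loopback example is illustrative, including the class name:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpPipeDemo {
    public static void main(String[] args) throws Exception {
        // Listening end: analogous to socket() + bind() + listen() in C.
        ServerSocket listener =
                new ServerSocket(0, 1, InetAddress.getLoopbackAddress());

        // Connecting end: analogous to socket() + connect().
        Socket client = new Socket(InetAddress.getLoopbackAddress(),
                listener.getLocalPort());
        Socket serverSide = listener.accept();

        // Once connected, the two sockets behave like the ends of a pipe:
        // bytes written at one end are read, in order, at the other.
        client.getOutputStream().write("ping\n".getBytes("US-ASCII"));
        client.getOutputStream().flush();
        BufferedReader in = new BufferedReader(new InputStreamReader(
                serverSide.getInputStream(), "US-ASCII"));
        System.out.println(in.readLine());

        client.close();
        serverSide.close();
        listener.close();
    }
}
```

Unlike the UDP case, the virtual circuit must be established (accept/connect) before any data can flow.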

JFree Chart

JFreeChart is a free 100% Java chart library that makes it easy for

developers to display professional quality charts in their applications.

JFreeChart's extensive feature set includes:

A consistent and well-documented API, supporting a wide range of

chart types;


A flexible design that is easy to extend, and targets both server-side

and client-side applications;

Support for many output types, including Swing components, image

files (including PNG and JPEG), and vector graphics file formats (including

PDF, EPS and SVG);

JFreeChart is "open source" or, more specifically, free software. It is

distributed under the terms of the GNU Lesser General Public Licence

(LGPL), which permits use in proprietary applications.

1. Map Visualizations

Charts showing values that relate to geographical areas. Some examples include: (a) population density in each state of the United States, (b) income per capita for each country in Europe, (c) life expectancy in each country of the world. The tasks in this project include:

Sourcing freely redistributable vector outlines for the countries of the

world, states/provinces in particular countries (USA in particular, but also

other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart;

Testing, documenting, testing some more, documenting some more.


2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts --- to

display a separate control that shows a small version of ALL the time series

data, with a sliding "view" rectangle that allows you to select the subset of the

time series data to display in the main chart.

3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible

dashboard mechanism that supports a subset of JFreeChart chart types (dials,

pies, thermometers, bars, and lines/time series) that can be delivered easily via

both Java Web Start and an applet.

4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater end-user control over the appearance of the charts.


UML DIAGRAMS:

The Unified Modeling Language allows the software engineer to express an analysis model using a modeling notation that is governed by a set of syntactic, semantic, and pragmatic rules.

A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

This view represents the system from the user's perspective. The analysis representation describes a usage scenario from the end-user's perspective.

Structural model view

In this model the data and functionality are viewed from inside the system. This model view models the static structures.

Behavioral Model View

It represents the dynamic (behavioral) aspects of the system, depicting the interactions among the various structural elements described in the user model and structural model views.


Implementation Model View

In this view the structural and behavioral parts of the system are represented as they are to be built.

Environmental Model View

In this view the structural and behavioral aspects of the environment in which the system is to be implemented are represented.

SYSTEM TESTING

INTRODUCTION:

Testing is the process of detecting errors. It plays a critical role in quality assurance and in ensuring the reliability of software. The results of testing are also used later on, during maintenance.

It is often assumed that the aim of testing is to demonstrate that a program works by showing that it has no errors. The basic purpose of the testing phase, however, is to detect the errors that may be present in the program. Hence one should not start testing with the intent of showing that a program works; the intent should be to show that a program doesn't work. Testing is the process of executing a program with the intent of finding errors.


Testing Objectives

The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally:

Testing is a process of executing a program with the intent of finding an error.

A successful test is one that uncovers an as-yet-undiscovered error.

A good test case is one that has a high probability of finding an error, if one exists.

A test run that uncovers no errors is still inadequate to show that no errors are present.

Levels of Testing

In order to uncover the errors introduced in the different phases, we have the concept of levels of testing. The basic levels of testing are shown below:

Client Needs - Acceptance Testing
Requirements - System Testing
Design - Integration Testing
Code - Unit Testing

Testing Strategies:

A strategy for software testing integrates software test case design

methods into a well-planned series of steps that result in the successful

construction of software.

Unit Testing

Unit testing focuses verification effort on the smallest unit of software, i.e. the module. Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of the module. All modules must pass unit testing before integration testing begins.

Unit Testing in this project: In this project each service can be thought of as a module. There are many modules, such as Login, New Registration, Change Password, Post Question, Modify Answer, etc. Each module was tested both while it was being developed and after development was finished, so that it works without any error. The inputs are validated when accepted from the user.
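A unit test for one such module's input validation might look like the following sketch. Both validUsername and the tiny test harness are hypothetical illustrations, not taken from the project code.

```java
public class LoginValidatorTest {
    // Hypothetical validation routine of the kind each module applies
    // to inputs before accepting them from the user.
    static boolean validUsername(String s) {
        return s != null && s.matches("[A-Za-z0-9_]{3,20}");
    }

    // Unit tests exercise the module in isolation, one case at a time.
    public static void main(String[] args) {
        check(validUsername("alice_01"), "accepts a normal name");
        check(!validUsername(""), "rejects an empty name");
        check(!validUsername("ab"), "rejects a too-short name");
        check(!validUsername("bad name!"), "rejects illegal characters");
        System.out.println("all unit tests passed");
    }

    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError("failed: " + what);
    }
}
```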

TEST PLAN:

A number of activities must be performed for testing software. Testing starts with a test plan. The test plan identifies all testing-related activities that need to be performed, along with the schedule and guidelines for testing. The plan also specifies the levels of testing that need to be done by identifying the different units. For each unit specified in the plan, test cases are first produced and then reports; these reports are analyzed.

The test plan is a general document for the entire project, which defines the scope, the approach to be taken, and the personnel responsible for the different testing activities. The inputs for forming the test plan are:

1. Project plan

2. Requirements document

3. System design

White Box Testing


White box testing mainly focuses on the internal logic of the product. One part is taken at a time and tested thoroughly at the statement level to find as many errors as possible. Loops are also constructed so that each part is exercised over a range of values; that is, the part is executed at its boundary values and within its bounds for the purpose of testing.

White Box Testing in this project: I tested every piece of code step by step, taking care that every statement in the code is executed at least once. I generated a list of test cases with sample data, which is used to check all possible combinations of execution paths through the code at every module level.

Black Box Testing

This testing method considers a module as a single unit and checks the unit at its interface and its communication with other modules, rather than getting into details at the statement level. Here the module is treated as a black box that takes some input and generates output. The outputs for a given set of input combinations are forwarded to other modules.

Black Box Testing in this project: I tested each and every module by considering it as a unit. I prepared sets of input combinations and checked the outputs for those inputs. I also tested whether the communication from one module to another performs well or not.

Integration Testing

After unit testing we have to perform integration testing. The goal here is to see whether the modules can be integrated properly. This testing activity can be considered as testing the design, and hence the emphasis is on testing module interactions. It also helps to uncover the set of errors associated with interfacing. The input to this phase is the set of unit-tested modules.

Integration testing is classified into two types:

1. Top-Down Integration Testing.

2. Bottom-Up Integration Testing.

In Top-Down Integration Testing modules are integrated by

moving downward through the control hierarchy, beginning with the

main control module.

In Bottom-Up Integration Testing each submodule is tested separately and then the full system is tested.


Integration Testing in this project: In this project the main system is formed by integrating all the modules, which means I used Bottom-Up Integration Testing. When integrating the modules I checked whether the integration affects the working of any of the services, by giving different combinations of inputs with which the services ran perfectly before integration.

System Testing

Project testing is an important phase without which the system can't be released to the end users. It is aimed at ensuring that all the processes conform accurately to the specification.

System Testing in this project: Here the entire system has been tested against the requirements of the project, checking whether all requirements have been satisfied.

Alpha Testing

This refers to system testing that is carried out by the test team within the organization.

Beta Testing

This refers to the system testing that is performed by a select group

of friendly customers.

Acceptance Testing


An acceptance test is performed with realistic data from the client to demonstrate that the software is working satisfactorily. Testing here is focused on the external behavior of the system; the internal logic of the program is not emphasized.

Acceptance Testing in this project: In this project I collected some data that belongs to the University and tested whether the project works correctly.

CONCLUSION:

We have described an end-to-end probing technique which is capable of inferring the capacity bandwidth along an arbitrary set of path segments in the network, or across the portion of a path shared by a set of connections, and have presented results of simulations and preliminary Internet measurements of our techniques. The constructions we advocate are built in part upon packet-pair techniques, and the inferences we draw are accurate under a variety of simulated network conditions and are robust to network effects such as the presence of bursty cross-traffic. While the end-to-end probing constructions we proposed in this paper are geared towards a specific problem, we believe that there will be increasing interest in techniques which conduct remote probes of network-internal characteristics, including those across arbitrary subpaths or regions of the network. We anticipate that lightweight mechanisms to facilitate measurement of metrics of interest, such as capacity bandwidth, will see increasing use as emerging network-aware applications optimize their performance via intelligent utilization of network resources.



