Y9MC03031
INTRODUCTION
Page | 1
Availability of services in a networked system is a security concern that has received
enormous attention in recent years. Most research in this area is on designing and verifying
defense mechanisms against denial-of-service (DoS) attacks. A DoS attack is characterized by
malicious behavior that prevents the legitimate users of a network service from using that
service. There are two principal classes of these attacks: flooding attacks and logic attacks.
A flooding attack such as SYN flood, Smurf, or TFN2K sends an overwhelming number of
requests for a service offered by the victim. These requests deplete some key resources at the
victim so that the legitimate users' requests for the same service are denied. A resource may be the
capacity of a buffer, CPU time to process requests, the available bandwidth of a communication
channel, etc. The resources exhausted by a flooding attack revive when the attack flood stops. A
logic attack such as Ping-of-Death or Teardrop forges a fatal message that, once accepted and
processed by the victim's vulnerable software, leads to resource exhaustion at the victim. Unlike
flooding attacks, the effects of a logic attack persist after the attack until appropriate
remedial actions are taken. A logic attack can be thwarted by examining the contents of
received messages and discarding the malformed ones, since an attack
message differs from a legitimate one in its contents. In flooding attacks, on the contrary, such a
distinction is not possible, which makes defense against flooding attacks an arduous task.
This paper focuses solely on flooding attacks.
A large number of defenses have been devised against flooding attacks. According to [9], a
defense mechanism may be reactive or preventive. A reactive mechanism such as
pushback, traceback, or filtering endeavors to alleviate the impact of a flooding attack on the
victim by detecting the attack and responding to it. A preventive mechanism, on the other hand,
enables the victim to tolerate the attack without denying the service to legitimate users. This is
usually done by enforcing restrictive policies for resource consumption. One method for limiting
resource consumption is the use of client puzzles.
In general, reactive mechanisms suffer from scalability problems and the difficulty of attack
traffic identification. This is not the case in the client-puzzle approach, where the defender treats
all incoming requests alike and need not differentiate between attack and legitimate
requests. Upon receiving a request, the defender produces a puzzle and sends it to the requester.
Only if the puzzle is answered with a correct solution are the corresponding resources allocated. As solving
a puzzle is resource consuming, an attacker who intends to use up the defender's resources through
repeated requests is deterred from perpetrating the attack.
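The request-puzzle-verify exchange described above can be sketched with a hash-reversal puzzle of the kind commonly used in client-puzzle schemes. The paper does not fix a particular puzzle construction; the SHA-256 scheme, counter encoding, and difficulty parameter below are illustrative assumptions only:

```python
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def make_puzzle(difficulty: int) -> tuple[bytes, int]:
    """Defender: issue a fresh random nonce together with a difficulty level."""
    return os.urandom(16), difficulty

def solve_puzzle(nonce: bytes, difficulty: int) -> int:
    """Requester: brute-force a counter until the hash has enough leading zeros.
    Expected work grows as 2**difficulty, which is what deters the attacker."""
    x = 0
    while True:
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return x
        x += 1

def verify_solution(nonce: bytes, difficulty: int, x: int) -> bool:
    """Defender: a single hash suffices to check the claimed solution,
    so verification is far cheaper than solving."""
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

# One round of the exchange at a modest difficulty level.
nonce, k = make_puzzle(12)
solution = solve_puzzle(nonce, k)
assert verify_solution(nonce, k, solution)
```

The asymmetry between `solve_puzzle` and `verify_solution` is the point of the approach: the requester pays exponentially in the difficulty level, while the defender pays one hash per verification.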
Nonetheless, an attacker who knows the defender's possible actions and their corresponding
costs may rationally adapt his own actions to defeat a puzzle-based defense mechanism. For
example, if the defender produces difficult puzzles, the attacker may respond to them at random
with incorrect solutions. In this way, he may be able to exhaust the defender's resources engaged
in solution verification. If the defender produces simple puzzles, the mechanism is not effective,
in the sense that the attacker solves the puzzles and performs an intense attack. Moreover, even if
the defender enjoys efficient low-cost techniques for producing puzzles and verifying solutions,
he should deploy effective puzzles of minimum difficulty level, i.e., the optimum puzzles, to
provide the maximum quality of service for the legitimate users. Hence, the difficulty level of
puzzles should be accurately adjusted in a timely manner to preserve the effectiveness and
optimality of the mechanism. Although some mechanisms have attempted to adjust the difficulty level of
puzzles according to the victim's load, they are not based on a suitable formalism incorporating
the above trade-offs and, therefore, the effectiveness and optimality of those mechanisms have
remained unresolved.
The above issues indicate that a puzzle-based defense mechanism involves antagonistic elements
and, therefore, can be effectively studied using game theory. In this paper, it is shown that the
interactions between the attacker, who perpetrates a flooding attack, and the defender, who
counters the attack using a puzzle-based defense mechanism, can be modeled as a two-player
infinitely repeated game with discounting. The solution concept of perfect Nash equilibrium is
then applied to the game, leading to a description of the players' optimal strategies.
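The strategic tension described above can be illustrated with a toy single-shot game; the payoff numbers below are hypothetical and not taken from the paper. Notably, in this illustrative stage game no pure action profile survives both players' best responses, i.e., there is no pure-strategy Nash equilibrium; this kind of cyclic tension is one reason richer models, such as mixed strategies and repeated games with discounting, are needed:

```python
import itertools

# Hypothetical stage-game payoffs as (attacker, defender) pairs.
# Rows: attacker action; columns: defender action. Values are illustrative only.
payoffs = {
    ("attack",  "easy_puzzle"): (3, -3),   # attack succeeds cheaply
    ("attack",  "hard_puzzle"): (-1, -1),  # attack deterred, but users burdened
    ("refrain", "easy_puzzle"): (0, 2),    # service runs smoothly
    ("refrain", "hard_puzzle"): (0, 1),    # service runs, users work harder
}
attacker_actions = ["attack", "refrain"]
defender_actions = ["easy_puzzle", "hard_puzzle"]

def pure_nash_equilibria():
    """Enumerate profiles where neither player gains by a unilateral deviation."""
    equilibria = []
    for a, d in itertools.product(attacker_actions, defender_actions):
        ua, ud = payoffs[(a, d)]
        a_best = all(payoffs[(a2, d)][0] <= ua for a2 in attacker_actions)
        d_best = all(payoffs[(a, d2)][1] <= ud for d2 in defender_actions)
        if a_best and d_best:
            equilibria.append((a, d))
    return equilibria

# With these payoffs the list is empty: against easy puzzles the attacker
# prefers to attack, against an attack the defender prefers hard puzzles,
# against hard puzzles the attacker prefers to refrain, and so on in a cycle.
print(pure_nash_equilibria())
```

In the repeated-game setting the paper studies, history-dependent strategies and discounting break this cycle and support equilibria that a single-shot model cannot capture.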
This paper uses the concept of Nash equilibrium not only in a descriptive way but also in a
prescriptive one. In doing so, the difficulty level of puzzles, random number generators, and the
other parameters of a puzzle-based defense are adjusted so that the attacker's optimum strategy,
prescribed by the Nash equilibrium, does not lead to the exhaustion of the defender's resources. If
the defender takes his part in the Nash equilibrium prescription as his defense against flooding
attacks, the best thing for the attacker to do is to be in conformity with the prescription as well.
In this way, the defense mechanism is effective against the attack and provides the maximum
possible payoff for the defender. In other words, the defense mechanism is optimal. This notion
is applied to a series of increasingly sophisticated flooding attack scenarios, culminating in a
strategy for handling a distributed attack from an unknown number of sources. It is worth noting
that two-player game models can also be used in the case of distributed attacks, where the
attackers are modeled as a single player with the capabilities of the attack coalition.
In recent years, some research efforts have utilized game-based approaches to describe, but not to
design, puzzle-based defense mechanisms. Bencsath et al. have used the model of a single-shot
strategic-form game to find the defender's equilibrium strategy. As proposed in the relevant
literature, the defender may, at any time, choose his action according to what he knows about the
attacker's previous actions. Such history-dependent strategies cannot be modeled by a single-
shot strategic-form game. The problem of finding optimum puzzles is not addressed in
their research either.
In [30], Mahimkar and Shmatikov have used the logic ATL and the computation model ATS to
analyze previously proposed puzzle-based mechanisms. In ATL and ATS, the notions of players,
strategies, and actions are added to traditional branching-time temporal logic in such a way that
the requirements of a general open system can be specified and verified. Such an approach is
useful in verifying the desired properties of a given mechanism, but it can hardly be used in
design. With ATL and ATS, it can be checked whether the defender has a winning strategy
in the game, but the optimality of a given strategy cannot be decided.
The current paper employs the solution concepts of infinitely repeated games to find the
defender’s optimum history-dependent strategies against rational attackers. In this way, it
resolves the deficiencies stated above.
This paper proceeds as follows: Section 2 provides a model of networked systems. Section 3
identifies the games well suited to modeling the possible interactions between a defender and
an attacker in a flooding attack-defense scenario. Section 4 describes the game of the client-puzzle
approach in detail. Section 5 explains the technique of designing puzzle-based defense
mechanisms using game-theoretic solution concepts. Section 6 discusses the defense mechanisms
proposed in this paper and compares them with earlier puzzle-based defenses. It also outlines
future research in the game-theoretic study of the client-puzzle approach. Section 7 concludes
the paper. The two appendices give the proofs of the theorems stated in this paper.
Abstract:
In recent years, a number of puzzle-based defense mechanisms have been proposed against
flooding denial-of-service (DoS) attacks in networks. DoS attacks aim at the loss of, or a
reduction in, availability, which is one of the most important general security requirements in
computer networks. Nonetheless, these mechanisms have not been designed through formal
approaches, and thereby some important design issues such as effectiveness and optimality have
remained unresolved. A promising approach proposed to alleviate the problem of DoS attacks is
the use of client puzzles. This paper utilizes game theory to propose a series of optimal puzzle-based
strategies for handling increasingly sophisticated flooding attack scenarios. In doing so, the
solution concept of Nash equilibrium is used in a prescriptive way, where the defender takes his
part in the solution as an optimum defense against rational attackers. In the defense mechanisms
proposed in this paper, the defender adopts the Nash equilibrium prescription that brings him
the maximum possible repeated-game payoff while preventing the attack. In this way, the
defense mechanism is optimal. This study culminates in a strategy for handling
distributed attacks from an unknown number of sources.
Denial-of-service attack (DoS attack):
A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is
an attempt to make a computer resource unavailable to its intended users. Although the means to
carry out, motives for, and targets of a DoS attack may vary, it generally consists of the
concerted efforts of a person or people to prevent an Internet site or service from functioning
efficiently or at all, temporarily or indefinitely.
ORGANIZATION PROFILE
Today, enterprises globally are looking for service providers who can bring value to the
relationship in terms of innovation and creativity, who are committed to delivering quality within
schedule and budget, and who have business models that support the fast changes in global
economic scenarios. To achieve this, a service provider should be passionate about its own
business and be highly creative, customer-centric, and innovative, creating value for its
customers, employees, and shareholders. Seeback Software Systems offers all this and
more.
Seeback Software Systems is a leading Software Solutions and Services Provider in the global
market, providing Business Solutions and High-End Technology-based services to its customer
base in the USA, Europe, the Nordic region, and Asia with on-site, off-site, and off-shore development models.
With a corporate history of more than 8 years, Seeback Software Systems has delivered many large-scale,
enterprise-class solutions in the areas of E-Business, Knowledge Management, Business
Intelligence, etc., using cutting-edge technologies and re-usable frameworks.
The Seeback Software Systems team consists of professionals with proven expertise and skills in
building Enterprise Level Architectures using cutting-edge technologies like J2EE, CORBA, and
Microsoft .NET. Seeback Software Systems has perfected the art of Global Delivery with a 24x7
Virtual Development Life Cycle, with teams working on-site, off-site, and off-shore
in different time zones on multiple continents. The Seeback Software Systems team
works at high productivity levels by leveraging its expertise in component development
methodologies and its in-house Component Knowledge Warehouse (CKW) of various re-usable
functionalities.
Our vision is to become a globally recognized and respected IT Solution Provider by delivering quality
Software Solutions, Services, and Products that enhance the business value of IT for our global
customers.
We have combined the following to achieve our Corporate Vision:
To continuously achieve high levels of Customer Satisfaction
To create an environment where every member of Seeback Software Systems strives
towards success through Innovation, Creativity, and Knowledge-Driven Practices.
To create Stock Holder Value through continuous, predictable overall growth by de-risking
the business models.
To strive for excellence in every facet of the Organization by delivering quality through
established processes and methodologies.
To continuously build expertise in cutting-edge technologies and build tools and systems
to enhance the productivity of the team.
Seeback Software Systems offers a complete range of innovative, integrated e-business solutions
designed to meet the specific needs of industries worldwide. Our competence spans building
customized solutions and implementing industry-standard packages. Seeback Software Systems
has domain experts who work closely with the technology team to deliver value-added solutions.
Our energies are focused mainly on the following business domains:
HealthCare
Retail & Distribution
FBIS
Utilities
ABOUT THE SYSTEM
Existing System:
In the existing system, a number of puzzle-based defense mechanisms have been proposed
against flooding denial-of-service (DoS) attacks in networks.
These mechanisms have not been designed through formal approaches, and thereby some
important design issues such as effectiveness and optimality have remained unresolved.
Reactive mechanisms endeavor to alleviate the impact of a flooding attack on the victim
by detecting the attack and responding to it.
Disadvantages Of Existing System:
The effectiveness and optimality of those mechanisms have remained unresolved.
Reactive mechanisms suffer from scalability problems and the difficulty of attack traffic
identification.
The existing system is not secure.
Flooding attacks can cause failures in the computer system.
Proposed System:
This paper utilizes game theory to propose a number of puzzle-based defenses against
flooding attacks.
Four defense mechanisms are proposed based on Nash equilibrium.
PDM1 is derived from the open-loop solution concept, in which the defender chooses his
actions regardless of what has happened in the game history. This mechanism is applicable in
defeating single-source and distributed attacks, but it cannot support the higher
payoffs that are feasible in the game.
PDM2 resolves this by using closed-loop solution concepts, but it can only defeat a
single-source attack.
PDM3 extends PDM2 and deals with distributed attacks. This defense is based on the
assumption that the defender knows the size of the attack coalition.
Finally, PDM4 is the ultimate defense mechanism, in which the size of the
attack coalition is assumed unknown.
The mechanisms proposed in the current paper are based on a number of assumptions. For
example, it is assumed that N(pp + vp) ≤ 1. In other words, the defender should be able to
produce the puzzles and verify the solutions in an efficient way.
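The assumption N(pp + vp) ≤ 1 can be read as a capacity check: if pp and vp denote the fractions of the defender's per-period capacity spent producing one puzzle and verifying one solution, then serving N requests must not exhaust that capacity. A minimal sketch, with illustrative names and numbers:

```python
def defender_can_cope(n_requests: int, puzzle_cost: float, verify_cost: float) -> bool:
    """Check the assumption N * (pp + vp) <= 1, with both costs expressed
    as fractions of the defender's per-period processing capacity."""
    return n_requests * (puzzle_cost + verify_cost) <= 1.0

# e.g. 1000 requests per period, puzzle production at 0.04% and solution
# verification at 0.05% of capacity each: total load 0.9, within budget.
print(defender_can_cope(1000, 0.0004, 0.0005))  # True
# With costlier operations the same request rate exceeds capacity (load 1.2).
print(defender_can_cope(1000, 0.0006, 0.0006))  # False
```

The check makes the qualitative statement precise: the defense only makes sense when puzzle production and verification are cheap relative to the request rate the defender must absorb.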
Advantages:
The proposed system handles increasingly sophisticated flooding attacks better than the existing one.
The effectiveness and optimality of the proposed mechanisms are resolved.
The estimations made by a reactive mechanism can be used in tuning the mechanisms
proposed in this paper.
Scalability and security are increased.
We have proposed a distributed framework called DefCOM that existing defense nodes
can join to achieve a cooperative response to DDoS attacks.
In the defense mechanisms proposed here, the defender takes the role of the public
randomizing device.
SYSTEM ANALYSIS
Requirements analysis is done in order to understand the problem that the software
system is to solve. For example, the problem could be automating an existing manual process,
developing a completely new automated system, or a combination of the two. For large systems
which have a large number of features and need to perform many different tasks,
understanding the requirements of the system is a major task. The emphasis in requirements
analysis is on identifying what is needed from the system, not how the system will achieve its
goals. This task is complicated by the fact that there are often at least two parties involved in
software development, a client and a developer. The developer usually does not understand the
client's problem domain, and the client often does not understand the issues involved in software
systems. This causes a communication gap, which has to be adequately bridged during
requirements analysis.
In most software projects, the requirements phase ends with a document describing all the
requirements. In other words, the goal of the requirement specification phase is to produce the
software requirement specification document. The person responsible for requirements
analysis is often called the analyst. There are two major activities in this phase: problem
understanding (analysis) and requirement specification. In problem analysis, the analyst has to
understand the problem and its context. Such analysis typically requires a thorough
understanding of the existing system, the parts of which must be automated.
Once the problem is analyzed and the essentials understood, the requirements must be specified
in the requirement specification document. For requirement specification in the form of a
document, some specification language has to be selected (for example: English, regular
expressions, tables, or a combination of these). The requirements document must specify all
functional and performance requirements; the formats of inputs and outputs; any required
standards; and all design constraints that exist due to political, economic, environmental, and
security reasons. The phase ends with validation of the requirements specified in the document. The
basic purpose of validation is to make sure that the requirements specified in the document
actually reflect the actual requirements or needs, and that all requirements are specified.
Validation is often done through a requirements review, in which a group of people, including
representatives of the client, critically review the requirements specification.
Software Requirement or Role of Software Requirement Specification (SRS)
IEEE (Institute of Electrical and Electronics Engineers) defines a requirement as:
i. A condition or capability needed by a user to solve a problem or achieve an objective;
ii. A condition or capability that must be met or possessed by a system to satisfy a contract,
standard, specification, or other formally imposed document.
Note that in software requirements we are dealing with the requirements of the proposed system,
that is, the capabilities that the system, which is yet to be developed, should have. It is because we
are specifying a system that does not exist in any form that the problem of
requirements becomes complicated. Regardless of how the requirements phase proceeds, the
Software Requirement Specification (SRS) is a document that completely describes what the
proposed software should do without describing how the system will do it. The basic goal of the
requirements phase is to produce the SRS, which describes
the complete external behavior of the proposed software.
Identification Of Need (Problem Definition):
A network consists of two principal classes of elements: active entities and resources. An active
entity is an abstraction of a program, process, or set of processes acting on behalf of human
beings. In this sense, an active entity is capable of choosing different actions at different
times. A resource may be interpreted as the capacity of a temporary buffer or a long-term
memory, CPU time to process requests, the throughput of the data bus in clients' hardware facilities,
the bandwidth of a communication channel, and so forth. When an active entity performs an
action, it engages a number of resources.
Each resource is associated with a variable of a certain domain, which represents the currently
available amount of that resource. At any time, the state of the network is characterized by the
values taken by the resource variables. At a state, each active entity has a nonempty finite set of
actions available to him, his action space at that state, from which he can choose one action. If
there are n active entities, the chosen actions can be described by an n-tuple, called an action
profile. When an action profile is picked, a state transition takes place. An execution of a
networked system begins with an initial state and proceeds by traversing a number of states
according to the action profiles chosen at those states. In the case of flooding attacks, the set of
active entities comprises legitimate users, attackers, and the defender.
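As an informal sketch of this model, the state can be represented as a vector of resource variables and a state transition as the combined effect of an action profile. All resource names, action names, and magnitudes below are illustrative assumptions, not taken from the paper:

```python
# The network state: current available amount of each resource variable.
state = {"buffer_slots": 100, "cpu_ms": 1000, "bandwidth_kb": 5000}

# Effect (delta) of each available action on the resource variables.
actions = {
    "legitimate_request": {"buffer_slots": -1, "cpu_ms": -5, "bandwidth_kb": -10},
    "attack_flood":       {"buffer_slots": -20, "cpu_ms": -100, "bandwidth_kb": -500},
    "serve_and_release":  {"buffer_slots": +1, "cpu_ms": 0, "bandwidth_kb": 0},
    "idle":               {"buffer_slots": 0, "cpu_ms": 0, "bandwidth_kb": 0},
}

def transition(state, action_profile):
    """Apply the n-tuple of chosen actions to the current state.
    Resource variables are clamped so they never go below zero."""
    new_state = dict(state)
    for action in action_profile:
        for resource, delta in actions[action].items():
            new_state[resource] = max(0, new_state[resource] + delta)
    return new_state

# One step of an execution: a legitimate user requests, an attacker floods,
# and the defender completes a previous request, releasing a buffer slot.
profile = ("legitimate_request", "attack_flood", "serve_and_release")
print(transition(state, profile))
```

An execution of the networked system is then just the sequence of states obtained by applying `transition` repeatedly from an initial state, one action profile per step.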
A defense mechanism against flooding attacks should enforce those executions in which the
legitimate users’ requests are successfully served. The formal model of a network is given below.
Our Work:
Preventive mechanisms against flooding attacks can be effectively studied through game theory.
This is mainly owing to the several trade-offs existing in a flooding attack-defense scenario. For
an attacker, there is a trade-off between the severity of his attack and the amount of resources he
uses to carry it out; the more damage an attacker intends to cause, the more resources he
must spend. For a defender, on the other hand, there is a trade-off between the effectiveness of
his defense and the quality of service he provides for legitimate users; the more difficult it
becomes to exhaust the defender's resources, the more workload is imposed on legitimate users,
and hence the lower the quality of service they receive.
A trade-off also exists between the effectiveness of the defense and the amount of resources a
defender expends. Another reason for using game theory in designing flooding prevention
mechanisms is that the underlying assumptions of game theory hold in a network. The main
assumption is that the players are rational, i.e., their planned actions at any situation and point in
time must actually be optimal at that time and in that situation, given their beliefs. This
assumption holds in a network, where the players are active entities created and controlled by
human beings.
Therefore, a defense mechanism that implements the defender's strategy obtained from a
game-theoretic approach assures its designer that the best possible sequence of actions is performed
against a rational attacker. This is the case provided that all the factors affecting the desirability of an
action profile for a player are reflected by his payoff function. In what follows, some
fundamental concepts of game theory such as history and strategy are defined in terms of the
network model in Definition 1. The game model of a flooding attack-defense scenario is then
extracted.
Preliminary Investigation:
In this article, we seek to address a simple question: “How prevalent are denial-of-service attacks
in the Internet?” Our motivation is to quantitatively understand the nature of the current threat as
well as to enable longer-term analyses of trends and recurring patterns of attacks. We present a
new technique, called “backscatter analysis,” that provides a conservative estimate
of worldwide denial-of-service activity. We use this approach on 22 traces (each covering a week
or more) gathered over three years from 2001 through 2004. Across this corpus we quantitatively
assess the number, duration, and focus of attacks, and qualitatively characterize their behavior. In
total, we observed over 68,000 attacks directed at over 34,000 distinct victim IP addresses,
ranging from well-known e-commerce companies such as Amazon and Hotmail to small foreign
ISPs and dial-up connections. We believe our technique is the first to provide quantitative
estimates of Internet-wide denial-of-service activity and that this article describes the most
comprehensive public measurements of such activity to date.
Launching a denial of service (DoS) attack is trivial, but detection and response are a
painfully slow and often manual process. Automatic classification of attacks as single- or
multi-source can help focus a response, but current packet-header-based approaches are
susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on
header content and novel techniques such as transient ramp-up behavior and spectral analysis.
Although headers are easily forged, we show that characteristics of attack ramp-up and attack
spectrum are more difficult to spoof. To evaluate our framework, we monitored access links of a
regional ISP, detecting 80 live attacks. Header analysis identified the number of attackers in 67
attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis.
We validate our results through monitoring at a second site, controlled experiments, and
simulation. We use experiments and simulation to understand the underlying reasons for the
characteristics observed. In addition to helping understand attack dynamics, classification
mechanisms such as ours are important for the development of realistic models of DoS traffic,
can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to
estimate the level of DoS activity on the Internet.
This paper analyzes a network-based denial of service attack for IP (Internet Protocol) based
networks. It is popularly called SYN flooding. It works by an attacker sending many TCP
(Transmission Control Protocol) connection requests with spoofed source addresses to a victim's
machine. Each request causes the targeted host to instantiate data structures out of a limited pool
of resources. Once the target host's resources are exhausted, no more incoming TCP connections
can be established, thus denying further legitimate access. The paper contributes a detailed
analysis of the SYN flooding attack and a discussion of existing and proposed countermeasures.
Furthermore, we introduce a new solution approach, explain its design, and evaluate its
performance. Our approach offers protection against SYN flooding for all hosts connected to the
same local area network, independent of their operating system or networking stack
implementation. It is highly portable, configurable, extensible, and requires neither special
hardware, nor modifications in routers or protected end systems.
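One well-known stateless countermeasure in this spirit is the SYN-cookie idea: instead of allocating a data structure for every SYN, the server encodes the connection parameters into the initial sequence number with a keyed hash and only commits resources when a client echoes a valid value in the final ACK. The sketch below is a simplified illustration of that principle, not the specific solution proposed in the cited paper; real SYN cookies also encode MSS bits and follow a fixed bit layout:

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(16)  # server-side key; regenerated here for illustration

def syn_cookie(src_ip: str, src_port: int, dst_port: int, t: int) -> int:
    """Derive a 32-bit initial sequence number from the connection tuple and
    a coarse time slot, so no per-connection state is kept on receiving a SYN."""
    msg = f"{src_ip}|{src_port}|{dst_port}|{t}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_ack(src_ip, src_port, dst_port, ack_seq, now, window=2):
    """On the final ACK, recompute the cookie for recent time slots; only a
    client that actually completed the handshake can echo a valid value, so
    spoofed SYNs from a flood never consume connection state."""
    return any(
        syn_cookie(src_ip, src_port, dst_port, t) == (ack_seq - 1) % 2**32
        for t in range(now - window, now + 1)
    )

# A legitimate handshake: the client acknowledges cookie + 1, as TCP requires.
slot = int(time.time()) // 60
cookie = syn_cookie("203.0.113.9", 51234, 80, slot)
assert check_ack("203.0.113.9", 51234, 80, (cookie + 1) % 2**32, slot)
```

The key property is the one the surrounding text asks for: the half-open-connection resource pool can no longer be exhausted, because nothing is allocated until the third handshake step proves the source address is genuine.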
Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of
both the attacks and the defense approaches is overwhelming. This paper presents two
taxonomies for classifying attacks and defenses, and thus provides researchers with a better
understanding of the problem and the current solution space. The attack classification criteria
were selected to highlight commonalities and important features of attack strategies that define
challenges and dictate the design of countermeasures. The defense taxonomy classifies the body
of existing DDoS defenses based on their design decisions; it then shows how these decisions
dictate the advantages and deficiencies of proposed solutions.
Distributed denial of service (DDoS) attacks continue to plague the Internet. Defense against
these attacks is complicated by spoofed source IP addresses, which make it difficult to determine
a packet's true origin. We propose Pi (short for path identifier), a new packet marking approach
in which a path fingerprint is embedded in each packet, enabling a victim to identify packets
traversing the same paths through the Internet on a per packet basis, regardless of source IP
address spoofing. Pi features many unique properties. It is a per-packet deterministic mechanism:
each packet traveling along the same path carries the same identifier. This allows the victim to
take a proactive role in defending against a DDoS attack by using the Pi mark to filter out
packets matching the attackers' identifiers on a per packet basis. The Pi scheme performs well
under large-scale DDoS attacks consisting of thousands of attackers, and is effective even when
only half the routers in the Internet participate in packet marking. Pi marking and filtering are
both extremely lightweight and require negligible state. We use traceroute maps of real Internet
topologies (e.g. CAIDA's Skitter (2000) and Burch and Cheswick's Internet Map (1999, 2002))
to simulate DDoS attacks and validate our design.
Modules Description:
1. THE CLIENT-PUZZLE APPROACH
2. Open-Loop Solutions
2.1 PDM1 (History).
3. Closed-Loop Solutions
3.1 PDM2 (single-source attack).
4. Considerations for Distributed Attacks
4.1 PDM3 (known coalition size).
4.2 PDM4 (unknown coalition size)
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a
very general plan for the project and some cost estimates. During system analysis, the feasibility
study of the proposed system is carried out. This is to ensure that the proposed system is
not a burden to the company. For feasibility analysis, some understanding of the major
requirements for the system is essential.
Three key considerations involved in the feasibility analysis are:
Economical Feasibility
Technical Feasibility
Operational Feasibility
Economical Feasibility
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited. The expenditures must be justified. The developed system is well
within the budget, and this was achieved because most of the technologies used are freely
available. Only the customized products had to be purchased.
Technical Feasibility
This study is carried out to check the technical feasibility, that is, the technical requirements of
the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed system
must have modest requirements, as only minimal or no changes are required for implementing
this system.
Operational Feasibility
This aspect of the study checks the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened by
the system; instead, he must accept it as a necessity. The level of acceptance by the users solely
depends on the methods that are employed to educate the user about the system and to make him
familiar with it. His level of confidence must be raised so that he is also able to make some
constructive criticism, which is welcomed, as he is the final user of the system.
Functional Requirements
Functional requirements specify which outputs should be produced from the given inputs.
They describe the relationship between the input and output of the system. For each
functional requirement, a detailed description of all data inputs and their sources, along with the
range of valid inputs, must be specified.
All the operations to be performed on the input data to obtain the output should be
specified.
The system is used by four types of users, namely the Administrator, Project Manager,
Developer, and Customer. Each user has his own role, and the roles are separated by
security controls.
The system is said to be reliable because the entire system was built using Java, which is a
most robust language. Reliability refers to the standards of the system.
The system is highly functional and good in performance. Using a minimal
set of variables and minimal control structures dynamically increases the
performance of the system.
The system is supportable on different platforms and a wide range of machines. The
Java code used in this project is flexible and offers platform
independence. It also adds support for a wide range of mobile phones that support the
CLDC platform.
The system would be implemented in a networked and mobile based WAP environment.
The entire system is packaged into a single package.
No legal issues apply to this project, as such rights are not applicable to work done for
academic purposes. All legal rights remain the sole property of the organization.
SOFTWARE ENGINEERING PARADIGM APPLIED
Design Specification
Design of software involves conceiving planning out and specifying the externally observable
characteristics of the software product. We have data design, architectural design and user
interface design in the design process. These are explained in the following section. The goals of
design process it to provide a blue print for implementation, testing, and maintenance activities.
SDLC Methodologies
This document plays a vital role in the software development life cycle (SDLC), as it describes
the complete requirements of the system. It is meant for use by the developers and will be the
basis during the testing phase. Any changes made to the requirements in the future will have to
go through a formal change-approval process.
Spiral Model
The spiral model was defined by Barry Boehm in his 1988 article, “A Spiral Model of Software
Development and Enhancement.” It was not the first model to discuss iterative development, but
it was the first to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase
starts with a design goal and ends with a client reviewing the progress thus far. Analysis and
engineering efforts are applied at each phase of the project, with an eye toward the end goal of
the project.
The steps for Spiral Model can be generalized as follows:
The new system requirements are defined in as much detail as possible. This usually involves
interviewing a number of users representing all the external and internal users and other aspects
of the existing system.
A preliminary design is created for the new system.
A first prototype of the new system is constructed from the preliminary design. This is
usually a scaled-down system, and represents an approximation of the characteristics of
the final product.
A second prototype is evolved by a fourfold procedure:
i. Evaluating the first prototype in terms of its strengths, weakness, and risks.
ii. Defining the requirements of the second prototype.
iii. Planning and designing the second prototype.
iv. Constructing and testing the second prototype.
At the customer's option, the entire project can be aborted if the risk is deemed too great.
Risk factors might involve development cost overruns, operating-cost miscalculations, or
any other factor that could, in the customer’s judgment, result in a less-than-satisfactory
final product.
The existing prototype is evaluated in the same manner as was the previous prototype,
and if necessary, another prototype is developed from it according to the fourfold
procedure outlined above.
The preceding steps are iterated until the customer is satisfied that the refined prototype
represents the final product desired.
The final system is constructed, based on the refined prototype.
The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a
continuing basis to prevent large-scale failures and to minimize downtime.
The following diagram shows how the spiral model works:
Fig 1.0-Spiral Model
Advantages:
Estimates (e.g., budget and schedule) become more realistic as work progresses, because
important issues are discovered earlier.
It is better able to cope with the changes that software development generally entails.
Spiral Model Description
The development spiral consists of four quadrants, as shown in the figure above:
Quadrant 1: Determine objectives, alternatives, and constraints.
Quadrant 2: Evaluate alternatives, identify, resolve risks.
Quadrant 3: Develop, verify, next-level product.
Quadrant 4: Plan next phases.
Although the spiral, as depicted, is oriented toward software development, the concept is equally
applicable to systems, hardware, and training, for example. To better understand the scope of
each spiral development quadrant, let’s briefly address each one.
Quadrant 1: Determine Objectives, Alternatives, and Constraints
Activities performed in this quadrant include:
Establish an understanding of the system or product objectives—namely performance,
functionality, and ability to accommodate change.
Investigate implementation alternatives—namely design, reuse, procure, and procure/modify.
Investigate constraints imposed on the alternatives—namely technology, cost, schedule, support,
and risk. Once the system or product’s objectives, alternatives, and constraints are understood,
Quadrant 2 (Evaluate alternatives, identify, and resolve risks) is performed.
Quadrant 2: Evaluate Alternatives, Identify, Resolve Risks
Engineering activities performed in this quadrant select an alternative approach that best satisfies
technical, technology, cost, schedule, support, and risk constraints. The focus here is on risk
mitigation. Each alternative is investigated and prototyped to reduce the risk associated with the
development decisions. Boehm describes these activities as follows:
This may involve prototyping, simulation, benchmarking, reference checking, administering user
questionnaires, analytic modeling, or combinations of these and other risk resolution techniques.
The outcome of the evaluation determines the next course of action. If critical operational and/or
technical issues (COIs/CTIs) such as performance and interoperability (i.e., external and internal)
risks remain, more detailed prototyping may need to be added before progressing to the next
quadrant. Dr. Boehm notes that if the alternative chosen is “operationally useful and robust
enough to serve as a low-risk base for future product evolution, the subsequent risk-driven steps
would be the evolving series of evolutionary prototypes going toward the right (hand side of the
graphic); the option of writing specifications would be addressed but not exercised.” This brings
us to Quadrant 3.
Quadrant 3: Develop, Verify, Next-Level Product
If a determination is made that the previous prototyping efforts have resolved the COIs/CTIs,
activities to develop, verify, next-level product are performed. As a result, the basic “waterfall”
approach may be employed—meaning concept of operations, design, development, integration,
and test of the next system or product iteration. If appropriate, incremental development
approaches may also be applicable.
Quadrant 4: Plan Next Phases
The spiral development model has one characteristic that is common to all models—the need for
advanced technical planning and multidisciplinary reviews at critical staging or control points.
Each cycle of the model culminates with a technical review that assesses the status, progress,
maturity, merits, and risks of the development efforts to date; resolves critical operational and/or
technical issues (COIs/CTIs); and reviews plans and identifies COIs/CTIs to be resolved for the
next iteration of the spiral.
Subsequent implementations of the spiral may involve lower level spirals that follow the same
quadrant paths and decision considerations.
SYSTEM SPECIFICATION
Hardware Configuration
Hard disk : 40 GB or above
RAM : 512 MB or above
Processor : Pentium IV or above
Software Configuration
Front End Technology : Swing in Java
Java Version : JDK 1.6.0, JRE 1.6.0
Supporting Database : Oracle
Operating System : Windows XP or Windows 7
MODULES DESCRIPTION
THE CLIENT-PUZZLE APPROACH
As stated above, a flooding attack-defense scenario is modeled as a two-player infinitely
repeated game. Therefore, in the stage game played at any period t, the defender and the
attacker, i.e., the active entities e1, e2 ∈ E, choose from their action spaces A(e1) and A(e2) and
cause the game to arrive at period t + 1. In the client-puzzle approach, the set of possible actions
for the defender is A1 = {P1, P2, ..., Pn}, and the one for the attacker is A2 = {QT, RA, CA}. The
action Pi, 1 ≤ i ≤ n, stands for issuing a puzzle of difficulty level i. It is assumed that the puzzle of
level i is less difficult than the one of level j if i < j. The actions QT, RA, and CA stand for
quitting the protocol (no answer), answering the puzzle randomly, and answering the puzzle
correctly, respectively. It is also assumed that a legitimate user always solves the puzzles and
returns correct answers. At a period, the attacker knows the action chosen by the defender at that
period. Thus, the stage game is indeed an extensive-form game. In order to convert this game
into its equivalent strategic-form game, it is sufficient to consider the action spaces as
A(e1) = A1 and A(e2) = A2^n, where A2^n is the Cartesian product of A2 with itself n times. For
example, if the defender can choose between P1 and P2, one of the possible actions for the
attacker is (CA, QT), which means “choose CA when the defender chooses P1, and QT when he
chooses P2.” It is worth noting that a player’s strategy for the repeated game is obtained from the
functions stated in Definition 4, where a player chooses his action according to the history of
events he knows.
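The conversion to strategic form can be sketched in code. The class and method names below are illustrative, not from the report: the attacker's strategic-form action space is the Cartesian product A2^n, so with n = 2 puzzle levels he has 3^2 = 9 actions such as (CA, QT).

```java
// Sketch: enumerating the attacker's strategic-form action space A2^n
// for the client-puzzle stage game. Names are illustrative assumptions.
import java.util.ArrayList;
import java.util.List;

public class ActionSpace {
    static final String[] A2 = {"QT", "RA", "CA"}; // attacker's base actions

    // Build the Cartesian product A2^n: one reply b_i per defender puzzle P_i.
    static List<String[]> buildAttackerActions(int n) {
        List<String[]> result = new ArrayList<>();
        result.add(new String[0]);
        for (int i = 0; i < n; i++) {
            List<String[]> next = new ArrayList<>();
            for (String[] prefix : result) {
                for (String b : A2) {
                    String[] extended = new String[prefix.length + 1];
                    System.arraycopy(prefix, 0, extended, 0, prefix.length);
                    extended[prefix.length] = b;
                    next.add(extended);
                }
            }
            result = next;
        }
        return result;
    }

    public static void main(String[] args) {
        // With two puzzle levels (P1, P2), the attacker has 3^2 = 9 actions,
        // e.g., (CA, QT): "play CA against P1 and QT against P2".
        List<String[]> actions = buildAttackerActions(2);
        System.out.println(actions.size()); // 9
    }
}
```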
Module 1:
The model of the stage game is completed by the players’ payoff functions. The underlying
notion of a puzzle-based defense is that the workload of the attacker should be higher than that of
the defender [21]. In addition, the defender should care about the level of quality of service he
provides for legitimate users. Therefore, an action profile is more preferable for the defender if it
results in more cost to the attacker, less cost to the defender, and less cost to legitimate users.
Similarly, an action profile is more desirable for the attacker if it causes more cost to the
defender and less cost to the attacker. Hence, the players’ stage-game payoffs are obtained from
(3), where C is the cost function defined by (2), C(u, a) is the cost to a legitimate user when the
action profile a is chosen, and λ ∈ [0, 1] is the level of quality of service the defender is willing to
provide for legitimate users. As will be seen, a low quality of service is inevitable when the
attacker enjoys high capabilities.
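The payoff functions referred to as (3) do not survive in the text above. A sketch consistent with the stated preferences, assuming C is the cost function of (2), λ the quality-of-service level, and e1, e2, u the defender, attacker, and legitimate user, might read:

```latex
g_1(a) = C(e_2, a) - C(e_1, a) - \lambda\, C(u, a), \qquad
g_2(a) = C(e_1, a) - C(e_2, a)
```

so the defender gains when the attacker's cost rises, and loses with his own cost and, weighted by λ, the legitimate user's cost; this is a hedged reconstruction, not necessarily the report's exact equation (3).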
As stated in Definition 1, the function ρ determines the set of resources an active entity owns or
shares with the other entities. In studying a flooding attack-defense scenario, we confine our
attention to those resources engaged in the scenario. According to (1), a resource that is not
engaged in the scenario has no effect on the cost to the players. In the client-puzzle approach, the
defender engages two types of resources, one for producing puzzles and verifying solutions,
denoted by p, and the other for providing the requested service. The latter, denoted by m, is the
main resource the defender wishes to protect against flooding attacks. Therefore, ρ(e1) =
{p, m}. Similarly, for the attacker, ρ(e2) = {s}, where s is the resource he uses to solve a
puzzle. Finally, for a legitimate user, the active entity u, ρ(u) = {s}, in which s is the resource
he engages in solving a puzzle. Thus, by using (2), (3) is reduced to
where αr,a = D(a, r)/D0 for r ∈ R and a ∈ A = A1 × A2^n. An arbitrary action
profile is of the form a = (a1; b1, b2, ..., bn), where a1 ∈ A1 and bi ∈ A2 for 1 ≤ i ≤ n. In this
action profile, the attacker will play bi if the defender plays Pi. As bi ∈ {QT, RA, CA}, there are
only three classes of action profiles, listed in Table 1. The values αr,a are the same for the
elements of each class. In puzzle-based approaches, the main resource is allocated when a
requester returns the correct solution to the issued puzzle. The allocated amount of the main
resource is released after a bounded time, say, T. This is one of the features distinguishing a
flooding attack from a logic one. The reference distance of the main resource may be considered
as the number of requests that can be served by the main resource in a time period of length T.
The reference distance of the resource p can also be defined as T. By such a choice, αPPi and αVPi
would be the ratios of the times taken for producing a puzzle and verifying a puzzle solution of level
i to T. By adopting the same reference distance for s, αSPi would be the ratio of the time the
attacker or a legitimate user spends on solving a puzzle of level i to T. For any i < j, it is assumed
that αSPi < αSPj, αPPi ≤ αPPj, and αVPi ≤ αVPj.
In the case of distributed attacks, a flooding attack-defense scenario can be modeled as a two-
player game in which the attackers are modeled as a single player with the capabilities of the
attack coalition. More precisely, if there exist n machines in the attack coalition and the cost of
solving a puzzle on a single machine is αSP, the costs to the attacker and to a legitimate user in
solving this puzzle are αSP/n and αSP, respectively. Thus, the payoff functions in (4) can still be
used if αSP and λ are replaced by αSP/n and λn. Assume A1 = {P1, P2}. In such a case, the stage
game is represented by the bimatrix shown in Fig. 1. The top element of a cell in this bimatrix is
the defender’s payoff, while the bottom one is the attacker’s. These payoffs are obtained from (4)
using the corresponding values in Table 1. As stated in Section 3, the repeated game between the
defender and the attacker is of discounted payoffs.
Therefore, a discount factor δ ∈ (0, 1) is used as a weighting factor in the weighted sum of
payoffs. More precisely, the player i’s payoff for the repeated game, when the mixed strategy
profile σ = (σ1, σ2) is played, is defined by

υi(σ) = Eσ[ Σt≥0 δ^t gi(a^t) ].    (5)

It is also more convenient to transform the repeated-game payoffs to be on the same scale as the
stage-game payoffs. This is done by multiplying the discounted payoff in (5) by 1 − δ. Thus, the
player i’s average discounted payoff for the repeated game is

ῡi(σ) = (1 − δ) Eσ[ Σt≥0 δ^t gi(a^t) ].    (6)
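The average discounted payoff can be checked numerically. The minimal sketch below (illustrative names, truncating the infinite sum) shows that a constant stream of stage payoffs g has average discounted payoff exactly g, since (1 − δ) Σ δ^t g = g.

```java
// Sketch: average discounted payoff (1 - δ) Σ_t δ^t g_i(a^t),
// truncated after enough periods for the tail to be negligible.
public class DiscountedPayoff {
    // Average discounted value of a periodic stream of stage-game payoffs.
    static double averageDiscounted(double[] cycle, double delta, int periods) {
        double sum = 0.0, weight = 1.0; // weight holds δ^t
        for (int t = 0; t < periods; t++) {
            sum += weight * cycle[t % cycle.length];
            weight *= delta;
        }
        return (1 - delta) * sum;
    }

    public static void main(String[] args) {
        // A constant stream g, g, g, ... averages to g itself; here the
        // value g = -0.145 is taken from the PDM1 discussion below.
        double v = averageDiscounted(new double[]{-0.145}, 0.9, 10_000);
        System.out.println(Math.abs(v - (-0.145)) < 1e-9); // true
    }
}
```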
Another considerable issue in a repeated game is the feasibility of stage-game payoffs. As said,
the space of pure action profiles is A = A1 × A2^n. Therefore, the set g(A) = {(g1(a),
g2(a)) | a ∈ A} contains all the stage-game payoff vectors supported by pure action profiles. If
the defender and the attacker make their mixed stage-game strategies independently, the set of
possible stage-game payoff vectors would be a subset of the convex hull of g(A), i.e., a subset of

con(g(A)) = { Σk μk xk | μk ≥ 0, Σk μk = 1 },

where g(A) = {x1, x2, ..., xm}. On the other hand, if the two players correlate their actions, any
element of con(g(A)) can be supported as a payoff vector. In doing so, the players can
condition their actions on the value produced by a public randomizing device.
Indeed, the output of this device is a random variable ω with domain [0, 1], and then, an arbitrary
payoff vector Σk μk xk is supported by the player i’s mixed strategy that plays his part of the
action profile yielding xk whenever ω falls in the k-th subinterval of [0, 1], whose length is μk.
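The public randomizing device can be sketched as follows: given weights μ1, ..., μm and the device's output ω, each player selects the k-th action profile when ω falls in the k-th subinterval. The names here are illustrative assumptions, not from the report.

```java
// Sketch: a public randomizing device for correlated action profiles.
// Both players observe the same ω ∈ [0, 1) and pick the k-th profile
// when ω lands in the k-th subinterval of cumulative length μ_k.
public class PublicDevice {
    static int profileIndex(double[] mu, double omega) {
        double cumulative = 0.0;
        for (int k = 0; k < mu.length; k++) {
            cumulative += mu[k];
            if (omega < cumulative) return k;
        }
        return mu.length - 1; // guard against rounding when ω ≈ 1
    }

    public static void main(String[] args) {
        double[] mu = {0.2, 0.5, 0.3}; // weights of three action profiles
        System.out.println(profileIndex(mu, 0.10)); // 0
        System.out.println(profileIndex(mu, 0.65)); // 1
        System.out.println(profileIndex(mu, 0.95)); // 2
    }
}
```

Because ω is public, both players can condition on the same draw, which is what makes every point of the convex hull attainable.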
Defense strategies:
This section employs the solution concepts of infinitely repeated games with discounting to
design the optimum puzzle-based defense strategies against flooding attacks. In general, the
strategies prescribed by such solutions are divided into two categories: history independent (open
loop) and history dependent (closed loop). The defense strategies proposed in this section are
based on the concept of Nash equilibrium. For ease of reference, this concept is repeated
here. Let (σ1*, σ2*) be a Nash equilibrium for the two-player infinitely repeated game developed
in Section 4. Then, υ1(σ1*, σ2*) ≥ υ1(σ1, σ2*) for any σ1 ∈ Σ1, and υ2(σ1*, σ2*) ≥ υ2(σ1*, σ2)
for any σ2 ∈ Σ2, where
υi is the player i’s payoff function in (5). This means that any unilateral deviation from the
strategy profile stated by the Nash equilibrium brings no profit to its deviator.
The concept of Nash equilibrium is often used in a descriptive way, where it describes the
players’ optimum strategies in a game. In this sense, it makes predictions about the behaviors of
rational players. In this section, on the contrary, the concept of Nash equilibrium is employed in
a prescriptive way in which the defender picks out a specific Nash equilibrium and takes his part
in that profile. The attacker may know this, but the best thing for him to do is to be in conformity
with the selected equilibrium. If he chooses another strategy, he gains less profit (the attacker’s
payoff function, defined in (5) and (6), reflects the attacker’s profit from a flooding attack). In
the defense mechanisms proposed in this section, the defender adopts the Nash equilibrium
prescription that brings him the maximum possible repeated game payoff while preventing the
attack. In this way, the defense mechanism would be optimal.
Open-Loop Solutions:
In an open-loop strategy, the action profiles adopted at previous periods are not involved in a
player’s decision at the current period. More formally, in the repeated game of the client-puzzle
approach, σi: Hi^t → Δ(A(ei)) is an open-loop strategy for player i if, for every t ≥ 0 and all
hi^t, ĥi^t ∈ Hi^t, σi^t(hi^t) = σi^t(ĥi^t), where i = 1, 2, A(e1) = A1, and A(e2) = A2^n.
Module 2:
One of the open-loop solutions to an infinitely repeated game is to play any one of the stage-
game Nash equilibria at each period, regardless of what actually happened in the corresponding
history. In other words, let (σ1, σ2) be an open-loop strategy profile for the infinitely repeated
game such that σ1^t(h1^t) = α1^t and σ2^t(h2^t) = α2^t for all histories h1^t ∈ H1^t and
h2^t ∈ H2^t. If (α1^t, α2^t) is a stage-game Nash equilibrium for any t, then (σ1, σ2) is a
subgame-perfect equilibrium for the repeated game.
In a flooding attack-defense scenario, the defender may not perfectly know the actions taken by
the attacker at previous periods. Thus, adopting an open-loop strategy, as stated above, may be
the simplest way he can attain equilibrium. The following theorem identifies the stage-game
Nash equilibria for the game of the client-puzzle approach.
Fig. PDM1 data flow: the client selects the source file through a FileDialog, sends fixed-size
packets, and the server runs PDM1 (scan all files from the client).
In this mechanism, the random number generator and the puzzles are designed as follows:

Step 1. For a given desirability factor of quality of service 0 < λ < 1, choose P1 and P2 in such a
way that
αSP1 < αm < αSP2,
αVP < αm - αSP1,
αVP1 < αSP2, and
λ ≤ (αm - αSP1)/(αSP2 - αSP1).

Step 2. Choose α such that α ≤ 1/(N·αm), where N is the number of requests the attacker can
send in the time the defender requires to process a request using his main resource,
i.e., in T (it is assumed that N(αPP + αVP) ≤ 1).

Step 3. Create a random number generator that produces a random variable x with Pr(x = 0) = α
and Pr(x = 1) = 1 - α. Note that Fig. 2 only shows the core of a puzzle-based defense mechanism,
which chooses optimal difficulty levels. The other components can be the same as the ones in
known mechanisms, e.g.,
There are two other noteworthy issues in PDM1. In the case of distributed attacks, if a single
machine can produce N requests and there exist n machines in the attack coalition, the attacker is
modeled as the one capable of producing nN requests. A designer then runs Steps 1-3 stated
above using αSP/n and λn instead of αSP and λ. More precisely, for a given desirability factor
of quality of service λn, the puzzles should satisfy αSP1/n < αm < αSP2/n and λn = (αm -
αSP1/n)/(αSP2/n - αSP1/n). Moreover, if the defender should be able to process 1/αm legitimate
requests by his main resource, the defense mechanism considers the amount of the main resource
in such a way that the defender can process 2/αm requests, one half for the defense and the other
for the legitimate users. This extra resource may be allocated dynamically when the defender is
under attack, i.e., when the number of requests is more than the one assumed for the legitimate users.
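The PDM1 core above can be sketched as a small routine. The names and the numeric values of N and αm are illustrative assumptions; the bound on α follows Step 2.

```java
// Sketch of the PDM1 core (Steps 1-3): a biased random generator chooses
// between the simple puzzle P1 (x = 0, probability α) and the difficult
// puzzle P2 (x = 1, probability 1 - α), with α bounded by 1/(N·α_m).
import java.util.Random;

public class PDM1Core {
    // Largest admissible α for Step 2: α ≤ 1/(N·α_m), capped at 1.
    static double maxAlpha(int n, double alphaM) {
        return Math.min(1.0, 1.0 / (n * alphaM));
    }

    static int choosePuzzle(double alpha, Random rng) {
        return rng.nextDouble() < alpha ? 0 : 1; // 0 selects P1, 1 selects P2
    }

    public static void main(String[] args) {
        // Hypothetical numbers: the attacker can send N = 20 requests in
        // time T, and serving one request costs α_m = 0.2 of the main resource.
        double alpha = maxAlpha(20, 0.2);
        System.out.println(alpha); // 0.25
    }
}
```

The cap of 1.0 reflects that when the attacker is weak (small N·αm), the defender can afford to issue the simple puzzle every period.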
Closed-Loop Solutions:
In a fair open-loop solution, the defender’s maximum average payoff is -αPP - αVP - λαSP2. However,
there are many payoff vectors in the convex hull with greater payoffs for the defender. Thus,
here, a natural question arises: Is there a better fair solution to the game, which results in a
greater payoff to the defender? As proven in , in the games of perfect information, there is a large
subset of the convex hull whose payoff vectors can be supported by perfect Nash equilibrium
provided that suitable closed-loop strategies are adopted. This subset is denoted by V*, and its
elements are called strictly individually rational payoffs (SIRP). In the game of the client-puzzle
approach,

V* = {(υ1, υ2) ∈ con(g(A)) | υ1 > υ1*, υ2 > υ2*},

where A = A1 × A2^n, and (υ1*, υ2*) is the minmax point defined by

υ1* = min over α2 ∈ Δ(A2^n) of max over α1 ∈ Δ(A1) of g1(α1, α2),
υ2* = min over α1 ∈ Δ(A1) of max over α2 ∈ Δ(A2^n) of g2(α1, α2),

in which Δ(X) is the set of all probability distributions over X. Furthermore, the mixed strategy
profiles resulting in υ1* and υ2* are denoted by M^1 = (M1^1, M2^1) and M^2 = (M1^2, M2^2),
respectively. The strategy M1^2 is the player 1’s minmax strategy against the player 2. Similarly,
M2^1 is the player 2’s minmax strategy against the player 1.
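The minmax point can be illustrated with a small computation. The sketch below restricts attention to pure actions (the definition above ranges over mixed strategies, so this gives only an approximation of υ1*), and the payoff values are made up, not taken from Table 1.

```java
// Sketch: a minmax value over pure actions of a small payoff matrix.
// Rows are the defender's own actions; columns are the opponent's.
public class Minmax {
    // v1* ≈ min over opponent columns of (max over own rows of g1).
    static double pureMinmax(double[][] g1) {
        double best = Double.POSITIVE_INFINITY;
        for (int col = 0; col < g1[0].length; col++) {
            double maxRow = Double.NEGATIVE_INFINITY;
            for (double[] row : g1) maxRow = Math.max(maxRow, row[col]);
            best = Math.min(best, maxRow); // opponent picks the worst column
        }
        return best;
    }

    public static void main(String[] args) {
        // Illustrative defender payoffs g1(P_i, b_j) for two puzzle levels
        // and three attacker replies (hypothetical values).
        double[][] g1 = {
            {-0.10, -0.30, -0.50},
            {-0.20, -0.15, -0.40},
        };
        System.out.println(pureMinmax(g1)); // -0.4
    }
}
```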
As seen in Fig. 3, the defender’s maximum average payoff in PDM1, i.e., -αPP - αVP - λαSP2,
is -0.145, though many payoffs greater than -0.145 can be supported if the game is of perfect
information and suitable closed-loop strategies are adopted. The following theorem characterizes
the set of payoff vectors that can be supported by perfect Nash equilibrium in an infinitely
repeated game of observable actions and complete information where the payoffs are discounted.
This reflects those attack-defense circumstances in which the player involved in the defense
mechanism knows his opponent’s payoff function as well as the actions chosen by his opponent
at previous periods. It is worth noting that the puzzles can be designed in such a way that the
amounts of resources a machine uses to solve a puzzle are independent of the machine’s
processing power. Therefore, except for flooding attacks from an unknown number of sources, it
is reasonable to assume that the defender knows the attacker’s payoff function.
In the game of the client-puzzle approach, assume the puzzles are of two difficulty levels and
satisfy the conditions stated in Theorem 1. Then, (M1^2, M2^1) = (P2, RA) if λ < (αm - αSP1)/(αSP2 -
αSP1).
Assume the defender receives at most one request before discerning the decision made by the
attacker at the previous period. As P1 is very simple (it can be sending one specific bit to the
defender), this is a reasonable assumption. In addition, each full-length punishment phase in
Theorem 2 removes the attacker’s profit from a single deviation in Phase 1. Thus, the defender
can adopt the following closed loop strategy. Upon receiving the first request, he produces and
sends P1. When the second request is received, the defender checks if he knows the decision
made by the attacker at the first period. If the defender discerns any deviation in the first period,
he runs Phase 2 in Theorem 2. Otherwise, he issues P1 again. If the defense mechanism is in
Phase 1 and the defender receives the third request, he checks the attacker’s decisions at the
previous two periods. Now, he certainly knows the attacker’s decision at the first period, but he
may not know the attacker’s decision at the second period. If only one deviation is discerned in
the previous two periods, the defender runs Phase 2. If two deviations are discerned, he runs the
action prescribed by Phase 2 twice the number of times stated in Theorem 2. Otherwise, he goes
on by Phase 1. If the defense mechanism is in Phase 2 and a deviation concerning Phase 1 is
discerned, a full-length Phase 2 is considered at the end of the current punishment phase. When
the defense mechanism finishes the punishment phase, it returns to Phase 1, and then repeats the
actions stated above. In this way, the attacker gains nothing by deviating from the said closed-
loop strategy. In other words, this strategy makes an equilibrium. The defense mechanism
derived from the above game-theoretic approach (PDM2) is shown in Fig. 6. Note that the
random number generator and the puzzles used in PDM2 are derived as follows:
Step 1. For a given desirability factor of quality of service 0 < λ < 1, choose P1 and P2 in such a
way that
αSP1 < 1/N,
αSP1 < αm < αSP2,
αVP < αm - αSP1,
αVP1 < αSP2, and
λ ≤ (αm - αSP1)/(αSP2 - αSP1),
where N is the number of requests an attacker can send in the time the defender requires
to process a request using his main resource. The requirement αSP1 < 1/N states that
P1 is very simple, in such a way that the attacker can solve it in a time less than he needs
to produce a request.

Step 2. Choose α in such a way that the equilibrium strategy profile is α∘(P1, CA) ⊕ (1 - α)∘(P1, RA); this
necessitates α ≤ 1/(N·αm), where N is as above (it is assumed that N(αPP + αVP) ≤ 1).

Step 3. Create a random number generator that produces a random variable x with Pr(x = 0) = α
and Pr(x = 1) = 1 - α.
Since a legitimate user always solves the puzzles correctly, his action in Phase 1 may be
considered as a deviation. To avoid this, one can amend PDM2 to ignore a single deviation in a
time period of length αm·T and, collectively, 1/αm deviations in a time period of length T, where
T is the time the defender requires to process a request using his main resource. These parameters
are so adjusted that the defense mechanism remains in Phase 1 with a great probability when there
is no attack. For example, it can easily be shown that if αm = 0.2 and λ = 0.5, and the number of
requests produced by legitimate users in a time period of length T is of a Poisson distribution
with parameter 2.5, the defense mechanism remains in Phase 1 with a probability greater than
0.95 when there is no attack. Such an amendment can be implemented by the Check Deviation
blocks in Fig. 6. As stated in PDM1, in a time of length T, the defender should be able to process
double the number of legitimate users’ requests at that period using his main resource. Hence,
this amendment does not menace the fairness of PDM2.
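The phase bookkeeping described above (Phase 1 until deviations are discerned, one full-length punishment phase per deviation, then back to Phase 1) can be sketched as follows. The class and field names are illustrative, and the punishment length is taken as a given input from Theorem 2.

```java
// Sketch of PDM2's phase bookkeeping (illustrative, not the report's code):
// the mechanism stays in Phase 1 until deviations are discerned, serves a
// punishment phase per deviation, and then returns to Phase 1.
public class PDM2Phases {
    final int punishmentLength;   // periods per deviation (from Theorem 2)
    int remaining = 0;            // remaining punishment periods

    PDM2Phases(int punishmentLength) { this.punishmentLength = punishmentLength; }

    boolean inPunishmentPhase() { return remaining > 0; }

    // Called once per period with the number of newly discerned deviations.
    void step(int newDeviations) {
        remaining += newDeviations * punishmentLength; // full phase per deviation
        if (remaining > 0) remaining--;                // serve one punishment period
    }

    public static void main(String[] args) {
        PDM2Phases d = new PDM2Phases(3);
        d.step(0);                 // no deviation: stays in Phase 1
        System.out.println(d.inPunishmentPhase()); // false
        d.step(1);                 // one deviation: punishment begins
        System.out.println(d.inPunishmentPhase()); // true
        d.step(0); d.step(0);      // serve out the remaining periods
        System.out.println(d.inPunishmentPhase()); // false
    }
}
```

A deviation discerned while already in the punishment phase simply extends `remaining` by another full-length phase, matching the text's rule.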
Fig. 6. PDM2: the defense mechanism against flooding attacks derived from the closed-loop
solution concept of discounted infinitely repeated games. (a) The defense mechanism, where ℓ is
the number of punishment periods obtained from Theorem 2, n is the number of remaining
punishment periods, and x is the value of the random variable produced by the defender’s
random number generator and sent to the requester, i.e., the value of the public randomizing
device. In addition, τ1 is the maximum time the defender waits for the attacker’s response. (b) The
structure of Check Deviation A in (a), where CA(j) and WA(j) stand for the Correct Answer and
Wrong Answer to the puzzle numbered j. A correct answer here is the one prescribed by the
equilibrium strategy. Check Deviation B has the same structure as in (b) without the loop in
Check Deviation A.
Considerations for Distributed Attacks
PDM1 treats a distributed attack as a single-source attack, where the attackers are modeled as a
single attacker with the capabilities of the corresponding attack coalition. The same approach can
be adopted for closed-loop solutions, but some further issues should be considered there. In a
distributed attack, the requests come from different machines, and it is no longer reasonable to
assume that the defender receives only a small number of requests before receiving the correct or
random answer to an issued puzzle. Indeed, a large number of requests are produced by the
attack coalition, whereas a small proportion of them is of a single machine. Therefore, in the
time a machine is involved in computing the answer, the defender may receive a large number of
requests from the other machines in the coalition. Imitating PDM2, a possible solution to this
problem may be to postpone the transition from the normal to the punishment phase for a time
during which the defender can certainly discern the decisions made by the attacker. This time is
called the reprieve period. In a distributed attack, the defender may receive a large number of
requests during the reprieve period. Thus, if he uses simple puzzles in this period, the attacker
solves them and performs an intense attack. To avoid this, the normal-phase strategy profile
α∘(P1, CA) ⊕ (1 - α)∘(P1, RA) can be replaced by β∘(P2, RA) ⊕ (1 - β)∘(α∘(P1, CA) ⊕
(1 - α)∘(P1, RA)), in which some difficult puzzles are used in the normal phase. If the defender
should wait for m requests before discerning a possible deviation, i.e., playing his part in the
above strategy profile for m periods, the fairness condition implies that (1 - β)mαm ≤ 1, or β ≥ 1 -
1/(mαm). Clearly, β = 0 if m ≤ 1/αm. Note that the length of the reprieve period m is obtained from an
increasing function f(n), where n is the size of the attack coalition. Therefore, the following
defense mechanism (PDM3) is proposed against distributed attacks, in which it is assumed that
the defender should wait for a duration consisting of at most m requests before discerning a
possible deviation.
PDM3 (known coalition size):
Upon receiving a request in Phase 1, the defender runs a random number generator producing the
random variable x with Pr(x = 0) = (1 - α)(1 - β), Pr(x = 1) = α(1 - β), and Pr(x = 2) = β. Then,
he produces the puzzle according to the value of x and sends the puzzle and the value of x to the
requester (the value of x is considered as the output of the public randomizing device). As
in PDM2, the defender considers the action taken by the requester as a deviation if he receives no
response in the maximum waiting time calculated on the basis of the coalition size, or if he
receives a response that is not in conformity with the equilibrium prescription. If k ≥ 1 deviations
are discerned when the defense mechanism is in Phase 1, it goes to the punishment phase with
a length of k times the length identified in Theorem 2. If it is in the punishment phase and a
deviation of Phase 1 is discerned, a full-length Phase 2 is added to the current punishment phase.
When the defense mechanism finishes the punishment phase, it returns to Phase 1, and then, it is
repeated as above. The final remark is about estimating the size of the attack coalition. As seen, a
difficult puzzle is designed in such a way that αSP2 = (α*SP2/n)>αm, where α*SP2 is the cost of
solving the difficult puzzle in a single machine, and n is the number of machines in the attack
coalition. Therefore, if one assumes a fixed coalition size, say, the largest possible one, he may
unnecessarily choose a very difficult puzzle for the punishment phase that imposes a low quality
of service on legitimate users. Hence, some procedure should be adopted to estimate the size of
the attack coalition. More precisely, in this case, the game would be of incomplete information,
i.e., a Bayesian game, in which a player does not completely know his opponent’s payoff
function, here, the value of sp2=*sp2 /n. In a repeated Bayesian game, a player gradually learns
his opponent’s payoff function through examining the actions taken by his opponent at previous
periods. Although there are some complicated models of infinitely repeated games that identify
the equilibrium strategies in the case of incomplete information, e.g., [34], [35], and [36], the
following approach is adopted here.
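The three-outcome randomization in PDM3 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the mapping from x to the puzzle sent are our assumptions; only the probability distribution over x comes from the text.

```python
import random

# PDM3's public randomizing device (a sketch):
#   Pr(x = 0) = (1 - alpha)(1 - beta)   -> simple puzzle P1
#   Pr(x = 1) = alpha(1 - beta)         -> simple puzzle P1
#   Pr(x = 2) = beta                    -> difficult puzzle P2
def draw_x(alpha: float, beta: float, rng=random.random) -> int:
    """Return x in {0, 1, 2} with the distribution above."""
    u = rng()
    if u < beta:
        return 2
    if u < beta + alpha * (1 - beta):
        return 1
    return 0

def puzzle_for(x: int) -> str:
    # x = 2 selects the difficult punishment puzzle; otherwise P1 is sent.
    return "P2" if x == 2 else "P1"
```

The defender would draw x per request, send the corresponding puzzle together with x, and judge the requester's response against the prescription for that x.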
The defender has an estimate of the minimum number of requests N_min that a single machine
can send in the time the defender requires to process a request using his main resource (T). He
then estimates the coalition size n as less than or equal to the number of requests
received in a time interval of length T/N_min. Note that the difficulty level of the puzzle used
in the punishment phase is obtained from δ*_P2 ≥ n·m, so if the defender overestimates the
size of the attack coalition, he uses a puzzle more difficult than the one required for the actual
size. Thus, the defense mechanism acts safely for smaller coalitions. Furthermore, the parameter
m is calculated on the basis of n, and is thus chosen in such a way that the existence of the
reprieve period does not lead to the exhaustion of the defender's resources.
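The estimate above amounts to counting requests in a window of length T/N_min and sizing the punishment puzzle so that its single-machine cost is at least n·m. A minimal sketch, with the function names and the flat arrival-time list as our assumptions:

```python
def estimate_coalition_size(arrival_times, now, T, n_min):
    """Upper bound on n: the number of requests seen in the last window
    of length T / n_min (each coalition machine sends >= 1 in that window)."""
    window = T / n_min
    return sum(1 for t in arrival_times if now - window <= t <= now)

def punishment_puzzle_cost(n_hat, m):
    """Size the difficult puzzle so that delta*_P2 >= n_hat * m,
    keeping the per-machine cost delta*_P2 / n_hat above m."""
    return n_hat * m
```

Overestimation only makes the punishment puzzle harder than necessary, which is safe, as the text notes, for smaller coalitions.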
The above argument is not valid if the attacker does not apply his maximum power throughout
the attack. For example, the attack coalition may send a small number of requests to deceive the
defender in his estimate of the coalition size and then benefit from solving the ineffective
difficult puzzles wrongly designed for the punishment phase. To resolve this, the following
approach is adopted, in which, through a fair learning process, the defender obtains an effective
estimate of the coalition size that leads to equilibrium in the infinitely repeated game of the
client puzzle approach.
Assume the actual coalition size is n*, and the defender presumes r + 1 different coalition sizes
n_0 < n_1 < … < n_r = n_L in his learning process, where n_0 = 1 and n_L is the size of the largest possible
attack coalition (the defense mechanism is effective when the size of the attack coalition is less than
n_L). From the above arguments, a defense mechanism follows immediately if the defender knows
the size of the attack coalition: he finds δ_P1, δ_P2, the length of Phase 2 identified in Theorem 2 (Γ),
and m, and then follows PDM3 to attain an equilibrium payoff vector (v_1, v_2). If the defender
does not know the coalition size, he runs PDM4.
PDM4 (unknown coalition size):
Step 1. Put i = 0, j = 0, γ̂ = 0, and n̂ = n_0. Set the elements of S = (δ_P1, δ_P2, Γ, m, α, β, n_p) for
PDM3 on the basis of the coalition size n̂, where n_p is the number of remaining punishment
periods, set to 0 in this step.
Step 2. Run PDM3 according to S. If a bad event e ∈ {e1, e2} occurs, save S and go to Step 3.
The event e1 occurs when the number of received requests indicates a coalition size larger than
the current estimate n̂. The event e2 occurs when a deviation from the action profile prescribed
for the punishment phase is discerned. Note that e1 can occur in both the normal and punishment
phases, while e2 can occur only in the punishment phase. If the defense mechanism remains in this
step for a long time, say T_0 = lT, and the number of requests received during this period is less
than l/m, go to Step 1. This resumes the protocol with simple puzzles when the attack terminates.
(Note that the defense mechanism usually employs nonces to guarantee the freshness of received
messages. Therefore, if the attacker saves his solutions to the puzzles and sends them after a long
time, the defender discards them.)
Step 3. If e = e1, find the smallest value of j, i ≤ j ≤ r, for which the new estimate of the coalition
size n̂ satisfies n̂ ≤ n_j (n̂ is the number of requests received in a time interval of length T/N_min),
and set i = j. Otherwise, set j = j + 1. Then, put n̂ = n_j and n_p = n_p − γ̂, and obtain the new γ̂ using
(15) with the belief that the actual coalition size is n̂. Adjust S on the basis of n̂ and γ̂, and go to
Step 2. The adjustment of S is straightforward except for the number of remaining punishment
periods, which is done as follows: the new estimate n̂ yields a new length of punishment periods
in Theorem 2, say Γ̂. The number of remaining punishment periods is then readjusted as
n′_p = (Γ̂/Γ)n_p + γ̂.
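The bookkeeping in Steps 1-3 can be sketched as follows, assuming a candidate ladder n_0 < … < n_r and taking the Theorem 2 punishment length Γ and the extra periods γ̂ as externally supplied numbers (the function names are ours):

```python
def escalate(candidates, j, n_hat):
    """On event e1: smallest index k >= j whose candidate covers the
    newly observed estimate n_hat."""
    for k in range(j, len(candidates)):
        if n_hat <= candidates[k]:
            return k
    return len(candidates) - 1  # cap at the largest presumed size n_L

def rescale_punishment(n_p, gamma, gamma_new, extra):
    """Readjust remaining punishment: n'_p = (Gamma_hat / Gamma) * n_p + gamma_hat."""
    return (gamma_new / gamma) * n_p + extra
```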
PDM4 starts with the normal phase, the defender believing that the coalition size is n_0, and
he checks this belief continually. If the belief is correct, the defense mechanism proceeds as it was
initiated, i.e., with the parameters adjusted for the coalition size n_0. Otherwise, it goes to the
punishment phase with the belief that the coalition size is n_j, and the parameters are readjusted
for this new coalition size. Moreover, a number of extra punishment periods γ̂ is added to
remove the benefit obtained by the attacker during the learning process. This procedure
continues until a firm decision about the coalition size is reached, at which point the parameters
of the defense mechanism are known with certainty.
The learning process lasts f(n_{i_0}) + f(n_{i_1}) + … + f(n_{i_s}) periods, where i_0, i_1, …, i_s are the
values the variable j takes before the actual coalition size n* is determined (evidently, i_0 = 0). In
other words, n_{i_s} is the last estimate for which a bad event occurred. The attacker's benefit
from performing the attack with less than his maximum power, i.e., his profit during the learning
process, is calculated as follows: the attacker's payoff when he randomly answers the puzzle of
the punishment phase designed on the basis of the actual coalition size n* is p_p + v_p (as stated
earlier, the cost to the defender of producing or verifying a puzzle is almost independent of the
puzzle's difficulty level, so we have used p_p and v_p throughout). Therefore, the
attacker's maximum benefit during the learning process is given by (13),
where π̂₂^k is the attacker's payoff from solving the difficult puzzle designed on the basis of the
coalition size n_{i_k}. Thus, for a discount factor near unity, the number of extra punishment periods with the actual difficult puzzle P*₂ that causes the attacker to comply with the equilibrium prescription is obtained from (14),
where π₂ is the attacker's payoff in playing the strategy profile β·σ(P2, RA) + (1 − β)·σ(P1, RA) (respectively, α·σ(P1, CA) + (1 − α)·σ(P1, RA)) with the parameters obtained from the actual coalition size. It is evident that π₂ = β(p_p + v_p) + (1 − β)(m − δ*_P1), where δ*_P1 is the cost of the simple puzzle for the actual coalition size. Therefore, (14) reduces to (15).
With such extra punishment periods, the attacker gains nothing from performing the attack with
less than his maximum power. Note that, for the learning process to be fair, the number of
presumed coalition sizes, r + 1, should decrease with m. Fig. 7 depicts PDM4.
This section discusses some aspects of the puzzle-based defense mechanisms proposed in this
paper, outlines future research in the game-theoretic study of the client puzzle approach, and
compares these mechanisms with some earlier puzzle-based defenses against flooding attacks.
If a puzzle imposes a number of computational steps, the resource engaged by the attacker is
CPU time, and the corresponding metric reflects the number of CPU cycles the attacker spends
on solving the puzzle. If a puzzle imposes memory accesses, the metric identifies the amount of
memory used. It is worth noting that a designer should specify the resources, their metric spaces,
their corresponding reference distances, and the capabilities of a prospective attacker according
to the application he intends to protect against flooding attacks. This implies that the practical
implementation of a puzzle-based mechanism may vary from one application to another.
There are some weaknesses in the earlier puzzle-based mechanisms that are resolved in the
current paper. In the challenge-response approach [27], upon receiving a request, the defender
responds with a puzzle of the current highest difficulty level and allocates resources only if he
receives the correct solution from the requester. By adapting the puzzle difficulty levels to the
current load, the defender can force clients to solve puzzles of varying difficulty. As stated
earlier, such an approach does not account for the quality of service experienced by legitimate
users. Furthermore, when the defender's current load is low, he may produce simple puzzles for
a large number of incoming requests sent by the attack coalition, since the requests themselves
do not change his load. The attacker can then solve these ineffective puzzles to deplete the
defender's resources. This is not the case for the mechanisms proposed in this paper (see
Section 5.3). A conservative defender who uses the challenge-response mechanism may resolve
this by issuing puzzles more difficult than those indicated by his current load. Nevertheless,
without a suitable procedure for this, the result may be unnecessarily difficult puzzles and,
consequently, a low quality of service for legitimate users. In the puzzle auctions protocol [28],
given a request, the defender, according to his current load, can either accept the request and
continue with the protocol or send a rejection to the requester. Upon receiving a rejection, the
requester solves a puzzle with double the difficulty level of the last rejected one (he computes
one further zero bit using hash operations) and sends the request back to the defender. In this
way, a legitimate user may encounter repeated rejections until an acceptable difficulty level is
reached. Therefore, the approach should be supplemented by a suitable mechanism for
estimating the appropriate difficulty level. Furthermore, in the puzzle auctions protocol, the
requester is required to choose the maximum number of hash operations he is willing to perform
to solve a puzzle. If this number is not chosen accurately, the requester may not obtain the
service during a flooding attack.
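The hash-based puzzles alluded to above (one further zero bit doubles the expected work) can be sketched with SHA-256; the message layout challenge || nonce and the function names are our assumptions, not the exact protocol of [28]:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Find a nonce whose SHA-256(challenge || nonce) starts with at least
    `difficulty` zero bits; each extra bit doubles the expected work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Verification costs one hash, regardless of the difficulty level."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty
```

The asymmetry matters: the requester performs on the order of 2^difficulty hashes, while the defender verifies with one, which is what makes N(p_p + v_p) ≤ 1 attainable.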
The mechanisms proposed in the current paper rest on a number of assumptions. For
example, it is assumed that N(p_p + v_p) ≤ 1; in other words, the defender should be able to
produce the puzzles and verify the solutions efficiently. According to the fairness property stated
in Definition 5, this is a necessary condition for a defense mechanism to be effective. A similar
assumption has also been made in the earlier puzzle-based defenses.
Another assumption is that the defender is at least capable of sending reply messages to the
origins of incoming requests. All the earlier puzzle-based defenses rest on such an assumption as
well [22], [28]. This seemingly restricts the applicability of the proposed mechanisms in the case
of bandwidth-exhaustion attacks, in which the attacker sends a huge number of service requests
to deplete the victim's bandwidth. However, one can envision coordinating multiple routers
equipped with the defense mechanisms proposed in this paper so as to restrain the attack flows
before they converge on the victim; such an approach has been suggested in [30]. Nevertheless,
the game-theoretic approach employed in the current paper is not sufficient for handling such a
case. For example, providing incentives for the routers to cooperate in the defense is an
important issue deserving further research; more specifically, it can be studied through
cooperative game theory.
Another assumption made in this paper is the complete rationality of the players. Evidently, the
defense strategies proposed here may not be optimal if the attacker has a bounded level of
rationality; in other words, the defender can attain payoffs better than those achievable by the
mechanisms of this paper when his opponent is not completely rational. Again, game theory
offers specific solutions for such circumstances as well.
Fig. 7. PDM4: the defense mechanism against distributed flooding attacks when
the size of the attack coalition is unknown. For a given coalition size, the function F
returns the elements of S = (δ_P1, δ_P2, Γ, m, α, β, n_p) except for n_p. The function f
determines the length of the reprieve period for a given coalition size.
[Figure: client-server scan flow. The client selects a source file via a file dialog; packets of a fixed size are handled by PDM3 (scans only a particular size), while out-of-size packets are handled by PDM4 (scans any size) before reaching the server.]
This paper utilizes game theory to propose a number of puzzle-based defenses against flooding
attacks. It is shown that the interactions between an attacker who launches a flooding attack and
a defender who counters it using a puzzle-based defense can be modeled as an infinitely
repeated game with discounted payoffs. The solution concepts of this type of game are then
used to find the best strategy a rational defender can adopt in the face of a rational attacker,
and in this way the optimal puzzle-based defense strategies are developed. More specifically,
four defense mechanisms are proposed. PDM1 is derived from the open-loop solution concept,
in which the defender chooses his actions regardless of what has happened in the game history.
This mechanism is applicable to both single-source and distributed attacks, but it cannot support
the higher payoffs that are feasible in the game. PDM2 resolves this by using the closed-loop
solution concepts, but it can only defeat a single-source attack. PDM3 extends PDM2 to deal
with distributed attacks, under the assumption that the defender knows the size of the attack
coalition. Finally, PDM4 is the ultimate defense mechanism, in which the size of the attack
coalition is assumed unknown.
A complete solution to flooding attacks is likely to require some form of defense while the
attack traffic is being identified, and the mechanisms of this paper can provide such defenses.
Conversely, the estimates made by a reactive mechanism can be used to tune the mechanisms
proposed in this paper.
DESIGN
Unified Modeling Language
The heart of object-oriented problem solving is the construction of a model. The model abstracts
the essential details of the underlying problem from its usually complicated real world. Several
modeling tools are wrapped under the heading of the UML™, which stands for Unified
Modeling Language™. The purpose of this section is to present important highlights of the UML.
At the center of the UML are its nine kinds of modeling diagrams, which we describe here.
Use case diagrams
Class diagrams
Object diagrams
Sequence diagrams
Collaboration diagrams
State chart diagrams
Activity diagrams
Component diagrams
Deployment diagrams
Why is UML important?
Let's look at this question from the point of view of the construction trade. Architects design
buildings. Builders use the designs to create buildings. The more complicated the building, the
more critical the communication between architect and builder. Blueprints are the standard
graphical language that both architects and builders must learn as part of their trade.
Writing software is not unlike constructing a building. The more complicated the underlying
system, the more critical the communication among everyone involved in creating and deploying
the software. In the past decade, the UML has emerged as the software blueprint language for
analysts, designers, and programmers alike. It is now part of the software trade. The UML gives
everyone from business analyst to designer to programmer a common vocabulary to talk about
software design.
The UML is applicable to object-oriented problem solving. Anyone interested in learning UML
must be familiar with the underlying tenet of object-oriented problem solving -- it all begins with
the construction of a model. A model is an abstraction of the underlying problem. The domain is
the actual world from which the problem comes.
Models consist of objects that interact by sending each other messages. Think of an object as
"alive." Objects have things they know (attributes) and things they can do (behaviors or
operations). The values of an object's attributes determine its state.
Classes are the "blueprints" for objects. A class wraps attributes (data) and behaviors (methods
or functions) into a single distinct entity. Objects are instances of classes.
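The attribute/behavior/state vocabulary can be made concrete with a minimal class; the Account example is ours, purely for illustration:

```python
class Account:
    """A class is the blueprint; objects are its instances.
    (The Account example is illustrative only.)"""

    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner      # attribute: something the object knows
        self.balance = balance  # attribute: its value is part of the state

    def deposit(self, amount: float) -> None:
        """Behavior (operation): changes the object's state."""
        self.balance += amount

# Two objects, two independent states, one blueprint.
a = Account("alice")
a.deposit(10.0)
b = Account("bob")
```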
Group | Term | Definition

Business | Accounting Periods | A defined period of time for which performance reports may be extracted (normally four-week periods).

Technical | Association | A relationship between two or more entities. Implies a connection of some type - for example, one entity uses the services of another, or one entity is connected to another over a network link.

Technical | Class | A logical entity encapsulating data and behavior. A class is a template for an object - the class is the design, the object the runtime instance.

Technical | Component Model | The component model provides a detailed view of the various hardware and software components that make up the proposed system. It shows both where these components reside and how they inter-relate with other components. Component requirements detail what responsibilities a component has to supply functionality or behavior within the system.

Business | Customer | A person or company that requests an entity to transport goods on their behalf.

Technical | Deployment Architecture | A view of the proposed hardware that will make up the new system, together with the physical components that will execute on that hardware. Includes specifications for machines, operating systems, network links, backup units, etc.

Technical | Deployment Model | A model of the system as it will be physically deployed.

Technical | Extends Relationship | A relationship between two use cases in which one use case 'extends' the behavior of another. Typically this represents optional behavior in a use case scenario - for example, a user may optionally request a list or report at some point in performing a business use case.

Technical | Includes Relationship | A relationship between two use cases in which one use case 'includes' the behavior of another. This is indicated where specific business use cases are used from many other places.

Technical | Use Case | A Use Case represents a discrete unit of interaction between a user (human or machine) and the system. A Use Case is a single unit of meaningful work; for example, creating a train, modifying a train, and creating orders are all Use Cases. Each Use Case has a description of the functionality that will be built in the proposed system. A Use Case may 'include' another Use Case's functionality or 'extend' another Use Case with its own behavior. Use Cases are typically related to 'actors'. An actor is a human or machine entity that interacts with the system to perform meaningful work.
Actors
Figure 2: Actors
A person may perform the role of more than one Actor, although they will only assume
one role during one use case interaction.
An Actor role may be performed by a non-human system, such as another computer
program.
Use-Cases:
We have identified two actors in these diagrams: the actual Machine Users and the Unix
Developers. The Machine User can begin using the system - this represents whichever method
the user employs to make initial interaction with the system; for example, they may need to turn
the system on via a button, simply turn the key in the ignition, or some other method. They can
also view a page, click on a link or back button, scroll up and down, and close the system. The
Unix Developer inherits all these use cases, and can additionally upload an HTML file and view
a list of problems.
Class Diagram:
We have identified five classes in total: a Lexer class and a Parser class - which comprise the
Analyser package - a ParsedTreeStructure class, a Renderer class, and a Frontend class. The
Lexer's job is to build a set of tokens from a source file. The Parser uses these tokens and
deciphers their types; it then builds the tokens into nodes and passes them to the
ParsedTreeStructure class, where a tree structure of nodes is stored. This tree is then used by the
Renderer class to form a model of the page, which, in turn, is used by the Frontend to
display the final rendered page.
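The five-class pipeline can be sketched end to end. Tokenization, parsing, and rendering are reduced to trivial stand-ins here, and all method names are assumptions; the point is only the order of collaboration:

```python
class Lexer:
    """Builds tokens from source text (whitespace split as a stand-in)."""
    def tokens(self, source: str):
        return source.split()

class Parser:
    """Deciphers token types and builds nodes (every token is a WORD here)."""
    def parse(self, tokens):
        return [("WORD", t) for t in tokens]

class ParsedTreeStructure:
    """Stores the node structure (a flat list standing in for a real tree)."""
    def __init__(self, nodes):
        self.nodes = nodes

class Renderer:
    """Forms a model of the page from the stored nodes."""
    def render(self, tree) -> str:
        return " ".join(text for _, text in tree.nodes)

class Frontend:
    """Displays the final rendered page."""
    def display(self, model: str) -> str:
        return f"[page] {model}"

def show_page(source: str) -> str:
    tree = ParsedTreeStructure(Parser().parse(Lexer().tokens(source)))
    return Frontend().display(Renderer().render(tree))
```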
Activity Diagrams:
These activity diagrams show how the use cases interact with the system and interface. The User
starts by initially interacting with the system. The main page is then rendered by the system and
displayed by the interface, which the user can view. From here the user can click on a link,
scroll, or close the system. If they choose to click a link, the system renders the new page and the
interface displays it, which brings the user back to viewing. If the user chooses to scroll,
the system readjusts the page and the interface displays the new snapshot of the page,
which also brings the user back to viewing. If the user chooses to close the system, the activity
diagram finishes in an exit state.
The Unix Developer can do all of the above, as it inherits all of the Machine User's use cases. On
top of this, they can upload an HTML file, which the system will then begin to render. If
problems are found with the code, the interface displays a list of problems which the
developer can view. Otherwise, if no problems are found, the page is fully rendered by the
system and then displayed by the interface for the developer to view. Once the developer can
view the page or the problems, they may decide to load a new HTML file; if they do, they go
back to 'Upload New File', and if not, the activity reaches an exit state.
Sequence Diagram:
A sequence diagram shows, as parallel vertical lines (lifelines), the different processes
or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between
them in the order in which they occur. This allows the specification of simple runtime scenarios
in a graphical manner. For instance, a UML 1.x diagram might describe the sequence of
messages of a (simple) restaurant system: a Patron ordering food and wine, drinking the wine,
then eating the food, and finally paying for the food. The dotted lines extending downwards
indicate the lifelines; time flows from top to bottom. The arrows represent messages (stimuli)
from an actor or object to other objects; for example, the Patron sends the message 'pay' to the
Cashier. Half arrows indicate asynchronous method calls. The UML 2.0 sequence diagram
supports similar notation to the UML 1.x sequence diagram, with added support for modeling
variations to the standard flow of events.
Collaboration Diagram:
This diagram shows the collaboration between the five classes and demonstrates the order
of events that take place when a page is loaded. The Lexer builds tokens based on the
text it has been given; the Parser then deciphers the types of the tokens generated by the Lexer
class. These are then passed to the ParsedTreeStructure in the form of nodes, which are assembled
into a tree structure. Next, the Renderer class uses these nodes to build the page, and finally the
Frontend class takes what the Renderer has created and displays it for the user.
State Diagram:
State diagrams are used to give an abstract description of the behavior of a system. This
behavior is analyzed and represented as a series of events that can occur in one or more possible
states. Each diagram usually represents objects of a single class and tracks the different states of
its objects through the system.
State diagrams can be used to graphically represent finite state machines. This was introduced by
Taylor Booth in his 1967 book Sequential Machines and Automata Theory. Another possible
representation is the state transition table.
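The content of a state diagram maps directly onto a finite state machine, e.g. a transition table. The states and events below are our illustrative choices, loosely following the page-viewing behavior described in the activity diagrams:

```python
# (state, event) -> next state: a minimal finite state machine as a
# transition table (states and events are illustrative assumptions).
TRANSITIONS = {
    ("off", "start"): "viewing",
    ("viewing", "click_link"): "rendering",
    ("viewing", "scroll"): "viewing",
    ("rendering", "done"): "viewing",
    ("viewing", "close"): "off",
}

def step(state: str, event: str) -> str:
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

def run(events, state="off"):
    """Fold a sequence of events through the machine."""
    for e in events:
        state = step(state, e)
    return state
```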
Component Diagram:
Components are wired together by using an assembly connector to connect the required interface
of one component with the provided interface of another component. This illustrates the service
consumer - service provider relationship between the two components. An assembly connector is
a connector between two components specifying that one component provides the services that
another component requires; it is defined from a required interface or port to a provided
interface or port.
Deployment diagram:
A deployment diagram shows the configuration of run-time processing nodes and the
components that live on them. It is used for modeling the topology of the hardware on which the
system executes. A deployment diagram in the Unified Modeling Language models the physical
deployment of artifacts on nodes. To describe a web site, for example, a deployment diagram
would show what hardware components ('nodes') exist (e.g., a web server, an application server,
and a database server), what software components ('artifacts') run on each node (e.g., web
application, database), and how the different pieces are connected.
Data Flow Diagram
A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of
data through a system (manual or automated), including the processes, stores of data, and delays
in the system. Data flow diagrams are the central tool and the basis from which other
components are developed. The transformation of data from input to output, through processes,
may be described logically and independently of the physical components associated with the
system. The DFD is also known as a data flow graph or a bubble chart.
DFDs are a model of the proposed system. They should clearly show the requirements on
which the new system is to be built. Later, during design activity, this is taken as the basis for
drawing the system's structure charts. The basic notation used to create DFDs is as follows:
1. Dataflow: data move in a specific direction from an origin to a destination.
2. Process: people, procedures, or devices that use or produce (transform) data; the
physical component is not identified.
3. Source: external sources or destinations of data, which may be people, programs,
organizations, or other entities.
4. Data store: here data are stored or referenced by a process in the system.
Several rules for constructing a DFD:
Processes should be named and numbered for easy reference. Each name should be
representative of the process.
The direction of flow is from top to bottom and from left to right. Data traditionally flow
from the source to the destination, although they may flow back to the source. One way to
indicate this is to draw a long flow line back to the source. An alternative way is to repeat the
source symbol as a destination; since it is used more than once in the DFD, it is marked
with a short diagonal.
When a process is exploded into lower-level details, the sub-processes are numbered.
The names of data stores and destinations are written in capital letters. Process and
dataflow names have the first letter of each word capitalized.
A DFD typically shows the minimum contents of a data store; each data store should contain all
the data elements that flow in and out. Missing interfaces, redundancies, and the like are then
accounted for, often through interviews.
Salient features of DFDs:
The DFD shows the flow of data, not of control; loops and decisions are control
considerations and do not appear on a DFD.
The DFD does not indicate the time factor involved in any process (whether the data flows
take place daily, weekly, monthly, or yearly).
The sequence of events is not brought out on the DFD.
Types of data flow diagrams
1. Current Physical
2. Current Logical
3. New Logical
4. New Physical
1. Current physical:
In a current physical DFD, process labels include the names of people or their positions, or the
names of computer systems, that might provide some of the overall system processing; the label
includes an identification of the technology used to process the data. Similarly, data flows and
data stores are often labeled with the names of the actual physical media on which data are
stored, such as file folders, computer files, business forms, or computer tapes.
2. Current logical:
The physical aspects of the system are removed as much as possible, so that the current system
is reduced to its essence: the data and the processes that transform them, regardless of actual
physical form.
3. New logical:
This is exactly like the current logical model if the user were completely happy with the
functionality of the current system but had problems with how it was implemented. Typically,
the new logical model will differ from the current logical model by having additional functions,
obsolete functions removed, and inefficient flows reorganized.
4. New physical:
The new physical model represents only the physical implementation of the new system.
Rules governing DFDs
Process:
No process can have only outputs.
No process can have only inputs; if an object has only inputs, then it must be a sink.
A process has a verb-phrase label.
Data store:
Data cannot move directly from one data store to another; a process must move the data.
Data cannot move directly from an outside source to a data store; a process must receive
the data from the source and place them into the data store.
A data store has a noun-phrase label.
Source or sink:
A source or sink is the origin or destination of data.
Data cannot move directly from a source to a sink; they must be moved by a process.
A source and/or sink has a noun-phrase label.
Data flow:
A data flow has only one direction of flow between symbols. It may flow in both
directions between a process and a data store to show a read before an update; the latter
is usually indicated, however, by two separate arrows, since these happen at different times.
A join in a DFD means that exactly the same data come from any of two or more different
processes, data stores, or sinks to a common location.
A data flow cannot go directly back to the same process it leaves. There must be at least
one other process that handles the data flow, produces some other data flow, and returns
the original data to the originating process.
A data flow to a data store means update (delete or change).
A data flow from a data store means retrieve or use.
A data flow has a noun-phrase label; more than one data flow noun phrase can appear on a
single arrow as long as all the flows on the same arrow move together as one package.
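Several of the rules above reduce to a single adjacency check: a legal data flow must have a process at one end. A sketch, with element kinds as plain strings (the function names are ours):

```python
# DFD connection rule: every flow must touch a process. Direct
# store-to-store, source-to-store, and source-to-sink flows are illegal.
def flow_is_legal(src_kind: str, dst_kind: str) -> bool:
    return "process" in (src_kind, dst_kind)

def illegal_flows(flows):
    """Return the (src, dst) kind pairs that violate the rule."""
    return [(s, d) for s, d in flows if not flow_is_legal(s, d)]
```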
Rules for DFDs:
Fix the scope of the system by means of context diagrams.
Organize the DFD so that the main sequence of actions reads left to right and top to
bottom.
Identify all inputs and outputs.
Identify and label each process internal to the system with rounded circles.
A process is required for all data transformations and transfers. Therefore, never
connect a data store to a data source, a destination, or another data store with just a
data flow arrow.
Do not indicate hardware, and ignore control information.
Make sure the names of the processes accurately convey everything the process does.
There must be no unnamed processes.
Indicate external sources and destinations of data with squares.
Number each occurrence of repeated external entities.
Identify all data flows for each process step, except simple record retrievals.
Label the data flow on each arrow.
Use detailed flows on each arrow to indicate data movement.
There can be no unnamed data flows.
A data flow cannot connect two external entities.
Levels Of DFD:
The complexity of business systems means that it is not possible to represent the operations
of any system with a single data flow diagram. At the top level, an overview of the different systems
in an organization is shown by way of a context analysis diagram. When exploded into DFDs,
they are represented by:
Level-0: SYSTEM INPUT/OUTPUT
Level-1: SUBSYSTEM LEVEL DATAFLOW FUNCTIONAL
Level-2: FILE LEVEL DETAIL DATA FLOW.
The input and output data shown should be consistent from one level to the next.
Level-0: System input/output Level
A level-0 DFD describes the system-wide boundaries, detailing inputs to and outputs from the
system and the major processes. This diagram is similar to the combined user-level context diagram.
Level-1: Subsystem Level Data Flow
A level-1 DFD describes the next level of details within the system, detailing the data flows
between subsystems, which make up the whole.
Level-2: File Level Detail Data Flow
[DFD figure: Packet Server, Packet Receiver, Packet Sender, and Result processes.]

All projects are feasible given unlimited resources and infinite time. It is both necessary and prudent
to evaluate the feasibility of a project at the earliest possible time. Feasibility and risk analysis are
related in many ways: if the risk of a project is great, the feasibility of producing quality software is reduced.
Form Design
[DFD figures: Packet Sender, Packet Receiver, Result, Scanner, and the PDM1–PDM4 mechanism components.]
This introduction to using Swing in Java will walk you through the basics of Swing. It covers
how to create a window, add controls, position the controls, and handle events from the
controls.
The Main Window
Almost all GUI applications have a main or top-level window. In Swing, such a window is usually
an instance of JFrame or JWindow. The difference between these two classes is in simplicity –
JWindow is much simpler than JFrame (the most noticeable differences are visual – a JWindow does
not have a title bar, and does not put a button in the operating system task bar). So, your
applications will almost always start with a JFrame.
Though you can instantiate a JFrame and add components to it, a good practice is to encapsulate
and group the code for a single visual frame in a separate class. Usually, I subclass the JFrame
and initialize all visual elements of that frame in the constructor.
Always pass a title to the parent class constructor – that String will be displayed in the title bar
and on the task bar. Also, remember to always initialize the frame size (by calling setSize(width,
height)), or your frame will not be noticeable on the screen.
package com.neuri.handsonswing.ch1;
import javax.swing.JFrame;
public class MainFrame extends JFrame
{
public MainFrame()
{
super("My title");
setSize(300, 300);
}
}
Now you have created your first frame, and it is time to display it. The main frame is usually
displayed from the main method – but resist the urge to put the main method in the frame class.
Always try to separate the code that deals with visual presentation from the code that deals with
application logic – starting and initializing the application is part of application logic, not a part
of visual presentation. A good practice is to create an Application class that will contain
initialization code.
package com.neuri.handsonswing.ch1;
public class Application
{
public static void main(String[] args)
{
// perform any initialization
MainFrame mf = new MainFrame();
mf.setVisible(true);
}
}
If you run the code now, you will see an empty frame. When you close it, something not quite
obvious will happen (or better said, will not happen): the application will not end. Remember
that the frame is just the visual part of the application, not the application logic – if you do not
request application termination when the window closes, your program will still run in the
background (look for it in the process list). To avoid this problem, add the following line to the
MainFrame constructor:
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
Before Java 1.3, you had to register a window listener and then act on the window-closing event
by stopping the application. Since Java 1.3, you can use this shortcut to specify a simple action
that will happen when the window is closed. Other options are HIDE_ON_CLOSE (the default –
the window is closed but the application still runs) and DO_NOTHING_ON_CLOSE (a rather strange
option that ignores a click on the X button in the upper right corner).
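For comparison, here is a minimal sketch showing both the modern shortcut and the older listener-based form side by side. The class name, window title, and the headless guard are illustrative additions, not part of the tutorial's code:

```java
import java.awt.GraphicsEnvironment;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JFrame;

public class CloseDemo {
    public static void main(String[] args) {
        // Guard so the sketch can also be compiled and run in headless environments.
        if (GraphicsEnvironment.isHeadless()) {
            return;
        }
        JFrame frame = new JFrame("Close demo");
        // The Java 1.3+ shortcut:
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        // The equivalent pre-1.3 long-hand form, using a window listener:
        frame.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            }
        });
        frame.setSize(300, 300);
        frame.setVisible(true);
    }
}
```

In practice the one-line shortcut is preferred; the listener form remains useful when you need to run cleanup code (saving files, closing sockets) before exiting.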
Adding Components
Now it is time to add some components to the window. In Swing (and the Swing predecessor,
AWT) all visual objects are subclasses of the Component class. The Composite pattern was applied
here to group visual objects into Containers – special components that can contain other
components. Containers can specify the order, size and position of embedded components (and
this can all be calculated automatically, which is one of the best features of Swing).
JButton is a component class that represents a general purpose button – it can have a text caption
or an icon, and can be pressed to invoke an action. Let’s add the button to the frame (note: add
imports for javax.swing.* and java.awt.* to the MainFrame source code so that you can use all
the components).
When you work with a JFrame, you put objects into its content pane – a special container
intended to hold the window contents. Obtain a reference to that container with the
getContentPane() method.
Container content = getContentPane();
content.add(new JButton("Button 1"));
If you try to add more buttons to the frame, most likely only the last one added will be displayed.
That is because the default behavior of the JFrame content pane is to display a single component,
resized to cover the entire area.
Grouping Components
To put more than one component into a place intended for a single component, group them into a
container. JPanel is a general purpose container that is perfect for grouping a set of components
into a "larger" component. So, let's put the buttons into a JPanel:
JPanel panel=new JPanel();
panel.add(new JButton("Button 1"));
panel.add(new JButton("Button 2"));
panel.add(new JButton("Button 3"));
content.add(panel);
Layout Management Basics
One of the best features of Swing is automatic component positioning and resizing. That is
implemented through a mechanism known as layout management. Special objects – layout
managers – are responsible for sizing, aligning and positioning components. Each container can
have a layout manager, and the type of layout manager determines the layout of components in
that container. There are several types of layout managers, but the two you will most frequently
use are FlowLayout (orders components one after another, without resizing) and BorderLayout
(has a central part and four edge areas – the component in the central part is resized to take as much
space as possible, and components in the edge areas are not resized). In the previous examples, you
have used both of them. FlowLayout is the default for a JPanel (that is why all three buttons are
displayed without resizing), and BorderLayout is the default for JFrame content panes (that is why a
single component is shown covering the entire area).
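These defaults can be checked programmatically. A small sketch (the class name is illustrative):

```java
import java.awt.FlowLayout;
import javax.swing.JPanel;

public class LayoutDefaults {
    public static void main(String[] args) {
        // A freshly created JPanel uses FlowLayout by default,
        // which is why the three buttons above kept their natural size.
        JPanel panel = new JPanel();
        System.out.println(panel.getLayout() instanceof FlowLayout); // prints "true"
    }
}
```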
The layout for a container is defined using the setLayout method (or, usually, in the constructor). So,
you could change the layout of the content pane to FlowLayout and add several components to see
them all on the screen.
The best choice for the window content pane is usually a BorderLayout with a central content
part and a bottom status (or button) part. The top part can optionally contain a toolbar.
Now, let's combine several components and layouts, and introduce a new component –
JTextArea. JTextArea is basically a multiline editor. Initialize the frame content pane explicitly
to BorderLayout, put a new JTextArea into the central part, and move the button panel below.
package com.neuri.handsonswing.ch1;
import java.awt.*;
import javax.swing.*;
public class MainFrame extends JFrame
{
public MainFrame()
{
super("My title");
setSize(300,300);
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
Container content = getContentPane();
content.setLayout(new BorderLayout());
JPanel panel = new JPanel(new FlowLayout());
panel.add(new JButton("Button 1"));
panel.add(new JButton("Button 2"));
panel.add(new JButton("Button 3"));
content.add(panel, BorderLayout.SOUTH);
content.add(new JTextArea(), BorderLayout.CENTER);
}
}
Notice that the layouts for the content pane and the button panel are explicitly defined. Also notice
the last two lines of code – this is the other version of the add method, which allows you to specify
the way the component is added. In this case, we specify the area of the BorderLayout layout
manager. The central part is called BorderLayout.CENTER, and the other areas are called
BorderLayout.NORTH (top), BorderLayout.SOUTH (bottom), BorderLayout.WEST (left) and
BorderLayout.EAST (right). If you get confused about this, just remember the maps from your
geography classes.
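All five areas can be sketched in one place; a JPanel works just as well as a content pane for this, and the button captions here are illustrative:

```java
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JPanel;

public class BorderLayoutDemo {
    public static void main(String[] args) {
        // One button per BorderLayout area, named after its compass direction.
        JPanel panel = new JPanel(new BorderLayout());
        panel.add(new JButton("North"), BorderLayout.NORTH);
        panel.add(new JButton("South"), BorderLayout.SOUTH);
        panel.add(new JButton("West"), BorderLayout.WEST);
        panel.add(new JButton("East"), BorderLayout.EAST);
        panel.add(new JButton("Center"), BorderLayout.CENTER);
        System.out.println(panel.getComponentCount()); // prints "5"
    }
}
```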
SAMPLE CODE
/***********************************************************/
/*  PacketQueue                                            */
/***********************************************************/

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import java.net.*;
import java.io.*;

/**
 * Summary description for PacketQueue
 */
public class PacketQueue extends JFrame
{
    // Variables declaration
    private JLabel jLabel1;
    private JLabel jLabel2;
    private JTextArea jTextArea1;
    private JScrollPane jScrollPane1;
    private JButton jButton1;
    private JPanel contentPane;
    public int i, j, a, l;
    public long st, flen;
    public float filelength;
    public String filtfr[];
    public String filmer[];
    public char pakch[][];
    public int loss;
    ServerSocket ss;
    ServerSocket ss1;
    Socket so;
    Socket so1;
    // End of variables declaration

    public PacketQueue()
    {
        super();
        initializeComponent();
        // Add any constructor code after the initializeComponent call
        this.setVisible(true);
        try
        {
            ss = new ServerSocket(4500);
            ss1 = new ServerSocket(4501);
            while (true)
            {
                so = ss.accept();
                System.out.println("*****Packets Are Arriving From The Source*********");
                jTextArea1.setText("\nPackets Recieving Started");
                jTextArea1.append("\n\n**************************\n\n");
                DataInputStream dis = new DataInputStream(so.getInputStream());
                st = dis.readLong();
                flen = dis.readLong();
                filelength = dis.readInt();
                filtfr = new String[(int) filelength];
                filmer = new String[(int) filelength];
                pakch = new char[1000][1000];
                for (i = 0; i < filelength; i++)
                {
                    filtfr[i] = dis.readUTF();
                    jTextArea1.append("\nReceiveing Packet : [" + i + "] = " + filtfr[i]);
                    System.out.println("Packet : [" + i + "] = " + filtfr[i]);
                }
                jTextArea1.append("\n\n**************************\n\n");
                jTextArea1.append("\nPackets Recieving Completed");
                jTextArea1.append("\nPackets Sending Started");
                jTextArea1.append("\n\n**************************\n\n");
                System.out.println("Flen = " + flen);
                System.out.println("Filelength = " + filelength);
                // Index of the one packet that will be dropped before forwarding
                l = (int) (Math.random() * filelength);
                so1 = ss1.accept();
                DataOutputStream dos = new DataOutputStream(so1.getOutputStream());
                dos.writeLong(st);
                dos.writeLong(flen);
                dos.writeInt((int) filelength);
                for (i = 0; i < filelength; i++)
                {
                    if (i == l)
                    {
                        // packet l is silently dropped
                    }
                    else
                    {
                        dos.writeUTF(filtfr[i]);
                        jTextArea1.append("\nSending Packet : [" + i + "] = " + filtfr[i]);
                    }
                }
                jTextArea1.append("\n\n**************************\n\n");
                jTextArea1.append("\nPackets Sending Completed");
                System.out.println("\nPackets Sending Completed");
            }
        }
        catch (Exception df)
        {
            df.printStackTrace();
        }
    }

    /**
     * This method is called within the constructor to initialize the form.
     */
    private void initializeComponent()
    {
        jLabel1 = new JLabel();
        jLabel1.setFont(new Font("Arial", Font.BOLD, 15));
        jLabel1.setForeground(new Color(0, 0, 102));
        jLabel2 = new JLabel();
        jLabel2.setFont(new Font("Arial", Font.BOLD, 12));
        jLabel2.setForeground(new Color(0, 0, 102));
        jTextArea1 = new JTextArea();
        jTextArea1.setFont(new Font("Arial", Font.BOLD, 12));
        jTextArea1.setForeground(new Color(0, 0, 102));
        jScrollPane1 = new JScrollPane();
        jButton1 = new JButton();
        jButton1.setForeground(new Color(0, 0, 102));
        contentPane = (JPanel) this.getContentPane();
        //
        // jLabel1
        //
        jLabel1.setText("SERVER");
        //
        // jLabel2
        //
        jLabel2.setText("Status Information");
        //
        // jScrollPane1
        //
        jScrollPane1.setViewportView(jTextArea1);
        //
        // jButton1
        //
        jButton1.setText("Exit");
        jButton1.addActionListener(new ActionListener()
        {
            public void actionPerformed(ActionEvent e)
            {
                jButton1_actionPerformed(e);
            }
        });
        //
        // contentPane
        //
        ImageIcon back = new ImageIcon("ba1.jpg");
        JLabel bag = new JLabel(back);
        // c.setBackground(new Color(125, 116, 102));
        // bag.setBounds(0, 0, 1280, 1024);
        // this.add(jLabel1);
        contentPane.setLayout(null);
        contentPane.setBackground(new Color(255, 255, 255));
        addComponent(contentPane, jLabel1, 245, 14, 166, 40);
        addComponent(contentPane, jLabel2, 220, 70, 207, 33);
        addComponent(contentPane, jScrollPane1, 23, 111, 539, 387);
        addComponent(contentPane, jButton1, 480, 515, 83, 28);
        addComponent(contentPane, bag, 0, 0, 1280, 1024);
        //
        // PacketQueue
        //
        this.setTitle("PacketQueue");
        this.setLocation(new Point(0, 0));
        this.setSize(new Dimension(590, 580));
        this.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
    }

    /** Add a component without a layout manager. */
    private void addComponent(Container container, Component c, int x, int y, int width, int height)
    {
        c.setBounds(x, y, width, height);
        container.add(c);
    }

    //
    // Add any appropriate code in the following event handling methods
    //
    private void jButton1_actionPerformed(ActionEvent e)
    {
        System.out.println("Exit");
        // TODO: Add any handling code here
        System.exit(1);
    }

    //===================== Testing =====================
    //= The following main method is just for testing   =
    //= this class. After testing, you may delete it.   =
    //===================================================
    public static void main(String[] args)
    {
        new PacketQueue();
    }
    //= End of Testing
}
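The wire format PacketQueue reads (two longs, an int, then one UTF string per packet) can be exercised with a small loopback sketch. The class and helper names here are illustrative, not part of the project code, and this sketch forwards every packet rather than dropping one:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Arrays;

public class ProtocolSketch {
    // Sends packets in PacketQueue's wire format over a loopback
    // socket and reads them back: long, long, int, then UTF strings.
    static String[] roundTrip(String[] packets) throws Exception {
        ServerSocket server = new ServerSocket(0); // bind to any free port
        int port = server.getLocalPort();
        Thread sender = new Thread(() -> {
            try (Socket s = new Socket("localhost", port);
                 DataOutputStream dos = new DataOutputStream(s.getOutputStream())) {
                dos.writeLong(System.currentTimeMillis()); // start time field
                dos.writeLong(packets.length);             // length field
                dos.writeInt(packets.length);              // packet count
                for (String p : packets) {
                    dos.writeUTF(p);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        sender.start();
        String[] received;
        try (Socket in = server.accept();
             DataInputStream dis = new DataInputStream(in.getInputStream())) {
            dis.readLong();                    // start time (ignored here)
            dis.readLong();                    // length field (ignored here)
            received = new String[dis.readInt()];
            for (int i = 0; i < received.length; i++) {
                received[i] = dis.readUTF();
            }
        }
        sender.join();
        server.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(roundTrip(new String[] {"pkt0", "pkt1", "pkt2"})));
    }
}
```

Because reads and writes must happen in exactly the same order, a sketch like this is a quick way to confirm that a sender matches the server's readLong/readLong/readInt/readUTF sequence.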
SYSTEM SECURITY MEASURES
(Implementation Of Security For The S/W Developed)
Security in software engineering is a broad topic. This section limits its scope to defining and
discussing software security, software reliability, developer responsibility, and user
responsibility.
Computer Systems Engineering
Software security applies information security principles to software development. Information
security is commonly defined as "the protection of information systems against unauthorized
access to or modification of information, whether in storage, processing or transit, and against
the denial of service to authorized users or the provision of service to unauthorized users,
including those measures necessary to detect, document, and counter such threats."
Many questions regarding security are related to the software life cycle itself. In particular, the
security of code and software processes must be considered during the design and development
phase. In addition, security must be preserved during operation and maintenance to ensure the
integrity of a piece of software.
The mass of security functionality employed by today's networked world might deceive us into
believing that our jobs as secure system designers are already done. However, computers and
networks are incredibly insecure. The lack of security stems from two fundamental problems.
Systems that are theoretically secure may not be secure in practice. Furthermore, systems are
increasingly complex, and complexity provides more opportunities for attacks. It is much easier to
prove that a system is insecure than to demonstrate that one is secure: to prove insecurity, one
simply exploits a certain system vulnerability. On the other hand, proving a system secure requires
demonstrating that all possible exploits can be defended against (a very daunting, if not
impossible, task).
Good Practice
Security is more about managing and mitigating risk than it is about technology. When developing
software, one must first determine the risks of a particular application. For example, today's
typical web site may be subject to a variety of risks, ranging from defacement, to distributed
denial of service (DDoS, described in detail later) attacks, to transactions with the wrong party.
Once the risks are identified, identifying appropriate security measures becomes tractable. In
particular, when defining requirements, it is important to consider how the application will be
used, who will be using it, and so on. With that knowledge, one can decide whether or not
to support complex features like auditing, accounting, non-repudiation, etc.
Another potentially important issue is how to support naming. The rise of
distributed systems has made naming increasingly important. Naming is typically
handled by rendezvous: a principal exporting a name advertises it somewhere, and
someone wishing to use that name searches for it (phone books and directories are
examples). For example, in a system such as a resource discovery system, both the
resources and the individuals using those resources must be named. Often there are
tradeoffs with respect to naming: while naming can provide a level of indirection,
it also can create additional problems if the names are not stable. Names can allow
principals to play different roles in a particular system, which can also be useful.
SYSTEM TESTING
Testing is the major quality control measure employed during software development. Its
basic function is to detect errors in the software. During requirement analysis and design, the
output is a document that is usually textual and non-executable. After the coding phase,
computer programs are available that can be executed for testing purposes. This implies that testing
not only has to uncover errors introduced during coding, but also errors introduced during the
previous phases. Thus, the goal of testing is to uncover requirement, design or coding errors in
the programs.
Consequently, different levels of testing are employed. The starting point of testing is unit
testing. Here a module is tested separately, often by the coder himself,
simultaneously with the coding of the module. The purpose is to execute the different parts of the
module code to detect coding errors. After this the modules are gradually integrated into
subsystems, which are then themselves integrated to eventually form the entire system. During
integration of modules, integration testing is performed. The goal of this testing is to detect
design errors, while focusing on testing the interconnection between modules. After the system is
put together, system testing is performed. Here the system is tested against the system
requirements to see if all the requirements are met and the system performs as specified by the
requirements. Finally, acceptance testing is performed to demonstrate to the client, on the real-life
data of the client, the operation of the system.
For testing to be successful, proper selection of test cases is essential. There are two different
approaches to selecting test cases: functional testing and structural testing. In functional testing
the software for the module to be tested is treated as a black box, and the test cases are decided
based on the specifications of the system or module. For this reason, this form of testing is also
called black box testing. The focus is on testing the external behavior of the system. In
structural testing the test cases are decided based on the logic of the module to be tested.
Structural testing is sometimes called "glass box testing". Structural testing is used for lower
levels of testing and functional testing is used for higher levels.
Testing is an extremely critical and time-consuming activity. It requires proper planning of the
overall testing process. Frequently the testing process starts with a test plan. This plan
identifies all the testing-related activities that must be performed and specifies the schedule,
allocates the resources, and specifies guidelines for testing. The test plan specifies the manner in
which the modules will be integrated together. Then for the different test units, a test case specification
document is produced, which lists all the different test cases, together with the expected outputs,
that will be used for testing. During the testing of the unit, the specified test cases are executed
and the actual result is compared with the expected output. The final output of the testing phase is
the test report and the error report, or a set of such reports (one for each unit tested). Each test
report contains the set of test cases and the result of executing the code with those test cases.
The error report describes the errors encountered and the actions taken to remove those errors.
Fundamentals of Software Testing
Testing is basically a process to detect errors in a software product. Before going into the
details of testing techniques, one should know what errors are. In day-to-day life we say there is
an error whenever something goes wrong. This definition is quite vast. When we apply
this concept to software products, we say there is an error whenever there is a difference between
what is expected of the software and what is actually achieved.
If the output of the system differs from what was required, it is due to an error. This output
can be some numeric or alphabetic value, some formatted report, or some specific behavior from
the system. In case of an error there may be a change in the format of the output, some unexpected
behavior from the system, or some value different from the expected one. These errors can be
due to wrong analysis, wrong design, or some fault on the developer's part.
All these errors need to be discovered before the system is implemented at the customer's site,
because a system that does not perform as desired is of no use; all the effort put into building it
goes to waste. So testing is done, and it is as important and crucial as any other stage
of system development. For different types of errors there are different testing
techniques. In the sections that follow we'll try to understand those techniques.
Objectives of testing
First of all, the objective of testing should be clear. We can define testing as a process of
executing a program with the aim of finding errors. To perform testing, test cases are designed.
A test case is a particular made-up (artificial) situation to which a program is exposed so as to
find errors. So a good test case is one that finds undiscovered errors. If testing is done properly, it
uncovers errors, and after fixing those errors we have software that is developed according
to specifications.
Test Information Flow
Testing is a complete process. For testing we need two types of inputs. The first is the software
configuration: it includes the software requirement specification, design specifications and the source
code of the program. The second is the test configuration: it is basically the test plan and procedure.
Fig 9.1 Testing Process
The software configuration is required so that the testers know what is to be expected and tested,
whereas the test configuration is the testing plan, that is, the way the testing will be conducted on
the system. It specifies the test cases and their expected values. It also specifies whether any tools for
testing are to be used. Test cases are required to know what specific situations need to be tested.
When tests are evaluated, expected results are compared with actual results, and if there is some error,
then debugging is done to correct it. Testing is a way to learn about the quality and
reliability of the software. The error rate, that is, the occurrence of errors, is evaluated. This data can
be used to predict the occurrence of errors in the future.
Test Case design
We now know that test cases are an integral part of testing. So we need to know more about test cases
and how these test cases are designed. The most desired or obvious expectation from a test case
is that it should be able to find the most errors with the least amount of time and effort.
A software product can be tested in two ways. In the first approach only the overall functioning
of the product is tested. Inputs are given and outputs are checked. This approach is called black
box testing. It does not care about the internal functioning of the product.
The other approach is called white box testing. Here the internal functioning of the product is
tested. Each procedure is tested for its accuracy. It is more intensive than black box testing. But
for the overall product both these techniques are crucial. There should be sufficient number of
tests in both categories to test the overall product.
White Box Testing
White box testing focuses on the internal functioning of the product. For this different
procedures are tested. White box testing tests the following
Loops of the procedure
Decision points
Execution paths
For performing white box testing, basic path testing technique is used. We will illustrate how to
use this technique, in the following section.
Basis Path Testing
Basis path testing is a white box testing technique proposed by Tom McCabe. These tests
guarantee that every statement in the program is executed at least one time during testing. The basis
set is the set of all the independent execution paths of a procedure.
Flow graph Notation
Before the basis path procedure is discussed, it is important to know the simple notation used for the
representation of control flow. This notation is known as a flow graph. A flow graph depicts
control flow and uses the following constructs.
These individual constructs combine together to produce the flow graph for a particular
procedure.
Basic terminology associated with the flow graph
Node: Each flow graph node represents one or more procedural statements. Each node that
contains a condition is called a predicate node.
Edge: Edge is the connection between two nodes. The edges between nodes represent flow of
control. An edge must terminate at a node, even if the node does not represent any useful
procedural statements.
Region: A region in a flow graph is an area bounded by edges and nodes.
Cyclomatic complexity: An independent path is an execution flow from the start point to the end
point. Since a procedure contains control statements, there are various execution paths depending
upon the decisions taken at the control statements. Cyclomatic complexity gives the number of such
independent execution paths. Thus it provides an upper bound on the number of tests that must be
produced, because for each independent path a test should be conducted to see if it actually
reaches the end point of the procedure or not.
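For a flow graph with E edges and N nodes, McCabe's measure is V(G) = E - N + 2. A tiny sketch of the arithmetic (the class name and the example graph sizes are illustrative):

```java
public class CyclomaticComplexity {
    // McCabe's cyclomatic complexity: V(G) = E - N + 2,
    // where E is the number of edges and N the number of nodes
    // in the procedure's flow graph.
    static int cyclomatic(int edges, int nodes) {
        return edges - nodes + 2;
    }

    public static void main(String[] args) {
        // A flow graph with 9 edges and 8 nodes has 3 independent
        // paths, so 3 basis-path test cases bound the test set.
        System.out.println(cyclomatic(9, 8)); // prints "3"
    }
}
```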
Black Box Testing
Black box testing tests the overall functional requirements of the product. Inputs are supplied to the
product and the outputs are verified. If the outputs obtained are the same as the expected ones, then the
product meets the functional requirements. In this approach internal procedures are not
considered. It is conducted at later stages of testing. Now we will look at black box testing
techniques.
Black box testing uncovers the following types of errors:
1. Incorrect or missing functions
2. Interface errors
3. External database access errors
4. Performance errors
5. Initialization and termination errors
The following techniques are employed during black box testing
Equivalence Partitioning
In equivalence partitioning, a test case is designed so as to uncover a group or class of errors. This
limits the number of test cases that might otherwise need to be developed.
Here the input domain is divided into classes or groups of data. These classes are known as
equivalence classes, and the process of making equivalence classes is called equivalence
partitioning. Equivalence classes represent a set of valid or invalid states for an input condition.
An input condition can be a range, a specific value, a set of values, or a Boolean value. Then,
depending upon the type of input, equivalence classes are defined. For defining equivalence classes the
following guidelines should be used.
following guidelines should be used.
1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, then one valid and one invalid equivalence class are
defined.
For example, suppose the range is 0 < count < 1000. Then form one valid equivalence class with
that range of values and two invalid equivalence classes, one with values less than the lower
bound of the range (i.e., count < 0) and the other with values higher than the upper bound (count >
1000).
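A hypothetical validator for this range, with one representative test value per equivalence class (the class and method names are illustrative):

```java
public class EquivalencePartitioningDemo {
    // Hypothetical validator for the range 0 < count < 1000 above.
    static boolean isValidCount(int count) {
        return count > 0 && count < 1000;
    }

    public static void main(String[] args) {
        // One representative value per equivalence class is enough:
        System.out.println(isValidCount(500));  // valid class: prints "true"
        System.out.println(isValidCount(-5));   // below the range: prints "false"
        System.out.println(isValidCount(1500)); // above the range: prints "false"
    }
}
```

Three test cases cover the three classes; any other value from the same class would be expected to behave identically, which is what keeps the test suite small.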
Boundary Value Analysis
It has been observed that programs that work correctly for a set of values in an equivalence class
fail on some special values. These values often lie on the boundary of the equivalence class. Test
cases that have values on the boundaries of equivalence classes are therefore likely to be error-producing,
so selecting such test cases for those boundaries is the aim of boundary value analysis.
In boundary value analysis, we choose input for a test case from an equivalence class such that
the input lies at the edge of the equivalence class. Boundary values for each equivalence class,
including the equivalence classes of the output, should be covered. Boundary value test cases are
also called "extreme cases".
Hence, a boundary value test case is a set of input data that lies on the edge or boundary of a
class of input data, or that generates output that lies at the boundary of a class of output data.
In case of ranges, for boundary value analysis it is useful to select boundary elements of the
range and an invalid value just beyond the two ends for the two invalid equivalence classes.
For example, if the range is 0.0 <= x <= 1.0, then the test cases are 0.0, 1.0 for valid inputs and –
0.1 and 1.1 for invalid inputs.
For boundary value analysis, the following guidelines should be used:
1. For input ranges bounded by a and b, test cases should include the values a and b and
values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to
exercise the minimum and maximum numbers and values just above and below these
limits.
3. If internal data structures have prescribed boundaries, a test case should be designed to
exercise the data structure at its boundary.
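The 0.0 <= x <= 1.0 example above can be exercised as a sketch (the class and method names are illustrative):

```java
public class BoundaryValueDemo {
    // Hypothetical validator for the range 0.0 <= x <= 1.0 above.
    static boolean inRange(double x) {
        return x >= 0.0 && x <= 1.0;
    }

    public static void main(String[] args) {
        // Boundary value test cases: the two edges of the range,
        // plus a value just outside each edge for the invalid classes.
        System.out.println(inRange(0.0));  // prints "true"
        System.out.println(inRange(1.0));  // prints "true"
        System.out.println(inRange(-0.1)); // prints "false"
        System.out.println(inRange(1.1));  // prints "false"
    }
}
```

A common defect this catches is an off-by-one comparison (for example `x > 0.0` instead of `x >= 0.0`), which passes mid-range values but fails exactly at the boundary.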
Now we know how the testing of a software product is done. But testing software is not an easy
task, since the size of the software developed for various systems is often very large. Testing needs a
specific systematic procedure which should guide the tester in performing different tests at the
correct time. This systematic procedure is a testing strategy, which should be followed in order to
test the developed system thoroughly. Performing testing without a testing strategy would be
very cumbersome and difficult. Testing strategies are discussed in the following pages of this
chapter.
Strategic Approach towards Software Testing
Developers are under great pressure to deliver more complex software on increasingly aggressive
schedules and with limited resources. Testers are expected to verify the quality of such software
in less time and with even fewer resources. In such an environment, solid, repeatable and
practical testing methods and automation are a must.
In a software development life cycle, bugs can be injected at any stage, and the earlier they are identified, the greater the cost saving. Different techniques exist for detecting and eliminating the bugs that originate in each phase.
A software testing strategy integrates software test case design techniques into a well-planned series of steps that result in the successful construction of software. Any test strategy incorporates test planning, test case design, test execution, and the collection and evaluation of the resultant data.
Testing is a set of activities planned and conducted so systematically that it leaves no scope for rework or residual bugs.
Various software testing strategies have been proposed, and all provide a template for testing. The characteristics common and important to these strategies are:
Testing begins at the module level and works "outward": tests are first carried out at the module level, where the major functionality is exercised, and then proceed toward the integration of the entire system.
Different testing techniques are appropriate at different points in time: under different circumstances, different testing methodologies must be used, and this choice is a decisive factor for software robustness and scalability. "Circumstance" essentially means the level at which the testing is being done (unit testing, integration testing, system testing, etc.) and the purpose of the testing.
The developer of the software conducts testing, and if the project is big, there is a testing team: every programmer should test and verify that their results conform to the specification given to them while coding. Where programs are large or coding is a collective effort, the responsibility for testing lies with the team as a whole.
Debugging and testing are altogether different processes: testing aims to find errors, whereas debugging is the process of fixing them. Nevertheless, debugging should be incorporated into any testing strategy.
A software testing strategy must include low-level tests that verify the source code and high-level tests that validate system functions against customer requirements.
Unit Testing
We know that the smallest unit of software design is a module. Unit testing is performed to check the functionality of these units, and it is done before the modules are integrated to build the overall system. Since modules are small, individual programmers can unit test their respective modules, so unit testing is basically white-box oriented. Procedural design descriptions are used, and control paths are tested to uncover errors within individual modules. Unit testing can be done for more than one module at a time.
The following tests are performed during unit testing:
Module interface test: it is checked whether information properly flows into the program unit and properly comes out of it.
Local data structures: these are tested to verify that the local data within the unit (module) is stored properly.
Boundary conditions: software is often observed to fail at boundary conditions, so these are tested to ensure that the program works properly at its boundary conditions.
Independent paths: all independent paths are tested to verify that they perform their task properly and terminate at the end of the program.
Error handling paths: these are tested to check whether errors are handled properly.
See Fig. 9.4 for an overview of unit testing.
Fig 9.4 Unit Testing
Unit Testing Procedure
Fig 9.5 Unit Test Procedure
Unit testing begins after the source code has been developed, reviewed, and verified for correct syntax. The design documents help in creating the test cases. Although each module performs a specific task, it is not a standalone program: it may need data from another module, or it may need to send data or control information to another module. Since in unit testing each module is tested individually, the need to obtain data from, or pass data to, other modules is met by using stubs and drivers, which simulate those modules. A driver is a program that accepts test case data, passes that data to the module being tested, and prints the relevant results. Similarly, a stub is a program used to replace modules that are subordinate to the module under test; it performs minimal data manipulation, prints verification of entry, and returns. Fig. 9.5 illustrates this unit test procedure.
Drivers and stubs represent overhead, because they must be developed but are not part of the product. This overhead can be reduced by keeping them very simple.
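The stub-and-driver arrangement described above can be sketched as follows. All names here are hypothetical illustrations: `compute_total` stands for the module under test, `tax_lookup_stub` replaces a subordinate module, and `driver` feeds test case data in and prints the results.

```python
# Module under test (hypothetical): computes a total price, relying
# on a subordinate module to supply the tax rate.
def compute_total(amount, tax_lookup):
    rate = tax_lookup(amount)
    return round(amount * (1 + rate), 2)

# Stub: replaces the subordinate tax-rate module. It does minimal
# data manipulation, prints verification of entry, and returns.
def tax_lookup_stub(amount):
    print(f"stub entered with amount={amount}")
    return 0.10  # fixed, predictable rate

# Driver: accepts test case data, passes it to the module under
# test, and prints the relevant results.
def driver(test_cases):
    results = []
    for amount in test_cases:
        result = compute_total(amount, tax_lookup_stub)
        print(f"compute_total({amount}) -> {result}")
        results.append(result)
    return results

driver([100.0, 0.0])  # exercises the module with the stub in place
```

Because the stub and driver are this simple, their overhead stays low, which is exactly the point made above.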
Once the individual modules have been tested, they are integrated to form larger program structures. The next stage of testing therefore deals with the errors that occur while integrating modules, and is called integration testing, which is discussed next.
Integration Testing
Unit testing ensures that all modules have been tested and that each works properly in isolation. It does not guarantee that the modules will work correctly when integrated into a whole system; in practice, many errors crop up when modules are joined together. Integration testing uncovers the errors that arise when modules are integrated to build the overall system.
Following types of errors may arise:
Data can be lost across an interface; that is, data coming out of one module does not reach the intended module.
Sub-functions, when combined, may not produce the desired major function.
Individually acceptable imprecision may be magnified to unacceptable levels. For example, suppose two modules each produce a result with an acceptable imprecision of +-10 units. If the two results are multiplied when the modules are combined, the imprecision can grow to +-100 units, which may be unacceptable to the system.
Global data structures can present problems. For example, suppose several modules in a system share a global memory area. Once the modules are combined, all of them access the same global memory, and because so many functions access that memory, low-memory problems can arise.
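The imprecision magnification in the multiplication example can be worked out explicitly. The helper `product_error_bound` below is hypothetical; it evaluates the product at the corners of the two +-10 intervals:

```python
# Worst-case deviation of (a +- eps) * (b +- eps) from a * b: evaluate
# the product at all four corner combinations of the two intervals.
def product_error_bound(a, b, eps=10):
    corners = [(a + sa * eps) * (b + sb * eps)
               for sa in (-1, 1) for sb in (-1, 1)]
    return max(abs(c - a * b) for c in corners)

# Even when both nominal values are 0, the cross term 10 * 10 alone
# pushes the combined imprecision to +-100.
assert product_error_bound(0, 0) == 100
```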
There are two approaches to integration testing: top-down integration and bottom-up integration. We now discuss each of them.
1. Top-Down Integration in Integration Testing
Top-down integration is an incremental approach to the construction of the program structure. In top-down integration, the control hierarchy is identified first, that is, which module drives or controls which other module. The main control module, together with the modules subordinate (and ultimately subordinate) to it, is incrementally integrated into a larger structure.
Integration proceeds in either a depth-first or a breadth-first manner.
Fig. 9.6 Top down integration
In the depth-first approach, all modules on a major control path are integrated first. See Fig. 9.6; here the sequence of integration would be (M1, M2, M3), M4, M5, M6, M7, and M8. In the breadth-first approach, all modules directly subordinate at each level are integrated together. Using breadth-first for Fig. 9.6, the sequence of integration would be (M1, M2, M8), (M3, M6), M4, M7, and M5.
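The two integration orders can be computed from a control hierarchy. The hierarchy below is a hypothetical stand-in (the actual structure of Fig. 9.6 may differ), so the resulting sequences illustrate the traversals rather than reproduce the figure's:

```python
# Hypothetical control hierarchy: each module maps to the modules
# directly subordinate to it.
hierarchy = {
    "M1": ["M2", "M5", "M7"],
    "M2": ["M3", "M4"],
    "M5": ["M6"],
    "M7": ["M8"],
    "M3": [], "M4": [], "M6": [], "M8": [],
}

def depth_first_order(root, tree):
    """Depth-first: integrate whole control paths before moving on."""
    order = [root]
    for child in tree[root]:
        order.extend(depth_first_order(child, tree))
    return order

def breadth_first_order(root, tree):
    """Breadth-first: integrate all modules directly subordinate at
    each level before descending to the next level."""
    order, level = [], [root]
    while level:
        order.extend(level)
        level = [c for m in level for c in tree[m]]
    return order

print(depth_first_order("M1", hierarchy))
print(breadth_first_order("M1", hierarchy))
```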
The other approach to integration, bottom-up integration, is discussed next.
2. Bottom-Up Integration in Integration Testing
Bottom-up integration testing starts at the atomic-module level, the lowest level in the program structure. Since modules are integrated from the bottom up, the processing required by modules subordinate to a given level is always available, so stubs are not required in this approach. Bottom-up integration is implemented with the following steps:
1. Low-level modules are combined into clusters that perform a specific software sub-function. These clusters are sometimes called builds.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The build is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
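The four steps above can be sketched with a toy cluster. All names here (`total`, `count`, `average`, `build_driver`) are hypothetical illustrations of atomic modules, a build, and its driver:

```python
# Atomic modules at the lowest level of the program structure.
def total(values):
    return sum(values)

def count(values):
    return len(values)

# Build: a cluster of atomic modules performing one sub-function.
# No stubs are needed, because its subordinates already exist.
def average(values):
    return total(values) / count(values)

# Driver: a control program that coordinates test case input and
# output for the build; it is removed once the build is integrated.
def build_driver(test_cases):
    return [average(case) for case in test_cases]

build_driver([[1, 2, 3], [10, 20]])  # step 3: the build is tested
```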
Fig. 9.7 (a) Program Modules (b)Bottom-up integration applied to program modules in (a)
Fig. 9.7 shows how bottom-up integration is done. Whenever a new module is added as part of integration testing, the program structure changes: there may be new data flow paths, new I/O, or new control logic. These changes may cause problems with functions in previously tested modules that were working fine before.
To detect such errors, regression testing is performed. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects through the program. It helps to ensure that changes (whether made in response to testing or for other reasons) do not introduce undesirable behavior or additional errors.
As integration testing proceeds, the number of regression tests can grow quite large. The regression test suite should therefore be designed to include only those tests that address one or more classes of errors in each of the major program functions; it is impractical and inefficient to re-execute every test for every program function once a change has occurred.
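Such a targeted regression suite can be sketched as a mapping from major program functions to the tests that address their error classes. The function names and test names below are hypothetical:

```python
# Hypothetical mapping from major program functions to the regression
# tests that address their classes of errors (drawn from the test plan).
tests_for_function = {
    "login":    ["test_login_ok", "test_login_bad_password"],
    "transfer": ["test_transfer_ok", "test_transfer_overdraft"],
    "report":   ["test_report_totals"],
}

def select_regression_tests(changed_functions):
    """Re-execute only the tests addressing the changed functions,
    rather than the entire suite, after every change."""
    selected = []
    for fn in changed_functions:
        selected.extend(tests_for_function.get(fn, []))
    return selected
```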
Validation Testing
After integration testing we have an assembled package that is free from module and interfacing errors, and a final series of software tests, validation testing, begins. Validation succeeds when the software functions in a manner that can be expected by the customer. The major question is: what are the customer's expectations? They are defined in the software requirements specification produced during analysis of the system. The specification contains a section titled "Validation Criteria", and the information in that section forms the basis for validation testing.
Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. A test plan describes the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
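A black-box validation test of this kind can be sketched as follows. The requirement is invented for illustration (a file of up to 1 MB must transfer within 2.0 seconds); only inputs and observed behavior are examined, never the internals:

```python
# Hypothetical requirement from the "Validation Criteria" section:
# transferring a file of up to 1 MB must complete within 2.0 seconds.
def transfer_time_conforms(file_size_bytes, elapsed_seconds):
    if file_size_bytes <= 1_000_000:
        return elapsed_seconds <= 2.0
    return True  # the requirement does not constrain larger files

assert transfer_time_conforms(500_000, 1.451)        # accepted
assert not transfer_time_conforms(500_000, 159.776)  # deficiency listed
```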
After each validation test case has been conducted, one of two possible conditions exists:
1. The function or performance characteristics conform to the specification and are accepted, or
2. A deviation from the specification is uncovered and a deficiency list is created.
A deviation or error discovered at this stage of a project can rarely be corrected prior to the scheduled completion, so it is often necessary to negotiate with the customer to establish a method for resolving the deficiencies.
Alpha and Beta testing
For a software developer, it is difficult to foresee how a customer will really use a program: instructions for use may be misinterpreted, strange combinations of data may be used regularly, and output that seemed clear to the tester may be unintelligible to a user in the field.
When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. An acceptance test is conducted by the customer rather than by the developer, and it can range from an informal "test drive" to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time.
If software is developed as a product to be used by many customers, it is impractical to perform
formal acceptance tests with each one. Most software product builders use a process called alpha
and beta testing to uncover errors that only the end user seems able to find.
The customer conducts alpha testing at the developer's site. The software is used in a natural setting, with the developer present, recording errors and usage problems. Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end users of the software. Here, the developer is not present; the beta test is therefore a live application of the software in an environment that cannot be controlled by the developer. The customer records all problems encountered during beta testing and reports them to the developer at regular intervals. As a result of the problems reported during beta testing, the developer makes modifications and then prepares to release the software product to the entire customer base.
REPORTS
The sender module sends the file to the receiver via the server.
Transferring the file from the sender to the receiver module takes 1.451 sec.
The server module receives the file from the sender module.
After the file is transferred from the sender to the receiver module, it is stored in the receiver's memory system.
After the file is transferred from the sender to the receiver module, a result module appears at the receiver side.
This module simulates the file being attacked by a virus or attacker malware.
This message box informs the sender and the receiver that the file data has been corrupted.
Transferring the file from the sender to the receiver module after the virus attack takes 159.776 sec.
After the file is transferred from the sender to the receiver module under a virus attack, a result module appears at the receiver side.
The file data is scanned with PDM1, PDM2, and PDM3, but these mechanisms are not able to remove the virus.
The file data is scanned with PDM4; this mechanism finds the virus and is able to remove it.
Transferring the file from the sender to the receiver module after scanning the attacked file data takes 0.281 sec.
After the file is transferred from the sender to the receiver module following the scan of the attacked file data, a result module appears at the receiver side.
CONCLUSION
This paper utilizes game theory to propose a number of puzzle-based defenses against flooding attacks. It is shown that the interactions between an attacker who launches a flooding attack and a defender who counters the attack with a puzzle-based defense can be modeled as an infinitely repeated game of discounted payoffs. The solution concepts for this type of game are then deployed to find the solutions, i.e., the best strategy a rational defender can adopt in the face of a rational attacker. In this way, optimal puzzle-based defense strategies are developed. More specifically, four defense mechanisms are proposed. PDM1 is derived from the open-loop solution concept, in which the defender chooses his actions regardless of what has happened in the game history. This mechanism is applicable in defeating both single-source and distributed attacks, but it cannot support the higher payoffs that are feasible in the game. PDM2 resolves this by using closed-loop solution concepts, but it can defeat only a single-source attack. PDM3 extends PDM2 and deals with distributed attacks; this defense is based on the assumption that the defender knows the size of the attack coalition. Finally, PDM4 is the ultimate defense mechanism, in which the size of the attack coalition is assumed to be unknown.
The mechanisms proposed in this paper can also be integrated with reactive defenses to achieve synergistic effects. A complete flooding attack solution is likely to require some kind of defense while the attack traffic is being identified, and the mechanisms of this paper can provide such defenses. Conversely, the estimates made by a reactive mechanism can be used to tune the mechanisms proposed in this paper.
REFERENCES
[1] D. Moore, C. Shannon, D.J. Brown, G.M. Voelker, and S. Savage, “Inferring Internet Denial-
of-Service Activity,” ACM Trans. Computer Systems, vol. 24, no. 2, pp. 115-139, May 2006.
[2] A. Hussain, J. Heidemann, and C. Papadopoulos, “A Framework for Classifying Denial of
Service Attacks,” Proc. ACM SIGCOMM ’03, pp. 99-110, 2003.
[3] A.R. Sharafat and M.S. Fallah, “A Framework for the Analysis of Denial of Service
Attacks,” The Computer J., vol. 47, no. 2, pp. 179-192, Mar. 2004.
[4] C.L. Schuba, I.V. Krsul, M.G. Kuhn, E.H. Spafford, A. Sundaram, and D. Zamboni,
“Analysis of a Denial of Service Attack on TCP,” Proc. 18th IEEE Symp. Security and Privacy,
pp. 208-223, 1997.
[5] Smurf IP Denial-of-Service Attacks. CERT Coordination Center, Carnegie Mellon Univ.,
1998.
[6] Denial-of-Service Tools. CERT Coordination Center, Carnegie Mellon Univ., 1999.
[7] Denial-of-Service Attack via Ping. CERT Coordination Center, Carnegie Mellon Univ.,
1996.
[8] IP Denial-of-Service Attacks. CERT Coordination Center, Carnegie Mellon Univ., 1997.
[9] J. Mirkovic and P. Reiher, “A Taxonomy of DDoS Attacks and DDoS Defense
Mechanisms,” ACM SIGCOMM Computer Communication Rev., vol. 34, no. 2, pp. 39-53, Apr.
2004.
[10] J. Ioannidis and S. Bellovin, “Implementing Pushback: Router- Based Defense Against
DDoS Attacks,” Proc. Network and Distributed System Security Symp. (NDSS ’02), pp. 6-8,
2002.
[11] S. Savage, D. Wetherall, A. Karlin, and T. Anderson, “Practical Network Support for IP
Traceback,” Proc. ACM SIGCOMM ’00, pp. 295-306, 2000.
[12] D. Song and A. Perrig, “Advanced and Authenticated Marking Schemes for IP Traceback,”
Proc. IEEE INFOCOM ’01, pp. 878-886, 2001.