Energy-Efficient Protocol for Cooperative Networks
ABSTRACT
In cooperative networks, transmitting and receiving nodes recruit neighboring nodes to
assist in communication. We model a cooperative transmission link in wireless networks as a
transmitter cluster and a receiver cluster. We then propose a cooperative communication protocol
for establishment of these clusters and for cooperative transmission of data. We derive the upper
bound of the capacity of the protocol, and we analyze the end-to-end robustness of the protocol
to data-packet loss, along with the tradeoff between energy consumption and error rate. The
analysis results are used to compare the energy savings and the end-to-end robustness of our
protocol with two non-cooperative schemes, as well as to another cooperative protocol published
in the technical literature. The comparison results show that, when nodes are positioned on a
grid, there is a reduction in the probability of packet delivery failure by two orders of magnitude
for the values of parameters considered. Up to 80% in energy savings can be achieved for a grid
topology, while for random node placement our cooperative protocol can save up to 40% in
energy consumption relative to the other protocols. The reduction in error rate and the energy
savings translate into increased lifetime of cooperative sensor networks.
LIST OF CONTENTS
List of Figures
List of Tables
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Motivation
1.3.1 Definitions
1.3.2 Abbreviations
1.3.3 Model Diagrams
1.4 Overview
2. Literature Survey
2.1 Introduction
2.2 History
2.3 Purpose
2.4 Requirements
2.5 Technology Used
3. System Analysis
3.1 Existing System
3.1.1 Drawbacks
3.2 Problem Statement
3.3 Proposed System
3.3.1 Advantages
3.4 Feasibility Study
3.4.1 Economic Feasibility
3.4.2 Operational Feasibility
3.4.3 Technical Feasibility
3.5 Algorithm
3.6 Data Dictionary
4. System Requirements Specification
4.1 Introduction
4.2 Purpose
4.3 Functional Requirements
4.4 Non-Functional Requirements
4.5 Hardware Requirements
4.6 Software Requirements
5. System Design
5.1 System Specifications
5.2 System Components
5.3 UML Diagrams
5.4 DFD Diagrams
5.5 ER Diagram
6. Implementation
6.1 Sample Code
7. System Testing
7.1 Testing Methodologies
7.2 Test Cases
7.3 Results and Discussions
8. Conclusion and Future Enhancements
8.1 Conclusion
8.2 Scope for Future Enhancement
9. References
LIST OF FIGURES
Figure No. Figure Name
1 V Model
2 Class Diagram
3 Use Case Diagram
4 Sequence Diagram
5 Collaboration Diagram
6 State Chart Diagram
7 Activity Diagram
8 Deployment Diagram
9 Component Diagram
10 DFD Diagram
11 Software Testing Life Cycle
12 Login Form
13 Login Validation Form
14 Registration Form
15 Registration Validation Form
LIST OF TABLES
Table No. Table Name
1 Positive Test Case
2 Negative Test Case
1 INTRODUCTION
In wireless sensor networks, nodes have limited energy resources and,
consequently, protocols designed for sensor networks should be energy-efficient. One recent
technology that allows energy saving is cooperative transmission. In cooperative transmission,
multiple nodes simultaneously receive, decode, and retransmit data packets. In this paper, as
opposed to previous works, we use a cooperative communication model with multiple nodes on
both ends of a hop and with each data packet being transmitted only once per hop.
In our model of cooperative transmission, every node on the path from the source
node to the destination node becomes a cluster head, with the task of recruiting other nodes in its
neighborhood and coordinating their transmissions. Consequently, the classical route from a
source node to a sink node is replaced with a multihop cooperative path, and the classical point-to-point
communication is replaced with many-to-many cooperative communication. The path
can then be described as “having a width,” where the “width” of a path at a particular hop is
determined by the number of nodes on each end of the hop. In our example, the width of each
intermediate hop is 3. Of course, this “width” does not need to be uniform along a path. Each
hop on this path represents communication from many geographically close nodes, called a
sending cluster, to another cluster of nodes, termed a receiving cluster. The nodes in each cluster
cooperate in transmission of packets, which propagate along the path from one cluster to the
next. Our model of cooperative transmission for a single hop is further depicted in Fig. 2(a).
Every node in the receiving cluster receives from every node in the sending cluster. Sending
nodes are synchronized, and the power level of the received signal at a receiving node is the sum
of all the signal powers coming from all the sender nodes. This reduces the likelihood of a packet
being received in error. We assume that some mechanism for error detection is incorporated into
the packet format, so a node that does not receive a packet correctly will not transmit on the next
hop in the path. Our cooperative transmission protocol consists of two phases. In the routing
phase, the initial path between the source and the sink nodes is discovered as an underlying
“one-node-thick” path. Then, the path undergoes a thickening process in the “recruiting-and-
transmitting” phase. In this phase, the nodes on the initial path become cluster heads, which
recruit additional adjacent nodes from their neighborhood.
Because the cluster heads recruit nodes from their immediate neighborhood,
the inter-cluster distances are significantly larger than the distances between nodes in the same
cluster. Recruiting is done dynamically and per packet as the packet traverses the
path. When a packet is received by a cluster head of the receiving cluster, the cluster head
initiates the recruiting by the next node on the “one-node-thick” path. Once this recruiting is
completed and the receiving cluster is established, the packet is transmitted from the sending
cluster to the newly established receiving cluster.
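The per-hop behaviour described above can be sketched in code. This is a minimal illustration of the two phases, not the paper's actual packet formats; the node names, cluster size, and data structures are assumptions made for the example.

```python
# Sketch of the two-phase cooperative protocol: a routing phase builds the
# "one-node-thick" path, then a per-packet "recruiting-and-transmitting"
# phase thickens each hop. All names and structures are illustrative.

def routing_phase(initial_path):
    """Phase 1: each node on the 'one-node-thick' path becomes a cluster head."""
    return [{"head": node, "members": [node]} for node in initial_path]

def recruit(cluster, neighbors, max_size=3):
    """Phase 2a: a receiving cluster head recruits adjacent nodes per packet."""
    cluster["members"] = [cluster["head"]] + list(neighbors)[:max_size - 1]
    return cluster

def forward_packet(clusters, neighbor_map):
    """Phase 2b: the packet hops from sending cluster to receiving cluster."""
    hops = []
    for i in range(len(clusters) - 1):
        receiving = recruit(clusters[i + 1], neighbor_map[clusters[i + 1]["head"]])
        # every member of the sending cluster transmits to every member
        # of the newly formed receiving cluster
        hops.append((clusters[i]["members"], receiving["members"]))
    return hops

neighbor_map = {"A": [], "B": ["b1", "b2"], "C": ["c1"], "D": []}
clusters = routing_phase(["A", "B", "C", "D"])
for senders, receivers in forward_packet(clusters, neighbor_map):
    print(senders, "->", receivers)
```

Note how the receiving cluster of one hop becomes the sending cluster of the next, so the path's "width" follows the packet along the route.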
1.1 Purpose:
The routing phase of the protocol, which is responsible for finding a “one-node-
thick” route from the source node to the sink node, could be implemented using one of the many
previously published routing protocols. For the purpose of performance evaluation, we chose to
implement this phase using the Ad hoc On-demand Distance-Vector routing protocol (AODV)
with some modifications and with the links’ transmissions energy used as the links’ cost.
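The key idea of this phase is that link cost equals the link's transmission energy. AODV is a distributed protocol; the centralized Dijkstra sketch below only illustrates the energy-as-cost idea on a toy topology (the link energies are made-up numbers).

```python
# Minimum-energy route selection with link transmission energy as the
# link cost. This is a centralized stand-in for the modified AODV used
# in the paper; topology and energies are illustrative.
import heapq

def min_energy_route(links, src, dst):
    """links: {(u, v): transmission energy}; returns (total energy, path)."""
    adj = {}
    for (u, v), e in links.items():
        adj.setdefault(u, []).append((v, e))
        adj.setdefault(v, []).append((u, e))
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            return cost, path
        for nxt, e in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + e, nxt, path + [nxt]))
    return float("inf"), []

links = {("S", "A"): 1.0, ("A", "T"): 1.0, ("S", "T"): 4.0}
print(min_energy_route(links, "S", "T"))  # the two-hop route costs less
```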
1.2 Scope:
The main novelty of our paper is that the “recruiting-and-transmitting”
phase is done dynamically per hop, starting from the source
node and progressing, hop by hop, as the packet moves along the path to
the sink node. Once a data packet is received at a receiving cluster of the
previous hop along the path, the receiving cluster now becomes the sending
cluster, and the new receiving cluster will start forming. The next node on
the “one-node-thick-path” becomes the cluster head of the receiving cluster.
The receiving cluster is formed by the cluster head recruiting neighbor nodes
through exchange of short control packets. Then, the sending cluster head
synchronizes its nodes, at which time the nodes transmit the data packet to
the nodes of the receiving cluster.
1.3 Motivation:
As described in the Introduction, every node in the receiving cluster
receives from every node in the sending cluster, and the power level of the received signal at a
receiving node is the sum of all the signal powers coming from all the sender nodes. This
reduced likelihood of a packet being received in error, together with the resulting energy
savings, motivates the cooperative protocol proposed in this paper.
1.3.1 Definitions
In the CAN protocol and the one-path scheme, the
cooperative path is set along the middle row. In the disjoint-paths scheme,
paths are formed from nodes in the same row, except that the source and
the sink nodes are in the middle row. Data packets arrive at the source node
according to a Poisson process with a rate that corresponds to the
maximum load that the network can carry.
1.3.3 Module Description:
1. Recruiting, Transmitting & receiving
2. Route Construction
3. Data transmission using CANs
4. Message details with Route paths
1. Recruiting, Transmitting & receiving:
Our cooperative transmission protocol consists of two phases. In the routing
phase, the initial path between the source and the sink nodes is discovered as an underlying
“one-node-thick” path. Then, the path undergoes a thickening process in the “recruiting-and-
transmitting” phase. In this phase, the nodes on the initial path become cluster heads, which
recruit additional adjacent nodes from their neighborhood.
Recruiting is done dynamically and per packet as the packet traverses the
path. When a packet is received by a cluster head of the receiving cluster, the cluster head
initiates the recruiting by the next node on the “one-node-thick” path. Once this recruiting is
completed and the receiving cluster is established, the packet is transmitted from the sending
cluster to the newly established receiving cluster.
2. Route Construction:
Upon receiving the CL packet from node 5, node 2 sends a confirm (CF) packet to the
nodes in its sending cluster (nodes 1 and 3) to synchronize their transmission of the data packet.
The CF packet contains the waiting-time-to-send and the transmission power level. The
transmission power level is the total transmission power (a protocol-selectable parameter)
divided by the number of nodes in the sending cluster. In the case of our example, the total
transmission power is divided by 3 (nodes 1–3 are cooperating in sending). After the
waiting-time-to-send expires, sending cluster nodes 1–3 send the data packet to the receiving cluster nodes.
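The power split carried in the CF packet can be sketched as follows; the total-power value here is an illustrative assumption, since the paper treats it as a protocol-selectable parameter.

```python
# The CF packet carries the per-node transmission power: the total
# transmission power (a protocol-selectable parameter) divided by the
# number of nodes in the sending cluster. P_TOTAL is illustrative.

def cf_power_level(p_total, sending_cluster):
    """Per-node power level advertised in the CF packet."""
    return p_total / len(sending_cluster)

P_TOTAL = 30.0              # total transmission power, illustrative units
senders = [1, 2, 3]         # nodes 1-3 cooperate in sending
print(cf_power_level(P_TOTAL, senders))  # 10.0 per node
```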
3. Data transmission using CANs
The capacity of the CAN protocol degrades with an increase in the transmission range, as a
transmission on one hop blocks a large number of nodes from transmitting other packets, and
hence the network can only carry a lower load. In the CAN protocol, failure to receive a packet
results in a large reduction in the success probability on the next hop. The disjoint-paths scheme
has larger energy consumption, as demonstrated by the effect of the transmission range on the
total energy consumption. Here, we sum the energy consumption for all packets
transmitted (control and data packets). Our cooperative transmission protocol saves between 6%
and 20% of the energy consumption compared to the CAN protocol and between 10% and 40%
of the energy consumption compared to the disjoint-paths scheme. As the transmission range
increases, the contention increases and the noise power increases. This increases the energy
consumption. The elevated contention increases the retransmission of control and data packets,
which, in turn, increases the total energy consumption.
4. Message details with Route paths:
A previously published work proposes and evaluates the performance of a cross-layer
framework that uses virtual multiple-input–single-output (MISO) links for MANETs and shares
some similarity with our paper. However, there are some major differences between the two
works. On the physical layer, that architecture is based on “virtual MISO,” also
referred to as a “virtual antenna array.” As pointed out in that paper, “nodes simultaneously
transmit and/or jointly receive appropriately encoded signals.” This model is entirely different
from ours, where we use a MISO system with orthogonal transmissions. On the MAC layer,
that framework relies on knowledge of the neighbors to select the cooperating nodes; to achieve
this, it assumes that the list of neighbors is obtained from the HELLO messages of the routing protocol.
1.4 Overview:
We evaluated the performance of cooperative transmission, where nodes in a
sending cluster are synchronized to communicate a packet to nodes in a receiving cluster. In our
communication model, the power of the received signal at each node of the receiving cluster is a
sum of the powers of the transmitted independent signals of the nodes in the sending cluster. The
increased power of the received signal, compared to traditional single-node-to-single-node
communication, leads to overall savings in network energy and to end-to-end robustness to data
loss.
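The summation of signal powers described above can be made concrete with a simple SNR calculation; the noise floor and power values below are illustrative assumptions, not measurements from the paper.

```python
# Received power at a receiving-cluster node is the sum of the powers of
# the synchronized senders' independent signals; a higher total raises
# the SNR and lowers the error likelihood. Values are illustrative.
import math

def received_power_sum(sender_powers_mw):
    """Sum of the signal powers from all synchronized senders (mW)."""
    return sum(sender_powers_mw)

def snr_db(received_mw, noise_mw):
    return 10 * math.log10(received_mw / noise_mw)

NOISE_MW = 0.05                                   # assumed noise floor
single = snr_db(received_power_sum([1.0]), NOISE_MW)
coop = snr_db(received_power_sum([1.0, 1.0, 1.0]), NOISE_MW)
print(round(single, 1), round(coop, 1))  # 13.0 17.8
```

Three cooperating senders give roughly a 4.8 dB gain over a single sender in this toy setting, which is the source of the protocol's robustness.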
2. LITERATURE SURVEY
2.1 INTRODUCTION
The problem of energy-efficient routing in wireless networks that support cooperative
transmission was formulated in earlier work, which presents two energy-efficient approximation
algorithms for finding a cooperative route in wireless networks. The two algorithms are designed
such that each hop consists of multiple sender nodes and one receiver node. One of the
algorithms (CAN) is used throughout this paper for performance comparison.
Other works focus on MAC-layer design for networks with cooperative transmission. In one,
when no acknowledgement is received from the destination after a timeout, the cooperative nodes
that correctly received the data retransmit it. Only one cooperative node retransmits at any
time, and the other cooperative nodes flush their copies once they hear the retransmission. Hence,
this work focuses on reducing transmission errors, without benefiting from the energy
savings of simultaneous transmissions. In another, high-rate nodes help low-rate nodes by forwarding
their transmissions, and the work describes how the helper nodes are discovered. Similarly, only
one node can cooperate at a time, and simultaneous transmissions are not used, hence the energy
savings are not considered. Likewise, a further work has only one node cooperating in forwarding the data.
The IEEE 802.11 protocol has also been extended to support multiple antennas per node. Some
works use a model with only one helper node at each hop in addition to the sender and the
receiver. Another model utilizes multiple nodes to forward the data, but only one node can
transmit at any time. Several good tutorial papers on cooperative transmission have been
published. As most of the current works look at the cooperation from the transmitter side only,
our paper differs in that our communication model includes groups of cooperating nodes at both
sides of the transmission link with the purpose of reduction in energy consumption. Similar to
multiple-input–multiple-output (MIMO) communications, the main gain of cooperative
transmission comes from the fact that there is limited correlation between communications from
different transmitters. The increase in the degree of freedom of signal detection decreases the bit
error rate. Consequently, the gain of cooperation is similar in nature to what is achieved by
MIMO techniques. Of course, there are substantial differences in the environment and in the
operation between cooperative transmission and MIMO.
In the MIMO systems, each node is equipped with multiple antennas. Information is
transmitted from the sender node by multiple antennas and received by multiple antennas at the
receiver node. The close proximity of the antennas at the transmitting nodes and of the antennas
at the receiving nodes makes synchronization easier to implement. The ability of nodes to sense
the carrier and to measure the interference level can be used to decide on the number of antennas
that are employed for transmission. On the contrary, in cooperative transmission, the
synchronization of transmissions of the relatively dispersed cooperating nodes necessitates a
more elaborate protocol. A protocol is also required to identify the neighboring nodes as
potential cooperators and to make a selection of the cooperating nodes.
Moreover, due to the geographical dispersion of the cooperating nodes, the protocols in
cooperative networks need to be distributed in their operation. One published MAC protocol for
MIMO systems is based on a centralized cluster architecture. This protocol uses
clustering mechanisms like LEACH. Nodes in a cluster cooperate to forward the data to only the
next cluster head on the path to the sink. However, the centralized architecture leads to higher
energy usage for the cluster maintenance. In contrast, distributed mechanisms are more efficient
in the cluster maintenance operation and lack the single-point-of-failure vulnerability. Thus, they
may be better suited for sensor or mobile networks. Finally, the significant cost of implementing
multiple antennas at each node would most often make MIMO impractical
in many wireless networks and, in particular, in sensor networks. Prior work proposes and
evaluates the performance of a cross-layer framework that uses virtual multiple-input–single-output
(MISO) links for MANETs and shares some similarity with our paper. However, there are
some major differences between the two works. On the physical layer, that architecture is
based on “virtual MISO,” also referred to as a “virtual antenna array.” As pointed out
in that paper, “nodes simultaneously transmit and/or jointly receive appropriately encoded
signals.” This model is entirely different from ours, where we use a MISO system with
orthogonal transmissions. On the MAC layer, that framework relies on knowledge of the neighbors
to select the cooperating nodes. To achieve this, it assumes that the list of neighbors is obtained
from the HELLO messages of the routing protocol. In our paper, we do not assume any knowledge of the
neighboring nodes. Rather, we design our own “recruiting” protocol. Furthermore, the selection
of the nodes to cooperate is done randomly, without regard to how useful these nodes could be in
improving the cooperative communication. In contrast, in our protocol, selection of cooperating
nodes is done based on an elaborate calculation of the costs of the connections. These costs are
evaluated not only between the source and the collaborating node, but also between the
collaborating nodes and the target nodes in the receiving cluster. Finally, our protocol avoids
transmission collisions by reserving the recruiting nodes and preventing them from transmitting
during the collaboration.
Power Efficiency of Cooperative Communication in Wireless Sensor
Networks
INTRODUCTION
Wireless communication systems have recently gained popularity as their benefits are
being acknowledged and engineering ingenuity continues to overcome their inherent limitations. One
serious disadvantage of wireless sensor networks (WSNs) is that the sensors used in these
networks are often limited to a single battery, so their success is highly dependent
upon power-efficient protocols. Traditional layered protocols often prove inefficient for
WSNs, and determining whether more power-efficient protocols can be developed is important to
the future advancement of these networks. One recently developed protocol, the cross-layer
module (XLM), has been shown to increase network efficiency and reliability in comparison to
traditional layered protocols. XLM melts together the physical, MAC, network, and transport
layers. At the core of XLM is initiative determination. Cooperative
communications is also an alternative to traditional protocols that can increase power efficiency
in WSNs by having several nodes simultaneously transmit a single message to the intended
destination. The result is an energy savings due to the wireless broadcast advantage (WBA). The
WBA stems from the fact that when a wireless node transmits a packet, all nodes within the
transmission radius are able to listen. Usually, nodes for which the packet is not intended ignore
the packet, but it is possible for nodes to accept all incoming packets. When a node can
communicate with a group of nodes by only transmitting once, instead of transmitting to each
receiving node individually, the node is using the WBA.
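The WBA saving can be expressed with a trivial energy model; the per-transmission energy below is an arbitrary illustrative value.

```python
# Wireless broadcast advantage (WBA): one broadcast reaches every node
# within the transmission radius, versus one unicast per receiver.
# The energy model and numbers are illustrative assumptions.

def unicast_energy(e_tx, n_receivers):
    """One separate transmission per intended receiver."""
    return e_tx * n_receivers

def broadcast_energy(e_tx, n_receivers):
    """A single transmission heard by all receivers in range."""
    return e_tx

E_TX = 5.0   # energy per transmission, arbitrary units
print(unicast_energy(E_TX, 4), broadcast_energy(E_TX, 4))  # 20.0 5.0
```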
In this paper, a protocol named Coop XLM that integrates cooperative communication and
XLM is presented. Coop XLM retains the initiative determination of XLM and adds the ability
for multiple nodes to participate in a single transmission. Simulation results show that Coop
XLM provides energy savings when compared to XLM. However, Coop XLM only improves
unique goodput compared to XLM at lower duty cycles; at higher duty cycles, XLM has a
higher unique goodput than Coop XLM.
The Open Systems Interconnection (OSI) layers have been used extensively in wired
networks to provide portability and modularity. Researchers and designers are able to focus on
optimizing a certain layer while treating the remaining layers as black boxes. OSI layers work
well in wired networks because these networks have limited interactions between layers.
However, there are many inter-layer effects in a wireless network. Thus, although the OSI layers
have led to simplicity of system integration for wired networks, they can lead to suboptimal
system performance in wireless ones. Take for example the case of packet loss; the transport
layer will attribute the problem to congestion in the neighboring network layer even though the
problem may be caused by burst interference at the MAC layer. This misdiagnosis due to
abstraction will cause lower throughput. Empirical studies have shown that wireless channel
characteristics, whose impact would traditionally be confined to the physical layer, actually
affect all layers in terms of performance. In addition, the medium access control (MAC) and
routing layers significantly influence each other due to interference. The physical and transport
layers are coupled due to the broadcast nature of wireless communication. As the power level of
each node is increased, the probability of collisions between packets increases. In order to
address inter-layer effects, a protocol that combines several layers needs to be developed.
Some previous works have combined a few layers, such as the MAC and routing layers, or else
they have combined all layers but not implemented the design. In order to jointly optimize
several layers and remove negative inter-layer effects, a unified cross-layer protocol is required.
Cross-Layer Module (XLM) is such a protocol. XLM is a cross-layer protocol designed
specifically for efficiency and reliability in WSN communication. The main concepts of XLM
include receiver-based contention, initiative determination, initiative-based forwarding, local
congestion control and distributed duty cycle operation. XLM melts the transport, network,
MAC and physical layers together. Unlike proactive routing protocols, XLM does not determine
the route from the source to the sink before the need arises. Instead, XLM waits until the source
has data to send to the sink; therefore, it is a reactive protocol. Although the sensor networks
considered in this paper are stationary, some of them have nodes with duty cycles other than
100%. Sleeping nodes are equivalent to dead nodes for a particular communication.
Each live node has two duties: source duty and router duty. The source duty is only necessary
when an event occurs in the node’s transmission radius. In this case, the node is responsible for
generating and transmitting the packet towards the sink. The router duty is the node’s duty to
receive and forward packets that other nodes have generated. A basic assumption is that all nodes
know their own location and that of the sink.
In order to explain XLM, a walk-through of a single communication is presented next.
When a source node wants to send data to the sink, it listens to the medium to check whether
other signals are being broadcast. If the medium is busy, the source node performs a back-off
based on the contention window size. Once the back-off timer expires, the source node broadcasts
a request to send (RTS) to all nodes within its transmission radius (denoted in the original figure
by a dashed circle). The sink may or may not be within the transmission radius. If it is, then the
sink becomes the chosen next hop; otherwise, receiver-based contention determines the next hop.
Wireless communication systems have a multitude of potential applications; however, one
serious disadvantage of wireless sensor networks is that the sensors used in these networks are
often limited to the use of a single battery, and their success is highly dependent upon power
efficient protocols. Traditional layered protocols often prove to be inefficient for WSNs. One
recently developed protocol, the cross-layer module (XLM), is a unified protocol that is designed
specifically for WSNs. XLM has been shown to increase network efficiency and reliability in
comparison to traditional layered protocols. The core of XLM, initiative determination,
maintains a balance between received signal-to-noise ratio (SNR), local congestion and
remaining energy to increase reliability and network lifetime. Cross-layer design can help
achieve better energy performance in WSNs; however, it is important to strive for the advantages
of flexibility, modularity, simplicity and scalability found in traditional layered protocols.
Cooperative communications is also an alternative to traditional protocols that can increase
efficiency in WSNs by having several nodes simultaneously transmit a single message to the
intended destination. In this paper, a new protocol termed Coop XLM, which integrates
cooperative communication with XLM, is created and examined via simulation. Simulation
results indicate that energy is saved with Coop XLM in comparison to XLM, and that Coop XLM
yields higher goodput at lower duty cycles but lower goodput at higher duty cycles. Coop
XLM is a protocol that should be used when power savings are of paramount importance, such as
in the case of WSNs. In future studies, it would also be interesting to use cooperative
communication for increased hop length. Instead of varying transmission power, the cooperative
nodes would instead transmit simultaneously at full power. The transmitted signal would be able
to go farther for the same received SNR. This scheme would decrease the number of hops
required for a packet to reach the sink; thus, the power used to propagate a packet through the
entire network would decrease. This scheme also has its challenges. For example, using the
XLM protocol, how would the cooperative nodes be able to select a common destination node?
There is no guarantee that the nodes that heard the cooperative RTS would be able to send a CTS
with an SNR that would be received by all the cooperative nodes.
Cooperative Communication for Wireless Sensor Networks: A MAC Protocol Solution
The motivation of this paper is to fill the gap between cooperative
communications techniques developed for the physical layer and an appropriate MAC-layer scheme
for Wireless Sensor Networks (WSNs). Cooperative radio techniques, also known as Virtual
Multiple Input Single Output (VMISO), originate from works using diversity techniques with
collocated multiple antennas. Instead of using multiple antennas to take advantage of the
diversity, cooperative communication uses multiple nodes, each equipped with a single antenna,
with a distributed coding scheme to achieve similar gains. The cooperative communication
scheme is also inherently a network solution, and there are issues at multiple levels of the
network stack to solve in order to reach the gain offered by the diversity. This paper mainly
focuses on finding a good tradeoff between the issues at the MAC layer and the performance of
the physical layer. We developed a MAC-layer scheme that uses a distributed algorithm to select a
relay node in an efficient way without extra overhead in signaling and processing. The reliability
of communications that use cooperation helps us design an acknowledgment-agnostic
solution. The paper is organized as follows. In Section 2, we present the related work.
In Section 3, we introduce the design of our solution and the specific mechanisms we developed.
In Section 4, we present performance results and an analysis. Finally, we conclude the paper and
present some future work.
RELATED WORK
Recently, a new class of radio diversity techniques called cooperative communication, derived
from diversity techniques using co-located antennas, has received a lot of interest. Laneman et al.
and others have developed a set of cooperative communication schemes for distributed wireless
networks such as ad hoc networks or sensor networks. Their respective works have paved the way
for many studies using cooperative transmission in a real MAC-layer framework. Ji et al. and Lin
et al. proposed different frameworks for cooperative MAC protocols. These solutions are based
on network-assisted diversity multiple access (NDMA). These authors present a novel
throughput-efficient medium access scheme for WSNs. This scheme enables a node to retrieve a
packet from many previously received packets (MPR). Liu et al. proposed the first
cooperative MAC protocol, called “CoopMAC,” based on the well-known IEEE 802.11 protocol.
They defined two alternative solutions, CoopMAC I and CoopMAC II. In CoopMAC I, a new
frame, HTS (Helper ready To Send), is added to IEEE 802.11 to inform others that an
alternative node (a relay node) will help the sender transmit more efficiently. In
CoopMAC II, the HTS frame is not used; instead, the RTS header is used to advise which node
should act as a relay node. Chou et al. present a solution to perform cooperative communication
in distributed wireless networks. The authors claim that only one relay must participate in the
cooperative transmission. In order to select the relay node among its neighbors, they developed
mechanisms such as a busy tone and a special RTS (Relay-RTS). This RRTS is used with the
classic RTS/CTS mechanism to inform the source and the chosen relay node.
Most of these solutions use extra messages to set up the cooperative process and select
the relay node. In a WSN context, where resources are limited, the use of these signaling
packets should be avoided to reduce power consumption. Here, a new cooperative MAC protocol
tailored for WSNs is proposed. To fulfill the set of constraints imposed by the cooperative
communication scheme and the wireless sensor setting, we have developed an algorithm
allowing automatic selection of the forwarder node (relay node) using only a few message
exchanges during the network setup phase. To optimize the relay-node selection algorithm,
we use a cross-layer design to fetch information from the physical layer. Our simulations
show that the proposed solution brings enhancements in packet delivery ratio and reliability to
the network in the case of a sparse network. Nevertheless, in the case of a massively dense
network, the use of cooperation techniques does not bring any enhancement and can even have a
negative impact on performance. This is because most wireless links in such a network are good
enough to carry traffic with very few losses caused by interference from other transmitting
nodes, so the gain does not offset the overhead of cooperative
communication. This concern leads us to conclude that any MAC-layer scheme exploiting
cooperative communication should be used in an adaptive way in order to be efficient in every case.
In our future work, we will focus on optimizing the group-identifier decision process, with the
aim of finding an even more efficient relay-node selection that is also well suited for sensor
networks.
Cooperative Routing in Static Wireless Networks
We study the problem of routing, cooperation, and energy efficiency in static wireless ad hoc
networks. In these networks, the nodes often spend most of their energy on communication. In
many applications, the nodes are small and have limited and non-replenishable energy supplies.
For this reason, energy conservation is critical for extending the lifetime of these networks, and it
is not surprising that the problem of energy efficiency and energy-efficient communication in ad
hoc networks has received a lot of attention in the past several years. This problem, however, can
be approached from two different angles: energy-efficient route selection algorithms at the
network layer or efficient communication schemes at the physical layer. While each of these two
areas has received a lot of attention separately, not much work has been done on jointly
addressing these two problems. Our analysis in this paper tackles this less studied area.
The amount of energy required to establish a link between two nodes is usually assumed to be
proportional to the distance between the nodes raised to a constant power. This fixed exponent,
referred to as the path-loss exponent, is usually assumed to be between 2 and 4. Due to this
relationship, it is beneficial, in terms of energy saving, to relay the information through a
multihop route. Multihop routing extends the coverage by allowing a node to communicate with
nodes that would have otherwise been outside of its transmission range. The problem of finding a
minimum energy route becomes more interesting once some special properties of the wireless
medium are taken into account. In particular, in this work, we exploit the wireless broadcast
property and the benefit of transmission side diversity to achieve energy savings.
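The energy relationship described above can be illustrated with a small sketch. Energies are normalized (the proportionality constant is omitted, an assumption for illustration only); for a path-loss exponent of 2, relaying through a midpoint halves the transmission energy:

```java
// Illustrative sketch of the d^alpha energy model for a wireless link
// (normalized units; the proportionality constant is omitted).
public class PathLossEnergy {

    /** Energy to cover distance d with path-loss exponent alpha
     *  (alpha is typically between 2 and 4). */
    public static double linkEnergy(double d, double alpha) {
        return Math.pow(d, alpha);
    }

    /** Energy for a single direct link of length d. */
    public static double directEnergy(double d, double alpha) {
        return linkEnergy(d, alpha);
    }

    /** Energy for the same span covered by two equal hops of length d/2. */
    public static double twoHopEnergy(double d, double alpha) {
        return 2 * linkEnergy(d / 2, alpha);
    }
}
```

For d = 10 and alpha = 2 the direct link costs 100 units while the two-hop route costs 50, which is why multihop relaying is beneficial under this model.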
When omnidirectional antennas are used for communication, the signal transmitted by a node is
received by all nodes within a certain radius. For example, in Fig. 1, the signal transmitted by s is
received by both nodes 1 and 2. This property, usually referred to as the wireless broadcast
advantage (WBA), was first studied in a network context in . Clearly, this property of the
wireless physical medium significantly changes many network layer route selection algorithms.
The problem of finding the minimum energy multicast and broadcast tree in a wireless
network is studied in and . This problem is shown to be NP-Complete in and . Maric and Yates
look at the problem of efficient broadcasting when signal energy accumulation over multiple
transmissions is possible. WBA also adds substantial complexity to route selection algorithms
even in non-broadcast scenarios. For example, this model is used in the context of selecting
the minimum energy link and node disjoint paths in a wireless network. Another interesting
property of the wireless medium is the benefit of space diversity at the physical layer. This type
of diversity is achieved by employing multiple antennas on the transmitter or the receiver side. It
is well known that transmission and receiver space diversity can result in lower error probability
or higher transmission capacity. An overview of different transmission diversity techniques is
given in. In our paper, we assume that each node is only equipped with a single antenna.
However, we allow for the possibility that several nodes can cooperate with each other in
transmitting the information to other nodes, and through this cooperation effectively achieve
similar energy savings as a multiple antenna system. Architecture for achieving the required
level of coordination among the cooperating nodes is discussed in. We shall refer to the energy
savings due to cooperative transmission by several nodes as the wireless cooperation advantage
(WCA). Our aim in this paper is to take advantage of the wireless broadcast property and the
transmission side diversity created through cooperation to reduce the end-to-end energy
consumption in routing the information between two nodes. To make it clear, consider a simple
example. For the network shown in Fig. 2, assume that the minimum energy route from s to d is
determined to be as shown using the shaded line. As discussed previously, the information
transmitted by node s is received by nodes 1 and 2. After the first transmission, nodes s, 1, and 2
have the information and can cooperate in getting the information to 3, as shown in Fig. 2. Our
goal is to quantify the energy savings that can be achieved through cooperation and to find the
optimal cooperative route in order to maximize energy savings. We do not consider other issues
such as the level of coordination among the cooperating nodes required, and simply assume that
the required coordination can take place by employing appropriate hardware architecture at each
node and using a low-bandwidth control channel. We assume that this coordination consumes an
amount of energy that is negligible in comparison to the energy required for relaying the actual
data. Moreover, we realize that the use of cooperative relaying may require additional
transmissions and, hence, be inefficient in terms of bandwidth utilization. However, again, our
goal in this paper is merely to quantify the energy savings achievable through cooperation, and
hence, we do not concern ourselves with the issues of bandwidth consumption.
Potentially, the mechanisms proposed in this paper are applicable to situations where bandwidth
is plentiful but energy is scarce (e.g., communications in space). Of course, many of our
assumptions are idealized, but they allow for analytical tractability of the problem at hand.
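As a sketch of the wireless cooperation advantage, the following code compares a point-to-point link with an idealized cooperative link. The closed form 1 / sum(d_i^-alpha) assumes optimal power allocation with coherent signal combining at the receiver; it is a modeling simplification for illustration, not necessarily the paper's exact formulation:

```java
// Illustrative comparison of single-link energy with the minimum total
// energy of an idealized coherent cooperative transmission (assumption:
// amplitudes combine coherently and power is allocated optimally).
public class CooperativeLink {

    /** Energy for a point-to-point link of length d (E proportional to d^alpha). */
    public static double singleLinkEnergy(double d, double alpha) {
        return Math.pow(d, alpha);
    }

    /** Minimum total transmit energy for nodes at distances d[i] from one
     *  receiver to accumulate one unit of received energy; under coherent
     *  combining this works out to 1 / sum(d[i]^-alpha). */
    public static double cooperativeLinkEnergy(double[] d, double alpha) {
        double sum = 0;
        for (double di : d) {
            sum += Math.pow(di, -alpha);
        }
        return 1.0 / sum;
    }
}
```

With two cooperating senders both at distance 2 and alpha = 2, the cooperative link costs 2 units versus 4 for a single sender, quantifying the kind of saving the WCA refers to.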
2.5 Technology Used:
Initially the language was called “Oak,” but it was renamed “Java” in 1995. The primary
motivation for this language was the need for a platform-independent (i.e., architecture-neutral)
language that could be used to create software to be embedded in various consumer electronic
devices.
Java is a programmer’s language.
Java is cohesive and consistent.
Except for those constraints imposed by the Internet environment, Java gives the
programmer full control.
Finally, Java is to Internet programming what C was to systems programming.
Importance of Java to the Internet
Java has had a profound effect on the Internet because Java expands the universe of objects
that can move about freely in cyberspace. In a network, two categories of objects are
transmitted between the server and the personal computer: passive information and dynamic,
active programs. Dynamic, self-executing programs cause serious problems in the areas of
security and portability. Java addresses those concerns and, by doing so, has opened the door
to an exciting new form of program called the applet.
Java can be used to create two types of programs
Applications and Applets: An application is a program that runs on your computer under the
operating system of that computer. It is more or less like a program created using C or C++.
Java’s ability to create applets makes it important. An applet is an application designed to be
transmitted over the Internet and executed by a Java-compatible web browser. An applet is
actually a tiny Java program, dynamically downloaded across the network, just like an image.
But the difference is that it is an intelligent program, not just a media file. It can react to user
input and dynamically change.
Features of Java Security
Every time you download a “normal” program, you risk a viral infection. Prior to Java, most
users did not download executable programs frequently, and those who did scanned them for
viruses prior to execution. Even so, most users still worried about the possibility of infecting
their systems with a virus. In addition, another type of malicious program must be guarded
against: one that gathers private information, such as credit card numbers, bank account
balances, and passwords. Java answers both of these concerns by providing a “firewall”
between a network application and your computer.
When you use a Java-compatible Web browser, you can safely download Java applets without
fear of virus infection or malicious intent.
Portability
For programs to be dynamically downloaded to all the various types of platforms connected to
the Internet, some means of generating portable executable code is needed. As you will see, the
same mechanism that helps ensure security also helps create portability. Indeed, Java’s solution
to these two problems is both elegant and efficient.
The Byte code
The key that allows Java to solve both the security and portability problems is that the output of
the Java compiler is bytecode. Bytecode is a highly optimized set of instructions designed to be
executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is,
in its standard form, the JVM is an interpreter for bytecode.
Translating a Java program into bytecode makes it much easier to run the program in a wide
variety of environments: once the run-time package exists for a given system, any Java program
can run on it.
Although Java was designed for interpretation, there is technically nothing about Java that
prevents on-the-fly compilation of bytecode into native code. Sun has completed its Just-In-Time
(JIT) compiler for bytecode. When the JIT compiler is part of the JVM, it compiles bytecode
into executable code in real time, on a piece-by-piece, demand basis. It is not possible to
compile an entire Java program into executable code all at once, because Java performs various
checks that can be done only at run time. Instead, the JIT compiles code as it is needed, during
execution.
Java Virtual Machine (JVM)
Beyond the language, there is the Java Virtual Machine, an important element of the Java
technology. The virtual machine can be embedded within a web browser or an operating system.
Once a piece of Java code is loaded onto a machine, it is verified: as part of the loading process,
a class loader is invoked and performs bytecode verification, which makes sure that the code
generated by the compiler will not corrupt the machine it is loaded on. Bytecode verification
also takes place at the end of the compilation process to confirm that the code is accurate and
correct. Bytecode verification is thus integral to the compiling and executing of Java code.
Swing is a platform-independent, Model-View-Controller GUI framework for Java. It follows a
single-threaded programming model and possesses the following traits:
Extensible
Swing is a highly partitioned architecture, which allows for the "plugging" of various custom
implementations of specified framework interfaces: Users can provide their own custom
implementation(s) of these components to override the default implementations. In general,
Swing users can extend the framework by extending existing (framework) classes and/or
providing alternative implementations of core components.
Component-Oriented
Swing is a component-based framework. The distinction between objects and components is a
fairly subtle point: concisely, a component is a well-behaved object with a known, specified
characteristic pattern of behaviour. Swing objects asynchronously fire events, have "bound"
properties, and respond to a well-known set of commands specific to the component.
Specifically, Swing components are Java Beans components, compliant with the Java Beans
Component Architecture specifications.
Customizable
Given the programmatic rendering model of the Swing framework, fine control over the details
of rendering of a component is possible in Swing. As a general pattern, the visual representation
of a Swing component is a composition of a standard set of elements, such as a "border", "inset",
decorations, etc. Typically, users will programmatically customize a standard Swing component
(such as a JTable) by assigning specific Borders, Colors, Backgrounds, opacities, etc., as the
properties of that component. The core component will then use these properties (settings) to
determine the appropriate renderers to use in painting its various aspects. However, it is also
completely possible to create unique GUI controls with highly customized visual representation.
Configurable
Swing's heavy reliance on runtime mechanisms and indirect composition patterns allows it to
respond at runtime to fundamental changes in its settings. For example, a Swing-based
application can change its look and feel at runtime. Further, users can provide their own look and
feel implementation, which allows for uniform changes in the look and feel of existing Swing
applications without any programmatic change to the application code.
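The runtime look-and-feel change described above can be sketched as follows; the helper class name is an illustrative assumption, while `UIManager.setLookAndFeel` and `SwingUtilities.updateComponentTreeUI` are the standard Swing calls for this:

```java
import java.awt.Window;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

// Sketch of switching a Swing application's look and feel at runtime
// and refreshing every open window to pick up the change.
public class LafSwitcher {

    /** Installs the given look-and-feel class, updates all open windows,
     *  and returns the name of the look and feel now in effect. */
    public static String switchLookAndFeel(String lafClassName) {
        try {
            UIManager.setLookAndFeel(lafClassName);
        } catch (Exception e) {
            throw new IllegalStateException("Could not install look and feel", e);
        }
        for (Window w : Window.getWindows()) {
            SwingUtilities.updateComponentTreeUI(w);
        }
        return UIManager.getLookAndFeel().getName();
    }
}
```

For example, passing `UIManager.getCrossPlatformLookAndFeelClassName()` installs the cross-platform "Metal" look and feel without any change to application code.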
Lightweight UI
Swing's configurability is a result of a choice not to use the native host OS's GUI controls for
displaying itself. Swing "paints" its controls programmatically through the use of Java 2D APIs,
rather than calling into a native user interface toolkit. Thus, a Swing component does not have a
corresponding native OS GUI component, and is free to render itself in any way that is possible
with the underlying graphics APIs.
However, at its core every Swing component relies on an AWT container, since (Swing's)
JComponent extends (AWT's) Container. This allows Swing to plug into the host OS's GUI
management framework, including the crucial device/screen mappings and user interactions,
such as key presses or mouse movements. Swing simply "transposes" its own (OS agnostic)
semantics over the underlying (OS specific) components. So, for example, every Swing
component paints its rendition on the graphic device in response to a call to component.paint(),
which is defined in (AWT) Container. But unlike AWT components, which delegate painting to
their OS-native "heavyweight" widgets, Swing components are responsible for their own
rendering.
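A minimal example of this programmatic rendering: the component below has no native peer and draws itself entirely through the Graphics object passed to paintComponent (the class name and the solid-fill rendering are illustrative choices):

```java
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JComponent;

/** A minimal lightweight component: it has no corresponding native OS
 *  widget and simply paints itself with the Java 2D / AWT Graphics API. */
public class DotComponent extends JComponent {

    @Override
    protected void paintComponent(Graphics g) {
        // Fill the component's whole area; a real component would draw
        // borders, text, etc. here using the same Graphics object.
        g.setColor(Color.RED);
        g.fillRect(0, 0, getWidth(), getHeight());
    }
}
```

Because the rendering is purely programmatic, the same component can be painted onto an off-screen image just as easily as onto the screen, which is also how such components are commonly unit-tested.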
This transposition and decoupling is not merely visual, and extends to Swing's management and
application of its own OS-independent semantics for events fired within its component
containment hierarchies. Generally speaking, the Swing Architecture delegates the task of
mapping the various flavors of OS GUI semantics onto a simple, but generalized, pattern to the
AWT container. Building on that generalized platform, it establishes its own rich and complex
GUI semantics in the form of the JComponent model.
Loosely-Coupled and MVC
The Swing library makes heavy use of the Model/View/Controller software design pattern [1] ,
which conceptually decouples the data being viewed from the user interface controls through
which it is viewed. Because of this, most Swing components have associated models (which are
specified in terms of Java interfaces), and the programmer can use various default
implementations or provide their own. The framework provides default implementations of
model interfaces for all of its concrete components. The typical use of the Swing framework does
not require the creation of custom models, as the framework provides a set of default
implementations that are transparently, by default, associated with the corresponding
JComponent child class in the Swing library. In general, only complex components, such as
tables, trees, and sometimes lists, may require custom model implementations around the
application-specific data structures. To get a good sense of the potential that the Swing
architecture makes possible, consider the hypothetical situation where custom models for tables
and lists are wrappers over DAO and/or EJB services.
Typically, Swing component model objects are responsible for providing a concise interface
defining events fired, and accessible properties for the (conceptual) data model for use by the
associated JComponent. Given that the overall MVC pattern is a loosely-coupled collaborative
object relationship pattern, the model provides the programmatic means for attaching event
listeners to the data model object. Typically, these events are model-centric (e.g., a "row
inserted" event in a table model) and are mapped by the JComponent specialization into a
meaningful event for the GUI component.
For example, the JTable has a model called TableModel that describes an interface for how a
table would access tabular data. A default implementation of this operates on a two-dimensional
array.
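A minimal TableModel along these lines can be sketched as follows. The class name and sample data are illustrative; Swing's own default implementation backed by data arrays is DefaultTableModel, and AbstractTableModel is the standard base class for custom models:

```java
import javax.swing.table.AbstractTableModel;

/** A TableModel backed by a two-dimensional array, mirroring how Swing's
 *  default implementation exposes tabular data to a JTable. */
public class ArrayTableModel extends AbstractTableModel {

    private final String[] columns;
    private final Object[][] data;

    public ArrayTableModel(String[] columns, Object[][] data) {
        this.columns = columns;
        this.data = data;
    }

    @Override public int getRowCount()    { return data.length; }
    @Override public int getColumnCount() { return columns.length; }
    @Override public String getColumnName(int col) { return columns[col]; }
    @Override public Object getValueAt(int row, int col) { return data[row][col]; }
}
```

A JTable constructed with this model (`new JTable(model)`) renders the array contents and is automatically notified of changes through the fireTableXxx methods inherited from AbstractTableModel.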
The view component of a Swing JComponent is the object used to graphically "represent" the
conceptual GUI control. A distinction of Swing, as a GUI framework, is in its reliance on
programmatically-rendered GUI controls (as opposed to the use of the native host OS's GUI
controls). Prior to Java 6 Update 10, this distinction was a source of complications when mixing
AWT controls, which use native controls, with Swing controls in a GUI (see Mixing AWT and
Swing components).
Finally, in terms of visual composition and management, Swing favors relative layouts (which
specify the positional relationships between components) over absolute layouts (which specify
the exact location and size of components). This bias towards "fluid" visual ordering is due to
Swing's origins in the applet operating environment that framed the design and development of
the original Java GUI toolkit. (Conceptually, this view of layout management is quite similar to
that which informs the rendering of HTML content in browsers, and addresses the same set of
concerns that motivated it.)
Relationship to AWT
Since early versions of Java, a portion of the Abstract Window Toolkit (AWT) has provided
platform-independent APIs for user interface components. In AWT, each component is rendered
and controlled by a native peer component specific to the underlying windowing system.
By contrast, Swing components are often described as lightweight because they do not require
allocation of native resources in the operating system's windowing toolkit. The AWT
components are referred to as heavyweight components.
Much of the Swing API is generally a complementary extension of the AWT rather than a direct
replacement. In fact, every Swing lightweight interface ultimately exists within an AWT
heavyweight component because all of the top-level components in Swing (JApplet, JDialog,
JFrame, and JWindow) extend an AWT top-level container. Prior to Java 6 Update 10, the use of
both lightweight and heavyweight components within the same window was generally
discouraged due to Z-order incompatibilities. However, later versions of Java have fixed these
issues, and both Swing and AWT components can now be used in one GUI without Z-order
problems.
The core rendering functionality used by Swing to draw its lightweight components is provided
by Java 2D, another part of JFC.
Relationship to SWT
The Standard Widget Toolkit (SWT) is a competing toolkit originally developed by IBM and
now maintained by the Eclipse community. SWT's implementation has more in common with
the heavyweight components of AWT. This confers benefits such as more accurate fidelity with
the underlying native windowing toolkit, at the cost of an increased exposure to the native
platform in the programming model.
The advent of SWT has given rise to a great deal of division among Java desktop developers,
with many strongly favoring either SWT or Swing. Sun's development on Swing continues to
focus on platform look and feel (PLAF) fidelity with each platform's windowing toolkit in the
approaching Java SE 7 release (as of December 2006).
There has been significant debate and speculation about the performance of SWT versus Swing;
some hinted that SWT's heavy dependence on JNI would make it slower when the GUI
component and Java need to communicate data, but faster at rendering when the data model has
been loaded into the GUI, but this has not been confirmed either way. [2] A fairly thorough set of
benchmarks in 2005 concluded that neither Swing nor SWT clearly outperformed the other in the
general case.
SWT serves the Windows platform very well but is considered by some to be less effective as a
technology for cross-platform development. By using the high-level features of each native
windowing toolkit, SWT returns to the issues seen in the mid 1990s (with toolkits like zApp,
Zinc, XVT and IBM/Smalltalk) where toolkits attempted to mask differences in focus behaviour,
event triggering and graphical layout. Failure to match behavior on each platform can cause
subtle but difficult-to-resolve bugs that impact user interaction and the appearance of the GUI.
Relation between AWT and Swing
3. SYSTEM ANALYSIS
Introduction:
The Systems Development Life Cycle (SDLC), or Software Development Life
Cycle in systems engineering, information systems and software engineering, is the process of
creating or altering systems, and the models and methodologies that people use to develop these
systems.
In software engineering, the SDLC concept underpins many kinds of software development
methodologies. These methodologies form the framework for planning and controlling the
creation of an information system: the software development process.
SOFTWARE MODEL OR ARCHITECTURE ANALYSIS:
Structured project management techniques (such as an SDLC) enhance
management’s control over projects by dividing complex tasks into manageable sections. A
software life cycle model is either a descriptive or prescriptive characterization of how software
is or should be developed. None of the SDLC models, however, discuss key issues like change
management, incident management, and release management within the SDLC process; these
are instead addressed in overall project management. In the proposed hypothetical model, the
concept of user-developer interaction in the conventional SDLC model has been converted into
a three-dimensional model comprising the user, the owner, and the developer. The "one size fits
all" approach to applying SDLC methodologies is no longer appropriate. We have made an
attempt to address the above-mentioned defects by using a new hypothetical model for SDLC
described elsewhere. The drawback of addressing these management processes under overall
project management is that key technical issues pertaining to the software development process
are missed; that is, these issues are addressed in project management at the surface level but not
at the ground level.
WHAT IS SDLC?
A software life cycle deals with various parts and phases from planning to testing and deploying
software. All these activities are carried out in different ways, as per the needs. Each way is
known as a Software Development Life Cycle (SDLC) model [2]. A software life cycle model is
either a descriptive or prescriptive characterization of how software is or should be developed. A
descriptive model describes the history of how a particular software system was developed.
Descriptive models may be used as the basis for understanding and improving software
development processes or for building empirically grounded prescriptive models.
SDLC models:
* The Linear model (Waterfall): separate and distinct phases of specification and development;
all activities proceed in linear fashion, and the next phase starts only when the previous one is
complete.
* Evolutionary development: specification and development are interleaved (Spiral,
incremental, prototype-based, Rapid Application Development).
  - Incremental Model: Waterfall in iteration.
  - RAD (Rapid Application Development): focus on developing a quality product in less time.
  - Spiral Model: development starts from a smaller module and keeps building on it like a
    spiral; also called component-based development.
* Formal systems development: a mathematical system model is formally transformed into an
implementation.
* Agile methods: inducing flexibility into development.
* Reuse-based development: the system is assembled from existing components.
The General Model
Software life cycle models describe the phases of the software cycle and the order in which those
phases are executed. There are many models, and many companies adopt their own, but all
have very similar patterns. The general, basic model is shown below:
General Life Cycle Model
Each phase produces deliverables required by the next phase in the life cycle. Requirements are
translated into design. Code is produced during implementation that is driven by the design.
Testing verifies the deliverable of the implementation phase against requirements
Spiral Model
The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, “A Spiral Model of
Software Development and Enhancement.” This model was not the first to discuss iterative
development, but it was the first to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase
starts with a design goal and ends with a client reviewing the progress thus far. Analysis and
engineering efforts are applied at each phase of the project, with an eye toward the end goal of
the project.
The steps of the Spiral Model can be generalized as follows:
The new system requirements are defined in as much detail as possible. This usually
involves interviewing a number of users representing all the external and internal users
and other aspects of the existing system.
A preliminary design is created for the new system.
A first prototype of the new system is constructed from the preliminary design. This
is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.
A second prototype is evolved by a fourfold procedure:
1. Evaluating the first prototype in terms of its strengths, weakness, and risks.
2. Defining the requirements of the second prototype.
3. Planning and designing the second prototype.
4. Constructing and testing the second prototype.
At the customer's option, the entire project can be aborted if the risk is deemed too
great. Risk factors might involve development cost overruns, operating-cost
miscalculation, or any other factor that could, in the customer’s judgment, result in a
less-than-satisfactory final product.
The existing prototype is evaluated in the same manner as was the previous prototype,
and if necessary, another prototype is developed from it according to the fourfold
procedure outlined above.
The preceding steps are iterated until the customer is satisfied that the refined
prototype represents the final product desired.
The final system is constructed, based on the refined prototype.
The final system is thoroughly evaluated and tested. Routine maintenance is carried
on a continuing basis to prevent large scale failures and to minimize down time.
Spiral Life Cycle Model .
The spiral model is similar to the incremental model, with more emphases placed on risk
analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and
Evaluation. A software project repeatedly passes through these phases in iterations
(called Spirals in this model). In the baseline spiral, starting in the planning phase, requirements
are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is
undertaken to identify risk and alternate solutions. A prototype is produced at the end of the
risk analysis phase. Software is produced in the engineering phase, along with testing at
the end of the phase. The evaluation phase allows the customer to evaluate the output of the
project to date before the project continues to the next spiral. In the spiral model, the angular
component represents progress, and the radius of the spiral represents cost.
Advantages
High amount of risk analysis
Good for large and mission-critical projects.
Software is produced early in the software life cycle.
3.1 Existing System
Two energy-efficient approximation algorithms are presented for finding a cooperative
route in wireless networks. The two algorithms for finding one cooperative route are designed
such that each hop consists of multiple sender nodes to one receiver node. Existing methods
focus on MAC layer design for networks with cooperative transmission. When no
acknowledgement is received from the destination after timeout, the cooperative nodes, which
correctly received the data, retransmit it. Only one cooperative node retransmits at any time, and
the other cooperative nodes flush their copy once they hear the retransmission. Hence, this work
focuses on reducing the transmission errors, without benefiting from the energy savings of
simultaneous transmissions.
In the multiple-input–multiple-output (MIMO) systems, each node is equipped with
multiple antennas. Information is transmitted from the sender node by multiple antennas and
received by multiple antennas at the receiver node. The close proximity of the antennas at the
transmitting nodes and of the antennas at the receiving nodes makes synchronization easier to
implement. The ability of nodes to sense the carrier and to measure the interference level can be
used to decide on the number of antennas that are employed for transmission.
3.2 Problem Statement
The problem of energy-efficient routing in wireless networks that support
cooperative transmission was formulated in the literature, where two energy-efficient
approximation algorithms are presented for finding a cooperative route in wireless networks.
The two algorithms are designed such that each hop consists of multiple sender nodes
transmitting to one receiver node. One of the algorithms (CAN) is used throughout this paper
for performance comparison.
3.3 Proposed System
In this paper we propose a cooperative communication model with multiple nodes on
both ends of a hop and with each data packet being transmitted only once per hop. In our model
of cooperative transmission, every node on the path from the source node to the destination node
becomes a cluster head, with the task of recruiting other nodes in its neighborhood and
coordinating their transmissions. Consequently, the classical route from a source node to a sink
node is replaced with a multihop cooperative path, and the classical point-to-point
communication is replaced with many-to-many cooperative communication. The path can then
be described as “having a width,” where the “width” of a path at a particular hop is determined
by the number of nodes on each end of a hop.
Every node in the receiving cluster receives from every node in the sending cluster.
Sending nodes are synchronized, and the power level of the received signal at a receiving node is
the sum of all the signal powers coming from all the sender nodes. This reduces the likelihood of
a packet being received in error. We assume that some mechanism for error detection is
incorporated into the packet format, so a node that does not receive a packet correctly will not
transmit on the next hop in the path. Our cooperative transmission protocol consists of two
phases. In the routing phase, the initial path between the source and the sink nodes is discovered
as an underlying “one-node-thick” path. Then, the path undergoes a thickening process in the
“recruiting-and-transmitting” phase. In this phase, the nodes on the initial path become cluster
heads, which recruit additional adjacent nodes from their neighborhood.
3.3.1 Advantages
A key advantage of cooperative transmission is the increase of the
received power at the receiving nodes. This decreases the probability of bit error and of packet
loss. Alternatively, the sender nodes can use smaller transmission power for the same probability
of bit error, thus reducing the energy consumption. One of the goals of this paper is to study the
energy savings achieved through cooperation. We also study the increase in the reliability of
packet delivery, given some level of cooperation among the nodes. Finally, we also study the
capacity of the cooperative transmission protocol.
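The effect of summing received signal powers on the error rate can be illustrated with a simple sketch. The Chernoff-style bit-error bound 0.5 * exp(-SNR) for coherent BPSK and the independent-bit packet model are modeling assumptions for illustration, not the exact channel model used in the analysis:

```java
// Illustrative model of cooperative reception: received power is the sum
// of the powers from all sender nodes, which lowers the bit-error and
// packet-loss probabilities (BPSK Chernoff bound is an assumption).
public class CooperativeReception {

    /** Received power at one node as the sum of the powers arriving from
     *  all sender nodes in the cooperating cluster. */
    public static double receivedPower(double[] senderPowers) {
        double sum = 0;
        for (double p : senderPowers) {
            sum += p;
        }
        return sum;
    }

    /** Illustrative bit-error probability: Chernoff-style bound
     *  0.5 * exp(-snr) for coherent BPSK. */
    public static double bitErrorProbability(double snr) {
        return 0.5 * Math.exp(-snr);
    }

    /** Probability that an n-bit packet is received without error,
     *  assuming independent bit errors. */
    public static double packetSuccessProbability(double snr, int bits) {
        return Math.pow(1.0 - bitErrorProbability(snr), bits);
    }
}
```

Under this model, tripling the received SNR through cooperation sharply reduces the bit-error probability, or equivalently lets each sender transmit at lower power for the same error rate, which is the energy-saving effect studied in this paper.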
3.4 FEASIBILITY STUDY
The preliminary investigation examines project feasibility: the likelihood that the system will be
useful to the organization. The main objective of the feasibility study is to test the technical,
operational, and economic feasibility of adding new modules and debugging the old running
system. Any system is feasible given unlimited resources and infinite time. There are three
aspects in the feasibility study portion of the preliminary investigation:
Technical Feasibility
Operational Feasibility
Economical Feasibility
3.4.1 ECONOMIC FEASIBILITY
A system that can be developed technically, and that will be used if installed, must still be a
good investment for the organization. In the economic feasibility study, the development cost
of creating the system is weighed against the ultimate benefit derived from the new system.
Financial benefits must equal or exceed the costs.
The system is economically feasible. It does not require any additional hardware or software.
Since the interface for this system is developed using the existing resources and technologies
available at NIC, expenditure is nominal and economic feasibility is certain.
3.4.2 OPERATIONAL FEASIBILITY
Proposed projects are beneficial only if they can be turned into information systems that meet
the organization's operating requirements. Operational feasibility is therefore an important part
of project implementation. Some of the important questions raised to test the operational
feasibility of a project include the following:
Is there sufficient support for the project from management and from the users?
Will the system be used and will it work properly once it is developed and implemented?
Will there be any resistance from the users that would undermine the possible application
benefits?
This system is targeted to address the above-mentioned issues. The management issues and
user requirements were taken into consideration beforehand, so there is no question of user
resistance undermining the possible application benefits. The well-planned design ensures
optimal utilization of computer resources and helps improve performance.
3.4.3 TECHNICAL FEASIBILITY
The technical issues usually raised during the feasibility stage of the investigation include the
following:
Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the data required by the new
system?
Will the proposed system provide adequate responses to inquiries, regardless of the number or
location of users?
Can the system be upgraded after development?
Are there technical guarantees of accuracy, reliability, ease of access, and data security?
Earlier, no system existed to cater to the needs of the ‘Secure Infrastructure Implementation
System’. The current system developed is technically feasible. It is a web-based user interface
for audit workflow at NIC-CSD, and thus provides easy access to the users. The database’s
purpose is to create, establish, and maintain a workflow among various entities in order to
facilitate all concerned users in their various capacities or roles. Permission would be granted
to users based on the roles specified. Therefore, the system provides the technical guarantee of
accuracy, reliability, and security. The software and hardware requirements for the
development of this project are few and are already available in-house at NIC or are freely
available as open source. The work for the project can be done with the current equipment and
existing software technology. Necessary bandwidth exists for providing fast feedback to the
users irrespective of the number of users using the system.
4. SYSTEM REQUIREMENTS SPECIFICATION
4.1 INTRODUCTION
A Software Requirements Specification (SRS) – a requirements specification for a software
system – is a complete description of the behavior of a system to be developed. It includes a set of
use cases that describe all the interactions the users will have with the software. In addition to use
cases, the SRS also contains non-functional requirements. Non-functional requirements are
requirements which impose constraints on the design or implementation (such as performance
engineering requirements, quality standards, or design constraints).
System requirements specification: a structured collection of information that embodies the
requirements of a system. A business analyst (BA), sometimes titled a systems analyst, is
responsible for analyzing the business needs of clients and stakeholders to help identify
business problems and propose solutions. Within the systems development life cycle, the BA
typically performs a liaison function between the business side of an enterprise and the
information technology department or external service providers. Projects are subject to three
sorts of requirements:
Business requirements describe in business terms what must be delivered or
accomplished to provide value.
Product requirements describe properties of a system or product (which could be one of
several ways to accomplish a set of business requirements.)
Process requirements describe activities performed by the developing organization. For
instance, process requirements could specify specific methodologies that must be
followed, and constraints that the organization must obey.
Product and process requirements are closely linked. Process requirements often specify the
activities that will be performed to satisfy a product requirement. For example, a maximum
development cost requirement (a process requirement) may be imposed to help achieve a
maximum sales price requirement (a product requirement); a requirement that the product be
maintainable (a product requirement) is often addressed by imposing requirements to follow
particular development styles.
4.2 PURPOSE
In systems engineering, a requirement can be a description of what a system must do, referred
to as a functional requirement. This type of requirement specifies something that the delivered
system must be able to do. Another type of requirement specifies something about the system
itself, and how well it performs its functions. Such requirements are often called non-functional
requirements, 'performance requirements', or 'quality of service requirements.' Examples of
such requirements include usability, availability, reliability, supportability, testability, and
maintainability.
A collection of requirements defines the characteristics or features of the desired system. A
'good' list of requirements avoids, as far as possible, saying how the system should implement
the requirements, leaving such decisions to the system designer. Specifying how the system
should be implemented is called "implementation bias" or "solution engineering". However,
implementation constraints on the solution may validly be expressed by the future owner, for
example for required interfaces to external systems, for interoperability with other systems, and
for commonality (e.g. of user interfaces) with other owned products.
In software engineering, the same meanings of requirements apply, except that the focus of
interest is the software itself.
4.3 FUNCTIONAL REQUIREMENTS
4.4 NON FUNCTIONAL REQUIREMENTS
The major non-functional requirements of the system are as follows:
Usability
The system is designed as a completely automated process, so little or no user intervention is
required.
Reliability
The system is reliable because of the qualities inherited from the chosen platform, Java; code
built with Java is more reliable.
Performance
The system is developed in a high-level language, and by using advanced front-end and
back-end technologies it responds to the end user on the client system in very little time.
Supportability
The system is designed to be cross-platform. It is supported on a wide range of hardware and
on any software platform that has a JVM.
Implementation
The system is implemented in a web environment using the Struts framework. Apache Tomcat
is used as the web server and Windows XP Professional as the platform.
Interface
The user interface is based on the HTML tags provided by Struts.
4.5 HARDWARE REQUIREMENTS
System : Pentium IV 2.4 GHz
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 15" VGA color
Mouse : Logitech
RAM : 512 MB
4.6 SOFTWARE REQUIREMENTS
Operating System : Windows XP, Linux
Language : Java 1.4 or later
Technology : Swing, AWT
Back End : No database
5. SYSTEM DESIGN
5.1 SYSTEM SPECIFICATIONS
OBJECTIVES
The objective of this sub-project is to develop tools and methods to support the earlier phases
of systems development: implementation-independent specification and verification, and
subsequent synthesis of specifications into efficient implementations.
The sub-project is divided into three sub-tasks:
i) Adopt or further develop a model for formal, high-level system specification and
verification.
ii) Demonstrate the efficacy of the developed model by applying it to a suitable part of the
consortium demonstrator, the network terminal for broadband access.
iii) Develop a systematic method to refine the specification into synthesizable code, and a
prototype tool which supports the refinement process and links it to synthesis and compilation
tools.
Justification
Today it is common to specify systems at higher levels using some natural language (e.g.
English). For large systems, where large amounts of information must be handled, such
specifications give rise to ambiguities and inconsistencies. Errors that are introduced are often
detected late in the design cycle, in the simulation of the design after much design work has
already been carried out, if they are detected at all.
By making the initial system specifications in a formal language at a high abstraction level,
functionality can be verified/simulated earlier in the development process. Ambiguities and
inconsistencies can be avoided, errors can be discovered earlier, and the design iteration cycles can
be shortened, thereby reducing development times. It is of critical importance that the
specification language provides modeling concepts at a high abstraction level to allow the
representation of system functions at a conceptual level without introducing unnecessary details.
Further, most of the languages that are used for implementation of HW/SW designs (e.g.
VHDL, C++) do not lend themselves well to formal verification, because they lack formally
defined semantics or because their semantics is complex. A lack of formal semantics
sometimes causes ambiguities in the interpretation of the designs.
Our goal is to develop a functional system specification method for telecom systems, to
demonstrate its efficacy on an industrially relevant example and to develop a tool to support the
mapping of such specifications to synthesizable VHDL/C++. The specification language in
which the system level functions will be developed will have a formal semantics in order to
support formal verification of specifications.
5.2 SYSTEM COMPONENTS:
The diagram shows a general view of how desktop and workstation computers are organized.
Different systems have different details, but in general all computers consist of components
(processor, memory, controllers, video) connected together with a bus. Physically, a bus consists
of many parallel wires, usually printed (in copper) on the main circuit board of the computer.
Data signals, clock signals, and control signals are sent on the bus back and forth between
components. A particular type of bus follows a carefully written standard that describes the
signals that are carried on the wires and what the signals mean. The PCI standard (for example)
describes the PCI bus used on most current PCs.
6. IMPLEMENTATION
Introduction
A project is a series of activities that aim at solving particular problems within a given time
frame and in a particular location. The activities require time, money, and human and material
resources. Before achieving its objectives, a project goes through several stages, and
implementation should take place at, and be integrated into, all stages of the project cycle.
Implementation is the stage where all the planned activities are put into action. It is the most
crucial stage in achieving a successful new system and in giving users confidence that the
system will work efficiently and effectively.
The query optimizer is the component of a database management system that attempts
to determine the most efficient way to execute a query. The optimizer considers the possible
query plans for a given input query, and attempts to determine which of those plans will be the
most efficient. Cost-based query optimizers assign an estimated "cost" to each possible query
plan, and choose the plan with the smallest cost. Costs are used to estimate the runtime cost of
evaluating the query, in terms of the number of I/O operations required, the CPU requirements,
and other factors determined from the data dictionary. The set of query plans examined is formed
by examining the possible access paths (e.g. index scan, sequential scan) and join algorithms
(e.g. sort-merge join, hash join, nested loops). The search space can become quite large
depending on the complexity of the SQL query.
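The plan-selection loop of a cost-based optimizer can be sketched as follows. The plan names and cost values below are invented for illustration; a real optimizer derives costs from the data dictionary as described above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CostBasedChooser {
    // Scan the candidate plans and return the one with the smallest estimated cost.
    static String cheapest(Map<String, Double> planCosts) {
        String best = null;
        double bestCost = Double.MAX_VALUE;
        for (Map.Entry<String, Double> e : planCosts.entrySet()) {
            if (e.getValue() < bestCost) {
                bestCost = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> plans = new LinkedHashMap<String, Double>();
        plans.put("sequential scan + nested loops", 120.0); // I/O-heavy full scan
        plans.put("index scan + nested loops", 45.0);       // uses an available index
        plans.put("sort-merge join", 60.0);                 // pays a sorting cost up front
        System.out.println("chosen plan: " + cheapest(plans));
    }
}
```

The same minimum-cost selection idea appears later in the sample code, where the sending node picks the known route with the smallest summed link cost.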
Cost estimation models are mathematical algorithms or parametric equations used to
estimate the costs of a product or project. The results of the models are typically necessary to
obtain approval to proceed, and are factored into business plans, budgets, and other financial
planning and tracking mechanisms.
The system can be implemented only after thorough testing, and only if it is found to work
according to the specification. Implementation involves careful planning and investigation of
the current system and its constraints.
SAMPLE CODE:
Main.java
package sensor;
import javax.swing.JOptionPane;
import javax.swing.JFrame;
import java.net.InetAddress;
import javax.swing.UIManager;
public class Main {
public static void main(String[] args) {
// TODO code application logic here
try
{
String addr=InetAddress.getLocalHost().getHostAddress();
try
{
// UIManager.setLookAndFeel("ch.randelshofer.quaqua.QuaquaLookAndFeel");
}
catch (Exception ex)
{
System.out.println("Failed loading L&F: ");
//System.out.println(ex);
}
Details dt=new Details();
String nodeId=(String)JOptionPane.showInputDialog(new JFrame(), "Enter Your Id");
// String pt=(String)JOptionPane.showInputDialog(new JFrame(), "Enter Your Port");
String no=(String)JOptionPane.showInputDialog(new JFrame(), "Enter no of known Nodes");
int Nodecnt=Integer.parseInt(no);
for(int i=0;i<Nodecnt;i++)
{
String id=(String)JOptionPane.showInputDialog(new JFrame(), "Enter Id");
dt.info[dt.index][0]=id;
dt.info[dt.index][1]=(String)JOptionPane.showInputDialog(new JFrame(), "Enter Cost");
int t=9000+Integer.parseInt(id);
dt.info[dt.index][2]=String.valueOf(t);
// dt.info[dt.index][1]=(String)JOptionPane.showInputDialog(new JFrame(), "Enter IP","127.0.0.1");
// dt.info[dt.index][2]=(String)JOptionPane.showInputDialog(new JFrame(), "Enter Port"
dt.index++;
//NodeFrame nf=new NodeFrame(nodeId,pt);
NodeFrame nf=new NodeFrame(nodeId,nodeId);
nf.setVisible(true);
nf.setTitle("Node "+nodeId);
//Receiver rr=new Receiver(nf,nodeId,pt);
Receiver rr=new Receiver(nf,nodeId,nodeId);
rr.start();
} // end for
} // close the outer try block before the catch
catch(Exception e)
{
e.printStackTrace();
}}}
NodeFrame.java
package sensor;
import java.net.*;
import javax.swing.JOptionPane;
import javax.swing.JFrame;
import javax.swing.table.DefaultTableModel;
import java.util.Vector;
import java.util.ArrayList;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.UIManager;
import javax.swing.ImageIcon;
public class NodeFrame extends javax.swing.JFrame
{
Details dt=new Details();
String myid;
String mypt;
Receiver rr;
public static ArrayList prepath;
public static ArrayList preip;
public static ArrayList preport;
public static ArrayList precost;
public static ArrayList preid;
String myip;
public static int mn=0;
/** Creates new form NodeFrame */
public NodeFrame(String s1,String s2)
{
myid=s1;
//mypt=s2;
int c=9000+Integer.parseInt(s2);
mypt=String.valueOf(c);
try
{
//UIManager.setLookAndFeel("ch.randelshofer.quaqua.QuaquaLookAndFeel");
}
catch (Exception ex)
{
System.out.println("Failed loading L&F: ");
//System.out.println(ex);
}
initComponents();
JPanel pane=(JPanel)this.getContentPane();
JLabel lab=new JLabel();
//lab.setBounds(-1,0,660,590);
lab.setBounds(-1,0,695,600);
//lab.setIcon(new ImageIcon("images//cn.jpg"));
lab.setIcon(new ImageIcon("images//img5.jpg"));
pane.add(lab);
setDefaultLookAndFeelDecorated(true);
setResizable(false);
}
@SuppressWarnings("unchecked")
// <editor-fold defaultstate="collapsed" desc="Generated Code">//GEN-BEGIN:initComponents
private void initComponents() {
jTabbedPane1 = new javax.swing.JTabbedPane();
jPanel3 = new javax.swing.JPanel();
jScrollPane1 = new javax.swing.JScrollPane();
jTable1 = new javax.swing.JTable();
jPanel2 = new javax.swing.JPanel();
jButton1 = new javax.swing.JButton();
jPanel5 = new javax.swing.JPanel();
jPanel6 = new javax.swing.JPanel();
jScrollPane3 = new javax.swing.JScrollPane();
jTable3 = new javax.swing.JTable();
jPanel4 = new javax.swing.JPanel();
jLabel2 = new javax.swing.JLabel();
jTextField1 = new javax.swing.JTextField();
jButton2 = new javax.swing.JButton();
jScrollPane2 = new javax.swing.JScrollPane();
jTable2 = new javax.swing.JTable();
jLabel4 = new javax.swing.JLabel();
jTextField3 = new javax.swing.JTextField();
jButton4 = new javax.swing.JButton();
setDefaultCloseOperation(javax.swing.WindowConstants.EXIT_ON_CLOSE);
jTabbedPane1.setFont(new java.awt.Font("Tahoma", 0, 14));
jTable1.setModel(new javax.swing.table.DefaultTableModel(
new Object [][] {
},
new String [] {
"Path"
}
));
jScrollPane1.setViewportView(jTable1);
javax.swing.GroupLayout jPanel3Layout = new javax.swing.GroupLayout(jPanel3);
jPanel3.setLayout(jPanel3Layout);
jPanel3Layout.setHorizontalGroup(
jPanel3Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel3Layout.createSequentialGroup()
.addGap(62, 62, 62)
.addComponent(jScrollPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 480, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(55, Short.MAX_VALUE))
);
jPanel3Layout.setVerticalGroup(
jPanel3Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, jPanel3Layout.createSequentialGroup()
.addContainerGap(61, Short.MAX_VALUE)
.addComponent(jScrollPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 334, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(31, 31, 31))
);
jTabbedPane1.addTab("Known Node Path List", jPanel3);
jButton1.setFont(new java.awt.Font("Tahoma", 0, 12));
jButton1.setText("Construct");
jButton1.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton1ActionPerformed(evt);
}
});
javax.swing.GroupLayout jPanel2Layout = new javax.swing.GroupLayout(jPanel2);
jPanel2.setLayout(jPanel2Layout);
jPanel2Layout.setHorizontalGroup(
jPanel2Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel2Layout.createSequentialGroup()
.addGap(260, 260, 260)
.addComponent(jButton1)
.addContainerGap(250, Short.MAX_VALUE))
);
jPanel2Layout.setVerticalGroup(
jPanel2Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel2Layout.createSequentialGroup()
.addGap(138, 138, 138)
.addComponent(jButton1)
.addContainerGap(263, Short.MAX_VALUE))
);
jTabbedPane1.addTab("Construction", jPanel2);
javax.swing.GroupLayout jPanel6Layout = new javax.swing.GroupLayout(jPanel6);
jPanel6.setLayout(jPanel6Layout);
jPanel6Layout.setHorizontalGroup(
jPanel6Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGap(0, 488, Short.MAX_VALUE)
);
jPanel6Layout.setVerticalGroup(
jPanel6Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGap(0, 360, Short.MAX_VALUE)
);
jTable3.setModel(new javax.swing.table.DefaultTableModel(
new Object [][] {
},
new String [] {
"Source", "Destination", "Path", "Message"
}
));
jScrollPane3.setViewportView(jTable3);
javax.swing.GroupLayout jPanel5Layout = new javax.swing.GroupLayout(jPanel5);
jPanel5.setLayout(jPanel5Layout);
jPanel5Layout.setHorizontalGroup(
jPanel5Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, jPanel5Layout.createSequentialGroup()
.addContainerGap(94, Short.MAX_VALUE)
.addComponent(jScrollPane3, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(49, 49, 49))
.addGroup(jPanel5Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel5Layout.createSequentialGroup()
.addGap(41, 41, 41)
.addComponent(jPanel6, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(68, Short.MAX_VALUE)))
);
jPanel5Layout.setVerticalGroup(
jPanel5Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, jPanel5Layout.createSequentialGroup()
.addContainerGap(90, Short.MAX_VALUE)
.addComponent(jScrollPane3, javax.swing.GroupLayout.PREFERRED_SIZE, 314, javax.swing.GroupLayout.PREFERRED_SIZE)
.addGap(22, 22, 22))
.addGroup(jPanel5Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, jPanel5Layout.createSequentialGroup()
.addContainerGap(55, Short.MAX_VALUE)
.addComponent(jPanel6, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap()))
);
jTabbedPane1.addTab("Message Details", jPanel5);
jLabel2.setFont(new java.awt.Font("Tahoma", 0, 12));
jLabel2.setText("Enter Destination");
jButton2.setText("Get Route");
jButton2.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton2ActionPerformed(evt);
}
});
jTable2.setModel(new javax.swing.table.DefaultTableModel(
new Object [][] {
},
new String [] {
"Route for Destination"
}
));
jScrollPane2.setViewportView(jTable2);
jLabel4.setFont(new java.awt.Font("Tahoma", 0, 12));
jLabel4.setText("Enter Message");
jButton4.setText("Send ");
jButton4.setEnabled(false);
jButton4.addActionListener(new java.awt.event.ActionListener() {
public void actionPerformed(java.awt.event.ActionEvent evt) {
jButton4ActionPerformed(evt);
}
});
javax.swing.GroupLayout jPanel4Layout = new javax.swing.GroupLayout(jPanel4);
jPanel4.setLayout(jPanel4Layout);
jPanel4Layout.setHorizontalGroup(
jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel4Layout.createSequentialGroup()
.addGap(122, 122, 122)
.addGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel4Layout.createSequentialGroup()
.addComponent(jLabel4)
.addGap(18, 18, 18)
.addComponent(jTextField3, javax.swing.GroupLayout.DEFAULT_SIZE, 249, Short.MAX_VALUE))
.addGroup(jPanel4Layout.createSequentialGroup()
.addComponent(jLabel2)
.addGap(40, 40, 40)
.addComponent(jTextField1, javax.swing.GroupLayout.PREFERRED_SIZE, 51, javax.swing.GroupLayout.PREFERRED_SIZE)))
.addGap(29, 29, 29)
.addGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.TRAILING, false)
.addComponent(jButton4, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)
.addComponent(jButton2, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE))
.addContainerGap(16, Short.MAX_VALUE))
.addGroup(jPanel4Layout.createSequentialGroup()
.addContainerGap()
.addComponent(jScrollPane2, javax.swing.GroupLayout.PREFERRED_SIZE, 575, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(12, Short.MAX_VALUE))
);
jPanel4Layout.setVerticalGroup(
jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel4Layout.createSequentialGroup()
.addGap(39, 39, 39)
.addGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE)
.addComponent(jLabel2)
.addComponent(jTextField1, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addComponent(jButton2))
.addGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(jPanel4Layout.createSequentialGroup()
.addGap(24, 24, 24)
.addGroup(jPanel4Layout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE)
.addComponent(jLabel4)
.addComponent(jTextField3, javax.swing.GroupLayout.PREFERRED_SIZE, 27, javax.swing.GroupLayout.PREFERRED_SIZE)))
.addGroup(jPanel4Layout.createSequentialGroup()
.addGap(18, 18, 18)
.addComponent(jButton4)))
.addGap(55, 55, 55)
.addComponent(jScrollPane2, javax.swing.GroupLayout.PREFERRED_SIZE, 205, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(53, Short.MAX_VALUE))
);
jTabbedPane1.addTab("Routing Path", jPanel4);
javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
getContentPane().setLayout(layout);
layout.setHorizontalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(layout.createSequentialGroup()
.addGap(34, 34, 34)
.addComponent(jTabbedPane1, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap(290, Short.MAX_VALUE))
);
layout.setVerticalGroup(
layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
.addGroup(javax.swing.GroupLayout.Alignment.TRAILING, layout.createSequentialGroup()
.addContainerGap(115, Short.MAX_VALUE)
.addComponent(jTabbedPane1, javax.swing.GroupLayout.PREFERRED_SIZE, 454, javax.swing.GroupLayout.PREFERRED_SIZE)
.addContainerGap())
);
pack();
}// </editor-fold>//GEN-END:initComponents
private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_jButton1ActionPerformed
// TODO add your handling code here:
// Construct Button
System.out.println("index "+dt.index);
try
{
javax.swing.table.DefaultTableModel dm = (javax.swing.table.DefaultTableModel)jTable1.getModel();
String ip=InetAddress.getLocalHost().getHostAddress();
for(int i=0;i<dt.index;i++)
{
Vector v=new Vector();
String pp=myid+"-->"+dt.info[i][0];
//String p1=ip+"-->"+dt.info[i][1];
String p1=dt.info[i][1];
String p2=mypt+"-->"+dt.info[i][2];
System.out.println(pp);
v.add(pp);
dm.addRow(v);
dt.path.add(pp);
//dt.ipList.add(p1);
dt.costList.add(p1);
dt.portList.add(p2);
}
DatagramSocket ds=new DatagramSocket();
byte data[]=new byte[1000];
//String addr=InetAddress.getLocalHost().getHostAddress();
String addr="127.0.0.1";
//String ss="GetPath"+"#"+addr+"#"+mypt;
// System.out.println(ss);
//data=ss.getBytes();
for(int i=0;i<dt.index;i++)
{
String ss="GetPath"+"#"+addr+"#"+mypt+"#"+dt.info[i][1];
//System.out.println(ss);
data=ss.getBytes();
//DatagramPacket dp=new DatagramPacket(data,0,data.length,InetAddress.getByName(dt.info[i][1]),Integer.parseInt(dt.info[i][2]));
DatagramPacket dp=new DatagramPacket(data,0,data.length,InetAddress.getByName("127.0.0.1"),Integer.parseInt(dt.info[i][2]));
ds.send(dp);
//System.out.println("send to :"+dt.info[i][1]+" : "+dt.info[i][2]);
System.out.println("get path send to :"+dt.info[i][2]);
}
jButton1.setEnabled(false);
}
catch(Exception e)
{
e.printStackTrace();
}
}//GEN-LAST:event_jButton1ActionPerformed
private void jButton2ActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_jButton2ActionPerformed
// TODO add your handling code here:
// Get Route Button
prepath=new ArrayList();
preip=new ArrayList();
preport=new ArrayList();
precost=new ArrayList();
DefaultTableModel dtm2 = (DefaultTableModel)jTable2.getModel();
int row1=dtm2.getRowCount();
for(int i=0;i<row1;i++)
{
dtm2.removeRow(0);
}
String did=jTextField1.getText().trim();
//javax.swing.table.DefaultTableModel dm1 = (javax.swing.table.DefaultTableModel)jTable1.getModel();
//int rcnt=dm1.getRowCount();
int cc=0;
ArrayList al=new ArrayList();
for(int i=0;i<dt.path.size();i++)
{
String ss=dt.path.get(i).toString();
//String ss1=dt.ipList.get(i).toString();
String ss1=dt.costList.get(i).toString();
String ss2=dt.portList.get(i).toString();
int ind1=ss.lastIndexOf('>')+1;
String dd=ss.substring(ind1);
//System.out.println(ss+" : "+dd+" : "+id);
if(dd.equals(did))
{
cc++;
al.add(ss);
prepath.add(ss);
//preip.add(ss1);
precost.add(ss1);
preport.add(ss2);
}
}
System.out.println("path list "+cc+" : "+al.size()+" : "+al);
try
{
myip=InetAddress.getLocalHost().getHostAddress();
//String ms="FindPath#"+myid+"#"+myip+"#"+mypt+"#"+did;
//String ms="FindPath#"+myid+"#"+"127.0.0.1"+"#"+mypt+"#"+did;
//byte bt[]=ms.getBytes();
DatagramSocket ds=new DatagramSocket();
for(int i=0;i<dt.index;i++)
{
String ms="FindPath#"+myid+"#"+dt.info[i][1]+"#"+mypt+"#"+did;
byte bt[]=ms.getBytes();
//DatagramPacket dp=new DatagramPacket(bt,0,bt.length,InetAddress.getByName(dt.info[i][1]),Integer.parseInt(dt.info[i][2]));
DatagramPacket dp=new DatagramPacket(bt,0,bt.length,InetAddress.getByName("127.0.0.1"),Integer.parseInt(dt.info[i][2]));
ds.send(dp);
}
/*System.out.println("Main route "+mn+prepath.get(mn).toString());
Vector v=new Vector();
v.add(prepath.get(mn));
dtm2.addRow(v);*/
jButton4.setEnabled(true);
}
catch(Exception e)
{
e.printStackTrace();
}
}//GEN-LAST:event_jButton2ActionPerformed
private void jButton4ActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_jButton4ActionPerformed
// TODO add your handling code here:
try
{
// send message
String did=jTextField1.getText().trim();
String mes=jTextField3.getText().trim();
ArrayList dpath=new ArrayList();
ArrayList dcost=new ArrayList();
DefaultTableModel tm1=(DefaultTableModel)jTable2.getModel();
int row=tm1.getRowCount();
int fl=0;
for(int i=0;i<row;i++)
{
String s1=tm1.getValueAt(i, 0).toString();
String s2[]=s1.split("-->");
if(s2.length>=3) // s2.length>3
{
dpath.add(s1);
dcost.add(precost.get(i));
fl=1;
}
}
System.out.println("dpath "+dpath);
System.out.println("dcost "+dcost);
System.out.println("fl "+fl);
if(fl==0)
{
DatagramSocket ds=new DatagramSocket();
String ms="Message#"+myid+"#"+did+"#"+myid+"-->"+did+"#"+mes+"#"+did;
byte dd[]=ms.getBytes();
DatagramPacket dp=new DatagramPacket(dd,0,dd.length,InetAddress.getByName("127.0.0.1"),Integer.parseInt(did)+9000);
ds.send(dp);
}
else
{
String a1[]=dcost.get(0).toString().split("-->");
int cd1=0;
for(int i=0;i<a1.length;i++)
{
cd1=cd1+Integer.parseInt(a1[i]);
}
int lc=cd1;
String lp=dpath.get(0).toString();
for(int i=1;i<dcost.size();i++)
{
String a2[]=dcost.get(i).toString().split("-->");
int cd2=0;
for(int j=0;j<a2.length;j++)
{
cd2=cd2+Integer.parseInt(a2[j]);
}
if(cd1>cd2)
{
lc=cd2;
lp=dpath.get(i).toString();
cd1=cd2;
}
}
System.out.println("request to ---> "+lp+" : "+lc);
DatagramSocket ds=new DatagramSocket();
String re[]=lp.split("-->");
for(int i=1;i<re.length;i++)
{
String sg="Message#"+re[i-1]+"#"+re[i]+"#"+lp+"#"+mes+"#"+did;
byte bte[]=sg.getBytes();
DatagramPacket dp=new DatagramPacket(bte,0,bte.length,InetAddress.getByName("127.0.0.1"),Integer.parseInt(re[i])+9000);
ds.send(dp);
}
String sd1[]=dcost.get(0).toString().split("-->");
int cs1=Integer.parseInt(sd1[0]);
int tc=cs1;
String sd2[]=dpath.get(0).toString().split("-->");
String tn=sd2[1];
for(int i=1;i<dcost.size();i++)
{
String sa[]=dcost.get(i).toString().split("-->");
int cs2=Integer.parseInt(sa[0]);
if(cs1>cs2)
{
tc=cs2;
String sa1[]=dpath.get(i).toString().split("-->");
tn=sa1[1];
cs1=cs2;
}
}
// System.out.println("request to ---> "+tn+" : "+tc);
String ms="Request"+"#"+myid+"#"+tn+"#"+did+"#"+lp+"#"+lc;
byte bte[]=ms.getBytes();
DatagramPacket dp=new DatagramPacket(bte,0,bte.length,InetAddress.getByName("127.0.0.1"),tc+9000);
ds.send(dp);
}
}
catch(Exception e)
{
e.printStackTrace();
}
}//GEN-LAST:event_jButton4ActionPerformed
/**
* @param args the command line arguments
*/
public static void main(String args[]) {
java.awt.EventQueue.invokeLater(new Runnable() {
public void run() {
// new NodeFrame().setVisible(true);
}
});
}
// Variables declaration - do not modify//GEN-BEGIN:variables
private javax.swing.JButton jButton1;
private javax.swing.JButton jButton2;
private javax.swing.JButton jButton4;
private javax.swing.JLabel jLabel2;
private javax.swing.JLabel jLabel4;
private javax.swing.JPanel jPanel2;
private javax.swing.JPanel jPanel3;
private javax.swing.JPanel jPanel4;
private javax.swing.JPanel jPanel5;
private javax.swing.JPanel jPanel6;
private javax.swing.JScrollPane jScrollPane1;
private javax.swing.JScrollPane jScrollPane2;
private javax.swing.JScrollPane jScrollPane3;
private javax.swing.JTabbedPane jTabbedPane1;
public javax.swing.JTable jTable1;
public static javax.swing.JTable jTable2;
public javax.swing.JTable jTable3;
private javax.swing.JTextField jTextField1;
private javax.swing.JTextField jTextField3;
// End of variables declaration//GEN-END:variables
}
7. SYSTEM TESTING
General
Software testing is the process used to help identify the correctness, completeness,
security and quality of developed computer software. Testing is a process of technical
investigation, performed on behalf of stakeholders, that is intended to reveal quality-related
information about the product with respect to the context in which it is intended to operate. In
general, software engineers distinguish software faults from software failures. Our project,
"Energy-Efficient Protocol for Cooperative Networks", is tested with the following testing
methodologies.
Developing Methodologies
The test process begins by developing a comprehensive plan to test the general
functionality and special features on a variety of platform combinations. Strict quality-control
procedures are used. The process verifies that the application meets the requirements specified in
the system requirements document and is free of defects. The following considerations are used to
develop the framework for the test methodologies.
Acquire and study the test strategy
A team familiar with the business risks associated with the software normally
develops the test strategy, while the test team develops the tactics. Thus the test team needs to
acquire and study the test strategy. The test tactics are analyzed and studied to find out the
various test factors, risks and effects. The risk involved in our project is implementing the
cooperative route construction and message transmission, so proper knowledge of the testing
strategies should be gained in order to avoid such high-level risks.
Determine the type of development project
The type of development refers to the platform or methodology used for developing the
project. As this is a simulation project, we go for prototyping. Prototypes are simply
predefined structures or models which can be used for further modeling. By using prototypes
we can modify an existing module of the application for other specific operations. Here
the test tactic is to verify that all the tools are used properly and to test the functionality.
Determine the type of software system
The type of software system relates to the type of processing which will be encountered
by the system. In this project, the software system we use is Java. We have chosen Java
for its portability and its support for networking and GUI development.
Determine the scope of the software system
The scope of the project refers to the overall activities or operations to be included in the
system being tested. The scope of the new system varies from that of the existing one. In the
existing system, each node transmits on its own, which leads to high energy consumption and a
high probability of packet-delivery failure. In this project, neighboring nodes cooperate as
transmitter and receiver clusters, which reduces energy consumption and the error rate.
Identify the tactical risks
Tactical risks are lower-level subsets of the strategic risks. The risks related to
the application and its methodologies are identified. The risk involved in our project is
implementing the cooperative route construction and message transmission.
Determine when the testing should occur
In the above processes we identified the type of processing, the scope and the risks
associated with our project. Testing can occur throughout all phases of the project.
During the analysis phase, the testing strategy and requirements are determined. In the design
phase, the complexities of the design with respect to the requirements are determined, and the
structural and functional test conditions are tested. During implementation, the design
consistency is checked. In the test phase, the overall testing of the application is done, and the
adequacy of the test plan is also assessed. In the maintenance phase, testing for modifying
and reusing the system is done.
Build the system test plan
The test plan of the project should provide all the information on the application being
tested. The test plan is simply a model to be followed during the progression of the testing,
consisting of a sequential set of procedures for testing the application. Initially, node
registration and route construction are tested. The test is then carried out for cost computation,
route selection and finally end-to-end message delivery.
Build the unit test plan
In this case we divide the system into three different components or units, each
having specific functions: node registration and the GUI, cooperative route construction, and
message transmission. Each unit has its own test plan. The main purpose of the unit test plan is
to eliminate errors and bugs during the initial stage of the implementation. The more errors are
debugged in the initial stage, the less complex the overall testing after integrating all the units
of the system. The unit test plan can be either simple or complex based on the functionality of
that unit.
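As a concrete illustration, a minimal unit-test sketch for the path-cost selection logic in the route handler might look as follows. Plain `assert` statements are used rather than a test framework, and `totalCost` is a hypothetical extraction of the inline cost-summing code, not a method that exists in the project source:

```java
// A minimal unit-test sketch (plain assertions, no framework) for the
// path-cost selection logic in the route handler. totalCost() is a
// hypothetical extraction of the inline cost-summing code.
public class PathCostTest {

    // Sums the per-hop costs in a cost string such as "14-->33-->76".
    static int totalCost(String costs) {
        int sum = 0;
        for (String c : costs.split("-->")) {
            sum += Integer.parseInt(c);
        }
        return sum;
    }

    public static void main(String[] args) {
        // A single-hop cost string is its own total.
        assert totalCost("12") == 12;
        // Multi-hop total from the execution trace: 14 + 33 + 76 = 123.
        assert totalCost("14-->33-->76") == 123;
        // The cheaper of two candidate paths should be selected.
        int a = totalCost("14-->76");       // 90
        int b = totalCost("14-->55-->13");  // 82
        assert Math.min(a, b) == b;
        System.out.println("all path-cost assertions passed");
    }
}
```

Run with `java -ea PathCostTest` so that assertions are enabled.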
TESTING TECHNIQUE - TOOL SELECTION PROCESS
In this process the appropriate testing approach is selected from the various development
models, such as the prototyping model and the waterfall model, by analyzing the nature of the
project. We go for the waterfall model.
Select test factor
This phase selects the appropriate test factors. The particular modules of the project
essential to the testing methodology are sorted out first; this helps the testing process to
be completed within time. The test factors for our project include route construction, cost
computation and message delivery.
Determine SDLC phase
This phase involves the structural testing of the project, which is used for easy
implementation of the functions. Though structural testing is strongly associated with the
coding phase, it should be carried out in all phases of the lifecycle. This evaluates
whether all the structures are tested and sound.
Identify the criteria to test
In this phase the testing unit is trained with the necessary constraints and limits within
which the project is to be tested. In our project the testing unit is trained to test whether a
route to the destination node has been established before a message is sent.
Select type of test
Individuals responsible for testing may prefer to select their own technique and tool based
on the test situation. For selecting the appropriate testing process, the project should be
analyzed with the following three testing concepts:
1. Structural versus functional testing
2. Dynamic versus static testing
3. Manual versus automatic testing
After analyzing the above testing concepts, we decided to test our project with the
waterfall model testing methodology.
Structural Testing
Structural-analysis-based test sets tend to uncover errors that occur during coding of
the program. The properties of the test set reflect the internal structure of the program.
Structural testing is designed to verify that the developed system and programs work as specified
in the requirements. The objective is to ensure that the product is structurally sound and
will function correctly.
Functional Testing
Functional testing ensures that the requirements are properly satisfied by the application
system. The functions are those tasks that the system is designed to accomplish. This testing is
concerned not with how processing occurs but with the results of the processing.
Functional-analysis-based test sets tend to uncover errors that occur in implementing
requirements or design specifications.
Select technique
After selecting the appropriate testing methodology we have to select the necessary
testing technique, such as stress testing, execution testing, recovery testing, operation testing,
compliance testing or security testing. We are performing operation testing.
Figure 8.1 Testing technique and tool selection process
Select test method
We have to select the testing method to be carried out throughout the lifecycle.
The two different methods are static and dynamic. Dynamic testing needs the program to be
executed completely before testing; this is a traditional approach in which the faults detected at
the end are very hard to rectify. In the static process the program is tested line by line,
and the testing process is allowed to proceed only after rectifying any fault that occurs. This
makes the process more expensive, so a combination of both static and dynamic testing methods
is used.
Mode of testing
It is necessary to select the mode in which the testing method is to be carried out. The
two different modes are manual and automated. Real-time projects need frequent
interactions, so it is impractical to carry out the testing process entirely with an automated
tool. Our project uses manual testing.
Unit test technique
This phase examines the techniques, assessment and management of unit testing and
analysis. Testing and analysis strategies are categorized according to whether their goal is
functional, structural, or a combination of these. It assists a software engineer to define,
conduct and evaluate unit tests and to assess new unit-test techniques.
System Testing
Once the entire system has been built, it has to be tested against the "System
Specification" to check whether it delivers the features required. It is still developer-focused,
although specialist developers known as system testers are normally employed to do it. In essence,
system testing is not about checking the individual parts of the design, but about checking the
system as a whole; in effect, it is one giant component. System testing can involve a number of
specialist types of test to see whether all the functional and non-functional requirements have
been met.
Acceptance Testing
Acceptance testing checks the system against the "Requirements". It is similar to system
testing in that the whole system is checked, but the important difference is the change in focus:
system testing checks that the system that was specified has been delivered, while acceptance
testing checks that the system delivers what was requested. The customer, and not the developer,
should always do acceptance testing. The customer knows what is required from the system to
achieve value in the business and is the only person qualified to make that judgment.
Regression Testing
This involves assurance that all aspects of an application system remain
functional after changes are introduced, since the introduction of change is the cause of
problems in previously tested segments. It means retesting unchanged segments of the application
system, normally by rerunning tests that have been previously executed to ensure that the same
results are achieved now as when the segments were last tested.
7.2 Test cases

Positive (+ve) test cases

S.No | Test case description | Actual value | Expected value | Result
-----|-----------------------|--------------|----------------|-------
1    |                       |              |                | True
2    |                       |              |                | True
3    |                       |              |                | True
4    |                       |              |                | True

Negative (-ve) test cases

S.No | Test case description | Actual value | Expected value | Result
-----|-----------------------|--------------|----------------|-------
1    |                       |              |                | False
2    |                       |              |                | False
3    |                       |              |                | False
4    |                       |              |                | False
Types of Testing:
Smoke Testing: the process of initial testing in which the tester checks for the availability
of all the functionality of the application in order to perform detailed testing on it (the main
check is for the available forms).
Sanity Testing: a type of testing conducted on an application initially to check
for its proper behavior, that is, to check that all the functionality is available before
detailed testing is conducted.
Regression Testing: one of the most important types of testing. Regression testing is the
process in which functionality that has already been tested is tested once again whenever
a new change is added, in order to check whether the existing functionality remains the same.
Re-Testing: the process in which testing is performed on functionality that has
already been tested, to make sure that the defects are reproducible and to rule out
environment issues if any defects are present.
Static Testing: testing that is performed on an application when it is not being
executed. Ex: GUI, document testing.
Dynamic Testing: testing that is performed on an application while it is being
executed. Ex: functional testing.
Alpha Testing: a type of user acceptance testing conducted on an
application just before it is released to the customer.
Beta Testing: a type of UAT conducted on an application after it is released
to the customer, deployed into the real-time environment and accessed by the real-time users.
Monkey Testing: the process in which abnormal and beyond-capacity operations
are performed on the application to check its stability in spite of a user's abnormal
behavior.
Compatibility Testing: the testing process in which the product is tested
on environments with different combinations of databases, application servers, browsers,
etc., in order to check how far the product is compatible with all these platform
combinations.
Installation Testing: the process of testing in which the tester tries to install or
deploy the module into the corresponding environment by following the guidelines produced in
the deployment document, and checks whether the installation is successful.
Adhoc Testing: unlike formal testing, where a test-case document is used, ad hoc testing
is done without a test-case document, to cover testing of features that are not covered in
the test-case document. It is also intended to perform GUI testing, which may involve cosmetic
issues.
7.3 Results and Discussions
Energy-Efficient Protocol for Cooperative Networks
1. Running the project:
2. Menu for showing the cooperative path:
Construction:
Message Details:
Routing path:
When we press the Construction button, it shows the paths from a single node to multiple nodes.
1-->2, 1-->3 and 1-->4 are the directed paths from Node 1, according to the above registration,
with their costs.
Next, run the cooperative transmission for the next node.
Node 5:
The routes from Node 5 to Node 6 and Node 7 are constructed, with costs 13 and 14.
In the same way, Node 6 communicates with two nodes:
6-->77 and 6-->9 communication.
Message details processing:
Communication with 44:
Communication with 3-->5 and 3-->6:
3-->7, 3-->77 and 3-->9 come up automatically, taking the support of the already existing nodes.
Node 2: 2-->22 and 2-->23 communication.
Sending a message over 1-->3, with the data "I m from Node 1 to 3".
Then click Node 3 in the execution trace, go to Node 3 and click the Message Details button.
Node 77 registration:
Send data to receiver node 77:
We send data from Node 2 to Node 77. (When we click Get Route, it does not give a route,
because no route has been established with this node.)
Node 22:
Node 23:
Now we send data from Node 1 to Node 77 (1-->77) via Node 3 and Node 6.
Observe the output at Node 3.
Observe the output at Node 6.
Observe the output at Node 77.
Energy efficiency, index-based:
Cost links with cluster heads:
dt path [1-->2, 1-->3, 1-->4, 1-->3-->3, 1-->3-->3-->3, 1-->3-->6-->77, 1-->3-->3-->6-->77, 1-->3-->6, 1-->3-->3-->6, 1-->3-->5-->6, 1-->3-->3-->3-->6]
costList [12, 14, 14, 14-->33, 14-->33-->33, 14-->76-->8, 14-->33-->76-->8, 14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13]
pre cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13]
path cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13]
costList [12, 14, 14, 14-->33, 14-->33-->33, 14-->76-->8, 14-->33-->76-->8, 14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13]
pre cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13]
path cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13]
as1 1-->2-->23-->5-->6
dt path [1-->2, 1-->3, 1-->4, 1-->3-->3, 1-->3-->3-->3, 1-->3-->6-->77, 1-->3-->3-->6-->77, 1-->3-->6, 1-->3-->3-->6, 1-->3-->5-->6, 1-->3-->3-->3-->6, 1-->3-->3-->5-->6]
costList [12, 14, 14, 14-->33, 14-->33-->33, 14-->76-->8, 14-->33-->76-->8, 14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13, 12-->22-->55-->13]
pre cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13, 12-->22-->55-->13]
path cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13, 12-->22-->55-->13]
costList [12, 14, 14, 14-->33, 14-->33-->33, 14-->76-->8, 14-->33-->76-->8, 14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13, 12-->22-->55-->13]
pre cost [14-->76, 14-->33-->76, 14-->55-->13, 14-->33-->33-->76, 14-->33-->55-->13, 12-->22-->55-->13]
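The messages behind this trace follow the '#'-delimited frame layout built by the handler, Message#source#nextHop#path#payload#destination, with each node listening on UDP port (node id + 9000). A small sketch of how one such frame decomposes (parsing only, no socket is opened; the payload value here is illustrative):

```java
// Sketch of the datagram frame format used by the route handler.
// Fields are joined with '#'; each node listens on UDP port (id + 9000).
public class FrameDemo {
    public static void main(String[] args) {
        String frame = "Message#1#3#1-->3-->6-->77#hello#77";
        String[] f = frame.split("#");
        String type = f[0];    // "Message" or "Request"
        String src  = f[1];    // sending node id
        String next = f[2];    // next-hop node id
        String path = f[3];    // full route, e.g. "1-->3-->6-->77"
        String data = f[4];    // user payload
        String dest = f[5];    // final destination id
        int port = Integer.parseInt(next) + 9000; // next hop's UDP port
        System.out.println(type + " from " + src + " to next hop " + next
                + " on port " + port + ", dest " + dest + ", path " + path
                + ", data \"" + data + "\"");
    }
}
```

For the example frame above, the decoded next hop is node 3 on port 9003, matching the 1-->3-->6-->77 route in the trace.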
8 CONCLUSION AND FUTURE ENHANCEMENTS
8.1 Conclusion
We evaluated the performance of cooperative transmission, where nodes
in a sending cluster are synchronized to communicate a packet to nodes in a receiving cluster. In
our communication model, the power of the received signal at each node of the receiving cluster
is the sum of the powers of the independently transmitted signals of the nodes in the sending
cluster. The increased power of the received signal, relative to traditional single-node to
single-node communication, leads to overall savings in network energy and to end-to-end
robustness to data loss.
We proposed an energy-efficient cooperative protocol, and we analyzed the robustness of
the protocol to data-packet loss. When the nodes are placed on a grid, we showed that, compared
to the disjoint-paths scheme, our cooperative protocol reduces the probability of failure
to deliver a packet to the destination by a factor of up to 100, depending on the values of the
considered parameters. Similarly, compared to the CAN protocol and to the one-path scheme, this
reduction amounts to a factor of up to 10,000. Our study also analyzed the capacity upper bound
of our protocol, showing improvement over the corresponding values of the other three protocols.
The total energy consumption was analytically computed, illustrating substantial energy
savings. For example, when nodes are positioned on a grid, the energy savings of our cooperative
protocol over the CAN protocol are up to 80%. The size of the clusters should be kept relatively
small when the inter-cluster distance is small, with the optimal cluster size increasing with the
inter-cluster distance. For scenarios that are not covered by our theoretical analysis, we used
simulation to evaluate and compare the protocols. For random placement of nodes, the simulation
results show that our cooperative transmission protocol saves up to 20% of energy compared to the
CAN protocol and up to 40% compared with the disjoint-paths and one-path schemes. Overall, the
study demonstrates that the energy savings of our protocol, relative to the other schemes, do not
substantially decrease even when the data-packet loss approaches 50%. Our protocol also
supports larger capacity and lower delay under high-load conditions, compared to the CAN
protocol, the one-path scheme, and the disjoint-paths scheme.