An Iterative Algorithm for Trust Management
and Adversary Detection for Delay-Tolerant Networks
Authors: Erman Ayday, Faramarz Fekri
Presented by: Mehmet Saglam
Department of Computer Science, Virginia Polytechnic Institute and State University, Northern Virginia Center, USA
Outline
Introduction
Iterative Trust and Reputation Management Mechanism (ITRM)
Trust Management and Adversary Detection in DTNs
Conclusion
Introduction
Sparseness and delay are particularly high
Characterized by intermittent contacts between nodes, leading to a space-time evolution of multihop paths (routes) for transmitting packets to the destination
• i.e., a DTN's links on an end-to-end path do not exist contemporaneously
Hence, intermediate nodes may need to store, carry, and wait for opportunities to transfer data packets toward their destinations
Delay Tolerant Networks (DTNs)
Application Areas:
• Emergency response
• Wildlife surveying
• Vehicle-to-vehicle communications
• Healthcare
• Military
• Tactical sensing
• …
Introduction
Delay Tolerant Networks (DTNs)
The existence of end-to-end paths via contemporaneous links is assumed in spite of node mobility
If a path is disrupted due to mobility, the disruption is temporary and either the same path or an alternative one is restored very quickly
MANETs are special types of DTNs
Introduction
Mobile Ad hoc Networks (MANETs)
Problem of DTNs in packet communication:
Routing, unicasting, broadcasting, and multicasting become significantly harder, even with no packet erasures due to communication link errors
Reason: lack of knowledge of the network topology, and the lack of an end-to-end path
Introduction
DTNs vs. MANETs
Byzantine Attack: one or more legitimate nodes have been compromised and are fully controlled by the adversary. A Byzantine malicious node may mount the following attacks:
• Packet drop, in which the malicious node drops legitimate packets to disrupt data availability
• Bogus packet injection, in which the Byzantine node injects bogus packets to consume the resources of the network
• Noise injection, in which the malicious node changes the integrity of legitimate packets
Introduction
Byzantine Adversary attacks against DTNs (1/3)
Routing attacks, in which the adversary tampers with the routing by misleading the nodes
Flooding attacks, in which the adversary keeps the communication channel busy to prevent legitimate traffic from reaching its destination
Impersonation attacks, in which the adversary impersonates the legitimate nodes to mislead the network
Introduction
Routing attacks are not significant threats for DTNs because of the lack of an end-to-end path from a source to its destination
Attacks on packet integrity may be prevented using a robust authentication mechanism in DTNs
Byzantine Adversary attacks against DTNs (2/3)
However, packet drop is harder to contain because nodes’ cooperation is fundamental for the operation of DTNs
This paper focuses on the packet drop attack, which causes serious damage to the network in terms of data availability, latency, and throughput
Finally, Byzantine nodes may individually or in collaboration attack the security mechanism (e.g., the trust management and malicious node detection schemes)
Introduction
Byzantine Adversary attacks against DTNs (3/3)
In MANETs, reputation-based trust management systems are shown to be an effective way to cope with adversaries
Trust plays a pivotal role for a node in choosing with which nodes it should cooperate, improving data availability in the network
Examining trust values has been shown to lead to the detection of malicious nodes in MANETs
Achieving the same for DTNs poses additional challenges
Constraints posed by DTNs make existing security protocols inefficient or impractical
Introduction
Reputation-based trust management system in MANETs
Develop a security mechanism for DTNs
• To evaluate the nodes based on their behavior during their past interactions
• To detect misbehavior due to Byzantine adversaries, selfish nodes, and faulty nodes
This paper develops the Iterative Trust and Reputation Mechanism (ITRM) and explores its application to DTNs
• By proposing a distributed malicious node detection mechanism for DTNs using ITRM
• ITRM enables every node to evaluate other nodes based on their past behavior, without requiring a central authority
Introduction
Main objective of the paper
In MANETs, a node evaluates another by using either direct or indirect measurements. Building reputation values by direct measurement is achieved either by using the watchdog mechanism or by using the ACK from the destination
The use of indirect measurements to build reputation values is also allowed while the watchdog mechanism is used to obtain direct measurements
Reputation values are constructed using the ACK messages sent by the destination node.
Introduction
Related Work (1/4)
The techniques used in MANETs are not applicable to DTNs
• The watchdog mechanism cannot be used to monitor another node after forwarding the packets, because links on an end-to-end path do not exist contemporaneously and the node loses its connection with the intermediate node it wishes to monitor
• Relying on ACK packets would fail because of the lack of a fixed common multihop path
• Using indirect measurements is possible; however, it is unclear how these measurements can be obtained
Introduction
Related Work (2/4)
Reputation systems for P2P networks are either not applicable for DTNs or they require excessive time to build the reputation values of the peers
The EigenTrust algorithm (the most popular one) is constrained by the fact that the trustworthiness of a peer (in its feedback) is taken to be equivalent to its reputation value
However, trusting a peer’s feedback and trusting a peer’s service quality are two different concepts
A malicious peer can attack the network protocol or the reputation management system independently. Therefore, the EigenTrust algorithm is not practical for DTNs
Introduction
Related Work (3/4)
The Cluster Filtering Method for reputation management introduces quadratic complexity, while the computational complexity of ITRM is linear in the number of users in the network
Hence, the ITRM scheme is more scalable and suitable for large-scale reputation systems
Several other works have focused on securing DTNs by using Identity Based Cryptography and packet replication which provide confidentiality and authentication
On the other hand, ITRM provides malicious node detection and high data availability with low packet latency
Introduction
Related Work (4/4)
1. Computing the service quality (reputation) of the peers who provide a service, using the feedback from the peers who used the service (referred to as the raters)
2. Determining the trustworthiness of the raters by analyzing their feedback about the Service Providers (SPs)
ITRM Mechanism
The Goals of ITRM
1. Bad mouthing, in which malicious raters collude and attack the SPs with the highest reputation by giving low ratings in order to undermine them
2. Ballot stuffing, in which malicious raters collude to increase the reputation values of peers with low reputations.
3. Sophisticated attacks
   a. Utilize bad mouthing or ballot stuffing with a strategy such as RepTrap
   b. Malicious raters provide both reliable and malicious ratings to mislead the algorithm
Considered attacks against trust and reputation management systems
ITRM Mechanism
If a new rating arrives from the ith rater about the jth SP, the scheme updates the value of the edge {i,j} by averaging the new rating with the old value of the edge multiplied by the fading factor
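The update rule just described can be sketched as follows; this is an illustrative reconstruction, and the container layout, function name, and default fading factor are assumptions rather than values from the paper:

```python
# Hypothetical sketch of the edge-update rule: when a new rating arrives
# from rater i about SP j, the stored edge value becomes the average of
# the new rating and the fading-factor-discounted old value.

def update_edge(edges, i, j, new_rating, fading_factor=0.9):
    """Update the {i,j} edge with a new rating, fading the old value."""
    key = (i, j)
    if key not in edges:
        edges[key] = float(new_rating)          # first rating: store as-is
    else:
        faded_old = fading_factor * edges[key]  # discount the old value
        edges[key] = (new_rating + faded_old) / 2.0
    return edges[key]

edges = {}
update_edge(edges, 1, 2, 4.0)                    # first rating -> 4.0
update_edge(edges, 1, 2, 5.0)                    # (5.0 + 0.9*4.0)/2 = 4.3
```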
ITRM Mechanism
Global reputation of the jth SP
Rating that peer i reports about SP j, whenever a transaction is completed between the two peers
The trustworthiness of the ith peer as a rater
Age-factored rating: each stored rating is discounted by the fading parameter λ raised to the age of the rating, incorporating the time-varying aspect of the reputation of the SPs
ITRM Mechanism
At iteration v of the ITRM algorithm, the reputation of SP j and the value of each edge {i,j} are recomputed: the reputation of SP j is a trustworthiness-weighted average of the (age-factored) ratings, taken over the set of all raters connected to SP j
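Concretely, the aggregation step can be sketched as a trust-weighted average; the function and container names below are illustrative assumptions, not the paper's notation:

```python
def sp_reputation(ratings, trust):
    """Reputation of one SP: trust-weighted average of the ratings it
    received. ratings: {rater_id: rating}; trust: {rater_id: weight}."""
    weight_sum = sum(trust[i] for i in ratings)
    if weight_sum == 0:
        return 0.0
    return sum(trust[i] * r for i, r in ratings.items()) / weight_sum

# Two equally trusted raters: plain average of their ratings.
sp_reputation({1: 5.0, 2: 3.0}, {1: 1.0, 2: 1.0})  # -> 4.0
```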
The list of malicious raters (blacklist) is empty
ITRM Mechanism
Initial Iteration
v = 1: Compute the average inconsistency factor of each rater i, using the current reputation values of the SPs
• The inconsistency of rater i is averaged over the set of SPs connected to rater i, using a distance metric d(·,·) to measure the inconsistency between the rater's ratings and the computed reputation values
ITRM Mechanism
First Iteration (1/2)
ITRM Mechanism
First Iteration (2/2)
List the inconsistency factors of all raters in ascending order
Select and blacklist the rater i with the highest inconsistency
• if it is greater than or equal to a predefined threshold τ
Delete the ratings of the blacklisted rater for all SPs
If there is no rater to blacklist, stop the algorithm
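Putting the steps above together, one pass of the iterative filtering might look like the following sketch. The distance metric is fixed to the absolute difference, and all names are illustrative assumptions, not the paper's implementation:

```python
def itrm(ratings, trust, tau):
    """Iteratively blacklist the most inconsistent raters.
    ratings: {rater: {sp: rating}}; trust: {rater: weight};
    tau: blacklisting threshold. Returns (sp_values, blacklist)."""
    ratings = {i: dict(rs) for i, rs in ratings.items()}  # work on a copy
    blacklist = []
    while True:
        # Recompute SP values from surviving ratings (trust-weighted mean).
        sps = {}
        for i, rs in ratings.items():
            for j, r in rs.items():
                sps.setdefault(j, []).append((trust[i], r))
        sp_val = {j: sum(w * r for w, r in lst) / sum(w for w, _ in lst)
                  for j, lst in sps.items()}
        # Inconsistency of each rater: mean |rating - current SP value|.
        inc = {i: sum(abs(r - sp_val[j]) for j, r in rs.items()) / len(rs)
               for i, rs in ratings.items() if rs}
        if not inc:
            return sp_val, blacklist
        worst = max(inc, key=inc.get)
        if inc[worst] < tau:        # no rater reaches the threshold: stop
            return sp_val, blacklist
        blacklist.append(worst)     # blacklist and drop all its ratings
        del ratings[worst]
```

With two honest raters reporting 5 and one malicious rater reporting 1 for the same SP, the malicious rater has the largest deviation from the aggregate, is blacklisted, and the SP's value is restored to 5.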
ITRM Mechanism
ITRM EXAMPLE
- Actual reputations are equal to 5
- τ = 0.7
- Fading parameters are equal to 1
- Raters' trustworthiness values are equal
- Raters {1,2,3,4,5} are honest
- Raters {6,7} are malicious
Trustworthiness values are updated using the set of all past blacklists together in a Beta distribution. Initially, prior to the first time slot, the trustworthiness value of each rater peer i is set to 0.5
- Then, if the rater peer i is blacklisted, its trustworthiness value is decreased (penalized by the factor δ)
- Otherwise, its trustworthiness value is increased
where λ is the fading parameter and δ denotes the penalty factor for the blacklisted raters
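A Beta-style trustworthiness update can be illustrated as below. The exact update in the paper is not reproduced here, so the α/β counter parameterization is a plausible sketch only; starting from α = β = 1 yields the initial value 0.5 mentioned above:

```python
def update_trust(alpha, beta, blacklisted, lam=0.9, delta=5.0):
    """Beta-distribution-style trustworthiness update (illustrative).
    lam fades past evidence; delta penalizes blacklisted raters."""
    alpha, beta = lam * alpha, lam * beta       # fade past observations
    if blacklisted:
        beta += delta                           # heavy penalty when blacklisted
    else:
        alpha += 1.0                            # reward a consistent time slot
    trust = alpha / (alpha + beta)              # mean of Beta(alpha, beta)
    return alpha, beta, trust
```

A rater that keeps out of the blacklist drifts above 0.5; a blacklisted rater drops well below it.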
Updating values via the Beta distribution has one major disadvantage.
An existing malicious rater with a low trustworthiness value could cancel its account and sign in under a new ID
Raters’ Trustworthiness
ITRM Mechanism
To show that the general ITRM framework is a robust trust and reputation management mechanism, its security is briefly evaluated both analytically and via computer simulations
Then, the security of ITRM will be evaluated in a realistic DTN environment
Security Evaluation of ITRM
ITRM Mechanism
Frequently used notations
ITRM Mechanism
Assumed that
• the quality of the SPs remains unchanged during the time slots
• the fading parameter is set to 1 (for simplicity)
• the evaluation is for the bad-mouthing attack only (the others have similar results)
Ratings generated by the nonmalicious raters are distributed uniformly among the SPs
d is a random variable with a Yule-Simon distribution, which resembles the power-law distribution used in modeling online systems
Analytical Security Evaluation (1/3)
ITRM Mechanism
Lemma 1: Let the number of unique raters for the jth SP and the total number of outgoing edges from an honest rater in t elapsed time slots be given, and let Q be a random variable denoting the exponent of the fading parameter λ at the tth time slot. Then, ITRM is a τ-eliminate-optimal scheme if conditions (6a) and (6b) are satisfied at the tth time slot, where Λ is the index set of the set Γ
Analytical Security Evaluation (2/3)
ITRM Mechanism
The design parameter τ should be selected based on the highest fraction of malicious raters to be tolerated
We use a waiting time t such that (6a) and (6b) are satisfied with high probability
Then, among all τ values we select the highest τ value to minimize the probability of blacklisting a reliable rater
Analytical Security Evaluation (3/3)
ITRM Mechanism
Assumed that there were already 200 raters and 50 SPs
• 50 time slots have passed since the launch of the system
• After this initialization period, 50 more SPs were introduced
• A fraction of the existing raters changed behavior (became malicious)
• By providing reliable ratings during the initialization period, the malicious raters increased their trustworthiness values
Eventually, there are D + H = 200 raters and N = 100 SPs
The performance of ITRM is reported, for each time slot, as the Mean Absolute Error (MAE) between the computed and the actual reputation values
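The per-time-slot error metric is simply the mean absolute difference between computed and actual reputations; a minimal sketch, with assumed names:

```python
def mean_absolute_error(estimated, actual):
    """MAE between estimated and actual reputation values, averaged
    over all SPs (the two dicts share the same keys)."""
    return sum(abs(estimated[j] - actual[j]) for j in actual) / len(actual)

# One SP off by 1, one exact, over two SPs.
mean_absolute_error({'a': 4.0, 'b': 5.0}, {'a': 5.0, 'b': 5.0})  # -> 0.5
```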
Simulations (1/4)
ITRM Mechanism
Performance is evaluated in the presence of bad mouthing
The victims are chosen among the newcomer SPs in order to have the most adverse effect
The malicious raters do not deviate very much from the actual values, to remain under cover
Malicious raters apply a low-intensity attack (the RepTrap attack) by choosing the same set of SPs and rating them as n = 4
By assuming that the ratings of the reliable raters deviate from the actual reputation values, this attack scenario becomes even harder to detect than RepTrap
- The attack parameter Δ is set such that Δ/b = 1
Simulations (2/4)
ITRM Mechanism
Simulations (3/4)
ITRM Mechanism
Simulations (4/4)
ITRM Mechanism
- Although the malicious raters stay under cover when they attack with a very small number of edges, this type of attack limits the malicious raters' ability to make a serious impact (they can attack only a small number of SPs)
Adversary Models and Security Threats
Trust Management and Adversary Detection
Attack Types
1) Attack on the network communication protocol
2) Attack on the security mechanism
Packet drop and packet injection (type 1)
• An insider adversary drops legitimate packets it has received
• A malicious node may also generate its own flow to deliver to another node via the legitimate nodes
Bad mouthing (ballot stuffing) on trust management (type 2)
• A malicious node may give incorrect feedback in order to undermine the trust management system
• Bad-mouthing attacks attempt to reduce the trust in a victim node
• Ballot-stuffing attacks boost the trust value of a malicious ally
Adversary Models and Security Threats
Trust Management and Adversary Detection
Random attack on trust management (type 2)
• A Byzantine node may adjust its packet drop rate (on a scale of zero to one) to stay under cover
Bad mouthing (ballot stuffing) on the detection scheme (type 2)
• Every legitimate node creates its own trust entries in a table (rating table) for a subset of network nodes for which it has collected sufficient feedback
• Each node also collects rating tables from other nodes
• When the Byzantine nodes transfer their tables to a legitimate node, they may victimize the legitimate nodes or help their malicious allies
• This effectively reduces the detection performance of the system
Network/Communication Model and Technical Background
Trust Management and Adversary Detection
The Random Waypoint (RWP) model produces exponentially decaying intercontact time distributions for the network nodes, making the mobility analysis tractable
Each node is assigned an initial location in the field
Nodes travel at a constant speed to a randomly chosen destination; the speed is chosen randomly between a minimum and a maximum value
After reaching the destination, the node may pause for a random amount of time before a new destination and speed are chosen randomly for the next movement
Mobility Models (1/2)
Network/Communication Model and Technical Background
Trust Management and Adversary Detection
The Levy-walk (LW) model is shown to produce power-law distributions, which have been studied extensively for animal movement patterns and have recently been shown to be a promising model for human mobility
The movement-length and pause-time distributions closely match truncated power-law distributions
Angles of movement are drawn from a uniform distribution
Mobility Models (2/2)
Network/Communication Model and Technical Background
Trust Management and Adversary Detection
Each packet contains its two-hop history in its header
• when node B receives a packet from node A, it learns from which node A received that packet
This mechanism is useful for the feedback mechanism
Packet Format
Routing and packet exchange protocol
The source node never transmits multiple copies of the same packet
Exchange of packets between two nodes follows a back-pressure policy
• Assume nodes A and B have x and y packets belonging to the same flow f (where x > y). Then, if the contact duration permits, node A transfers (x - y)/2 packets belonging to flow f to node B
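The back-pressure exchange rule can be sketched directly from the description above; using integer division for odd differences is an assumption, not specified in the slides:

```python
def packets_to_transfer(x, y):
    """Back-pressure rule sketch: node A holding x packets of a flow,
    meeting node B holding y packets of the same flow (x > y), hands
    over (x - y)/2 packets, balancing the per-flow buffer levels."""
    if x <= y:
        return 0                # A is not the more loaded node: no transfer
    return (x - y) // 2         # integer number of packets (assumption)
```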
Iterative Detection for DTNs
Trust Management and Adversary Detection
In DTNs, due to intermittent contacts, a judge node has to wait for a very long time to issue its own ratings for all the nodes in the network
However, it is desirable to have a fresh estimate of the reputation in a timely manner, mitigating the effects of malicious nodes immediately
Feedback ratings are binary: 0 (malicious) or 1 (honest)
Iterative Detection for DTNs
Trust Management and Adversary Detection
Trust Management Scheme for DTNs (1/5)
Trust Management and Adversary Detection
The authentication mechanism for the packets generated by a specific source is provided by a Bloom filter and ID-based signature (IBS)
When a source node sends some packets, it creates a Bloom filter output and signs it using IBS
When an intermediate node forwards packets to its contact, it also forwards the signed Bloom filter output for authentication
The feedback mechanism to determine the entries in the rating table is based on a 3-hop loop
Trust Management and Adversary Detection
When B and C meet, they first exchange signed time stamps
B sends the packets in its buffer
Node B transfers the receipts it has received thus far to C; those receipts include the proofs of node B's deliveries
C also gives a signed receipt to B
When the judge A and the witness C meet, they initially exchange their contact histories; A learns that C has met B and requests the feedback
Trust Management Scheme for DTNs (2/5)
Trust Management and Adversary Detection
The feedback consists of two parts; receipts of B and the hashes of those packets for evaluation
The feedbacks from the witnesses are not trustworthy on their own, because of bad-mouthing (ballot-stuffing) and random attacks
A judge node waits for a sufficient number of feedbacks before giving its verdict
Each judge node uses the Beta distribution to aggregate the multiple evaluations; if the aggregate is greater than 0.5, the suspect is rated as "1", otherwise as "0"
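A minimal sketch of the verdict step, assuming a uniform Beta(1, 1) prior so that the posterior mean is (positives + 1)/(total + 2); the 0.5 threshold is from the slide, while the choice of prior is an assumption:

```python
def verdict(feedbacks):
    """Aggregate binary feedbacks (1 = behaved honestly, 0 = dropped
    packets) with a Beta(1, 1) prior; the posterior mean decides the
    judge's rating for the suspect (threshold 0.5)."""
    pos = sum(feedbacks)
    neg = len(feedbacks) - pos
    posterior_mean = (pos + 1) / (pos + neg + 2)  # mean of Beta(pos+1, neg+1)
    return 1 if posterior_mean > 0.5 else 0
```

For example, three positive feedbacks out of four yield a posterior mean of 4/6 and a rating of "1"; one positive out of three yields 2/5 and a rating of "0".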
Trust Management Scheme for DTNs (3/5)
Trust Management and Adversary Detection
The sufficient number of feedbacks that is required to give a verdict with high confidence depends on the packet drop rate and detection level
The judge node applies ITRM at the lowest possible detection level, depending on the entries both in its own rating table and in the tables collected from other nodes
Assume a judge node M has collected rating tables from two other nodes, K and V
Let m, k, and v be the largest detection levels among the entries of M's, K's, and V's rating tables, respectively
Trust Management Scheme for DTNs (4/5)
Trust Management and Adversary Detection
M performs ITRM at the detection level max(m, k, v)
The malicious nodes may try to evade the detection mechanism by setting their packet drop rates to lower values
The proposed detection mechanism eventually detects all the malicious nodes, provided the judge node waits longer to apply ITRM at a lower detection level
Trust Management Scheme for DTNs (5/5)
Trust Management and Adversary Detection
The performance of ITRM is compared with well-known reputation management schemes (Bayesian and EigenTrust) in a realistic DTN environment
The RWP and LW mobility models are used to evaluate the performance of the proposed scheme
The simulation area is fixed to 4.5 km × 4.5 km and includes N = 100 nodes, each with a transmission range of 250 m
The intercontact time between two particular nodes is modeled as a random variable
Random variables x, y, and z represent the number of feedbacks received at judge node A, the total number of contacts that node B established after meeting A, and the number of distinct contacts of B after meeting A, respectively
Security Evaluations (1/9)
Trust Management and Adversary Detection
Lemma 2: Consider the time at which a transaction occurred between a particular judge-suspect pair, and the number of feedbacks received by the judge for that particular suspect node since then. The lemma gives the probability that the judge node has at least M feedbacks about the suspect node, from M distinct witnesses, after an additional time T
Security Evaluations (2/9)
Trust Management and Adversary Detection
Security Evaluations (3/9)
Trust Management and Adversary Detection
Lemma 3: Let a particular judge node start collecting feedbacks and generating its rating table at some initial time, and consider the number of entries in its rating table. The lemma gives the probability that the judge node has at least s entries after an additional time T
Security Evaluations (4/9)
Trust Management and Adversary Detection
ITRM compared with the Bayesian reputation management framework and the EigenTrust algorithm
However, neither the original Bayesian framework nor EigenTrust is directly applicable to DTNs, since both protocols rely on direct measurements, which are not practical in DTNs
ITRM performs better than the Bayesian framework since Bayesian approaches assume that the reputation values of the nodes are independent
Hence, in these schemes, each reputation value is computed independent of the other nodes’ reputation
Security Evaluations (5/9)
Trust Management and Adversary Detection
The strength of ITRM stems from the fact that it captures the correlations among the ratings when analyzing them and computing the reputations
The EigenTrust algorithm is constrained by the fact that the trustworthiness of a peer is equivalent to its reputation value
However, trusting a peer’s feedback and trusting a peer’s service quality are two different concepts since a malicious peer can attack the network protocol or the reputation management system independently.
Therefore, ITRM also performs better than the EigenTrust algorithm
Security Evaluations (6/9)
Trust Management and Adversary Detection
Mean Absolute Error (MAE)
Security Evaluations (7/9)
Trust Management and Adversary Detection
Availability
• Availability is the percentage of recovered messages at a given time
Four cases are compared:
1) When there is no defense against the malicious nodes and each malicious node has a packet drop rate of 1
2) When a detection level of 0.8 is used by ITRM (in which each judge node is supposed to identify and isolate all the Byzantine nodes whose packet drop rates are 0.8 or higher)
3) When complete detection is used by ITRM (in which all malicious nodes are supposed to be detected and isolated regardless of their packet drop rate)
4) When the Bayesian reputation management framework is used to detect the malicious nodes
Security Evaluations (8/9)
Trust Management and Adversary Detection
Availability
Security Evaluations (9/9)
Conclusion
A robust & efficient security mechanism is introduced for DTNs
The proposed security mechanism (ITRM) consists of a trust management mechanism and an iterative reputation management scheme
The trust management mechanism enables each network node to determine the trustworthiness of the other nodes
ITRM takes advantage of an iterative mechanism to detect and isolate the malicious nodes from the network in a short time
ITRM is far more effective than the Bayesian framework and EigenTrust in computing the reputation values
ITRM provides high data availability with low information latency by detecting and isolating the malicious nodes