
The Fable of the Bees: Incentivizing Robust Revocation Decision Making in Ad Hoc Networks

Steffen Reidt, Royal Holloway, University of London, [email protected]
Mudhakar Srivatsa, IBM T.J. Watson Research Center, [email protected]
Shane Balfe, Royal Holloway, University of London, [email protected]

ABSTRACT

In this paper we present a new key-revocation scheme for ad hoc network environments with the following characteristics:

• Distributed: Our scheme does not require a permanently available central authority.

• Active: Our scheme incentivizes rational (selfish but honest) nodes to revoke malicious nodes.

• Robust: Our scheme is resilient against large numbers of colluding malicious nodes (30% of the network for a detection error rate of 15%).

• Detection error tolerant: Revocation decisions fundamentally rely on intrusion detection systems (IDS). Our scheme is active for any meaningful IDS (IDS error rate < 0.5) and robust for an IDS error rate of up to 29%.

Several schemes in the literature have two of the above four characteristics (characteristic four is typically not explored). This work is the first to possess all four, making our revocation scheme well-suited for environments such as ad hoc networks, which are very dynamic, have significant bandwidth constraints, and where many nodes must operate under the continual threat of compromise.

Categories and Subject Descriptors

C.2.0 [General]: Security and protection

General Terms

Security

Keywords

Partially Available, Trust Authority, Revocation, Incentive, Game, Reward, Bees, Suicide

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CCS’09, November 9–13, 2009, Chicago, Illinois, USA.
Copyright 2009 ACM 978-1-60558-352-5/09/11 ...$10.00.

1. INTRODUCTION

Key revocation is the key management operation concerned with enforcing limitations on a key’s use [11]. In Mobile Ad hoc NETworks (MANETs) compromised nodes can divert and monitor traffic, influence quorum-based decisions or spread harmful information. To limit the damage caused by such nodes, agile revocation schemes that allow rapid impeachment of malicious nodes are vital for the security of the network.

In contrast to wired networks, revocation in MANETs must operate in a completely distributed fashion. The lack of a permanently available, global monitoring authority within a MANET, coupled with the need to quickly react to perceived threats, necessitates that resource-constrained nodes be granted the operational freedom to make risk-based key revocation decisions without direct contact with a central network authority. A node, or group of nodes, presented with possibly incomplete (and/or inaccurate) evidence of malicious behavior, must (based on probabilistic results of their intrusion detection systems) make a decision as to whether to instigate a key revocation operation against another node. Once a decision to revoke has been reached, a revocation message will be formulated and either distributed locally within a node’s neighborhood or flooded throughout the entire network.

To date, one of the most widely cited methods for achieving revocation in MANETs has been the use of quorum-based decision making using k-out-of-n threshold signatures [3, 9, 23]. In these schemes, nodes accuse other nodes of malicious behavior by casting negative votes against a perceived offender. Once a predetermined threshold k + 1 of negative votes is achieved, a signature can be reconstructed and the offending node will be considered revoked by other members of the network. Setting this threshold parameter high, whilst intuitively an astute security decision, may inadvertently result in a malicious node never being revoked (as the network density may not support the required level of collaboration). Setting it too low may result in a malicious adversary compromising a relatively small fraction of the total number of nodes and gaining control of the network by being able to revoke at will [3].

To avoid the shortcomings of quorum-based revocation, the concept of node suicide was recently introduced by Clulow et al. [5]. Motivated by the observation that many biological systems exhibit behavior in which individual members of a group are willing to sacrifice themselves to protect the collective (e.g. honeybees sting in response to a perceived threat against the hive), their scheme proposes that a single node can unilaterally revoke another node at the cost of being revoked itself. Unfortunately, for the type of heterogeneous, coalition networks envisaged in future military or emergency response scenarios [25], it may be unreasonable to assume that each node will value the network’s utility more than its own. Without sufficient incentive, selfish^1 (rational) nodes will always defer revocation responsibility to others. This in turn may result in malicious nodes never being revoked, as was shown in [23].

It is this observation that leads us to Mandeville’s (in)famous work, “The Fable of the Bees” [17]. It was Mandeville’s thesis that altruism does not truly exist in a society. Instead it is self-interest (and all its attendant vices) that dominates behavioral norms. However, through skillful and efficacious management of the individual’s selfish desires, public benefit may emerge. It is this view of network collaboration that we adopt in this paper.

In this paper, we present a new revocation scheme for ad hoc networks which we call karmic-suicide. Our scheme inherits the attractive properties of suicide-based revocation (immediacy and abuse resistance) but overcomes the disincentive to sacrifice utility for collective gain. To incentivize nodes to commit suicide, a periodically available Trust Authority^2 (TA) rewards a node for a justified suicide by reincarnating (reactivating) the node, thus rewarding it for its actions. If, however, our judgment system is unable to ascertain (to a satisfactory degree) whether a suicide was justified, it can reactivate both parties but give no reward, or engage in remedial action with one or more of the parties (possibly by resetting their configuration). To support this function, we develop a judgment mechanism that can be used by our TA to make probabilistically correct decisions by posthumously interrogating neighborhood nodes who witnessed (the events leading to) the suicide. Using a k-means clustering algorithm, we derive bounds on the difference in behavior needed to effectively partition malicious nodes from honest ones, and derive globally optimal strategies for colluding, malicious nodes attempting to abuse our judgment mechanism. In doing so, we highlight the interplay between TA-level judgments and node-level Intrusion Detection Systems (IDSs) and establish lower bounds on the required accuracy of our judgment system for which an adversary cannot abuse our scheme for its own benefit.

We show that our judgment system is secure (cannot be abused by an adversary) for node-level IDS error rates of 10%, 15%, 20% and 25%, if the ratio of malicious to honest nodes is at most 38%, 31%, 22% and 11%, respectively. We furthermore investigate the relationship between the IDS error rate and the density of the network in determining the agility (how quickly malicious nodes can be removed) of our revocation process using game-theoretic analysis. Our analysis shows how both smaller IDS errors and a greater network density yield an accelerated revocation process, resulting in a more resilient and reliable network free from undesirable nodes.

^1 Whilst nodes themselves are not capable of higher cognitive processing, we assume nodes are programmed to maximize their personal utility (or the utility of their group) over a set of constraints.
^2 The presence of a TA to assist in this type of a posteriori decision review is often ignored in the MANET revocation literature. This is in stark contrast to the large body of work that has emerged around the requirements of a key distribution authority within a MANET [32, 30, 16, 15].

This paper is organized as follows. In Section 2 we review related work in the area of revocation in MANETs. In Section 3 we present the requirements of our revocation scheme, outline our assumptions and present our adversary model. In Section 4 we give an overview of our scheme and derive bounds on the judgment mechanism’s accuracy. In Section 5 we present our judgment mechanism and investigate for which IDS error rates our lower bounds can be met. In Section 6 we use a game-theoretic analysis to show that our scheme provides selfish, but honest, nodes with sufficient incentive to revoke malicious nodes whilst disincentivizing colluding malicious nodes from abusing our scheme. We conclude in Section 7.

2. RELATED WORK

The process of arriving at a revocation decision is the primary focus of the majority of revocation schemes presented to date in the ad hoc networking literature [1, 9, 22, 6, 16, 13, 28, 20, 23, 30, 3, 14, 9, 7, 4, 27]. Assuming that a node has amassed sufficient (read: perfect) evidence, various approaches have been introduced that require differing amounts of participation from other nodes in the network. That is, revocation decision making may be the result of a collaborative, systemic or unilateral decision process.

In collaborative schemes, nodes accuse other nodes of misbehaving by casting negative votes against them. If a predetermined threshold of negative votes is cast, then the offending node is considered revoked. By contrast, systemic revocation decision making has been proposed for use in Identity-based Public Key Infrastructures (ID-PKIs) for ad hoc networks [9, 18, 31]. As part of an ID-PKI, a validity period can be expressed in deriving a node’s identifier. Consequently, a node’s identity will only be valid for a pre-determined period as specified by a Trust Authority (TA) with administrative responsibility for the network. Once a node’s identifier expires, the node must contact its (possibly distributed) TA and request a new private key, with a new expiry time. The TA in turn can decide whether to issue new keys during this re-enrollment process. In systemic decision making the frequency of renewal (the longevity of an expiry period) is an important parameter: the higher the turnover, the less impact a compromised key may have on the network, but the greater the effort that must be expended on key issuance procedures. This approach requires an on-line TA and may significantly increase traffic if refreshing is frequent [16].

The concept of unilateral decision making as a method of revocation was first introduced by Rivest in dealing with key compromise [24] in Public Key Infrastructures (PKIs). A user, upon detecting that their key has been exposed, declares their key invalid by issuing a signed message using the compromised key (indicating that this key is no longer to be trusted). This notion of suicide has recently been extended for use in ad hoc networks, where a node, upon detecting some malicious behavior, can instigate a “suicide-bombing” on a (perceived) malicious node [5, 27, 20, 23]. A node commits suicide by broadcasting a signed instruction to revoke both its own key and the key of the misbehaving node. Suicide as a method of revocation in ad hoc networks has a number of attractive features when compared with collaborative and systemic decision making. With suicide, nodes can react immediately to a perceived threat. Additionally, suicide, as a method of revocation, is resistant to abuse due to the high cost associated with revoking another node.


The suicide schemes above rely upon a node valuing the network’s utility more than its own utility as an incentive to suicide. However, Raya et al. [23] recently showed that under such assumptions, nodes tended to defer suicide to another node rather than revoke a malicious node themselves. To induce a node to suicide, Raya et al. introduce a “social cost” as a means of incentivizing suicide. However, Raya et al. give no firm definition of what this social cost is or how it would be introduced, nor do they provide an adversary model against which their scheme can be shown resilient. It is our thesis that a “social cost” cannot provide enough incentive for a rational node to commit suicide. To this end, we introduce the notion of karmic-suicide, where we, in contrast to all other revocation schemes, incorporate both false positives and false negatives in the underlying node-level intrusion detection mechanisms. This in turn impacts the ability to effectively reward/punish nodes, and we show that our scheme: 1) gives honest nodes sufficient incentive to revoke malicious nodes and 2) is resilient to abuse by large numbers of colluding malicious nodes.

3. PROBLEM DEFINITION

In this section we outline the design requirements that our karmic-suicide scheme must satisfy, describe the assumptions that we make about our scheme and outline our adversary model.

3.1 Design Requirements

Distributed: We require that a network authority responsible for the administration of the network is only periodically available and consequently incapable of monitoring the operational minutiae of the network.

Active: We require a revocation mechanism that encourages selfish nodes to commit suicide for the good of the network. That is, nodes should be rewarded for accepting a short-term loss in utility in favor of longer-term gains.

Robustness: We require a scheme that is robust against a large number of cooperating, malicious nodes.

Detection Error Tolerant: We require a scheme that is capable of handling realistic errors in the underlying node-level IDS in detecting malicious behavior.

Agility: We require a scheme that is capable of reacting quickly to node misbehavior.

Scalability: We require a scheme that works irrespective of the size and density of the network.

3.2 Assumptions

Node Identifiers: We assume that nodes can have several identifiers with corresponding private keys (this could be the result of either using a public key infrastructure with one or more public key certificates, or using an identity-based key infrastructure with one or more identifier/private key pairs, per node). Should one of these identifiers be revoked, all benefits that have been accrued by this identifier over its lifetime will be lost. Consequently, if the identifier of a node is revoked, and it has one or more redundant identifiers, it can still operate but must start building reputation for this identifier anew. By default all nodes have a single identifier.

Selfishness: We assume that all honest nodes behave selfishly and want to maximize their own utility over some (in)finite horizon.

Network Longevity: We assume that the network will be in operation for a non-negligible period of time.

Intrusion Detection: We assume that individual nodes run intrusion detection systems that provide a node with an indication of malicious behavior on the part of other nodes. This assumption is implicit in all ad hoc network revocation schemes to date.

Network Infrastructure: We assume the presence of a network infrastructure that enables nodes to sign messages and to send authenticated messages.

3.3 Adversary Model

Key revocation protocols are carried out in the presence of active adversaries. The goal of the adversary in our model is to maximize his overall influence during the lifetime of the network. The influence of our adversary is measured by the sum of the total time periods during which each malicious node attacks the network. The malicious nodes will therefore try to maximize their attack intensity while considering the risk of being detected and revoked as a result of their negative behavior. The adversary can improve his position by obtaining more identifiers with corresponding private keys. Thus, the adversary will attempt to abuse the revocation scheme to obtain more identifiers. Our adversary will pool the resources of his nodes, possibly creating wormholes between nodes in an attempt to overwhelm any quorum-based mechanisms, including those that attempt to derive the threshold as a fraction of the number of nodes in their neighborhood [1]. We assume that nodes in our network may be compromised at any time during the network’s operation but that the adversary always has less than 50 percent of the total number of identifiers (with corresponding private keys) available in the network.

4. REVOCATION BY KARMIC-SUICIDE

To overcome the barrier of selfishness in suicide-based revocation schemes, we propose a scheme motivated by a macabre real-life observation: a belief in an “afterlife” can be an incentive to sacrifice oneself if there is a sufficient promise of reward. To this end we present our karmic-suicide scheme.

4.1 Protocol and Parameter Specification

Prior to engaging in karmic-suicide, nodes monitor their neighborhoods using an IDS to collect evidence of malicious behavior. What constitutes malicious behavior has evolved significantly over the years, from originally dealing exclusively with non-forwarding of packets or wormhole attacks [10], to recent application-layer considerations such as disseminating fraudulent information [20]. However, irrespective of what behavior the IDS is configured to monitor for, it can only provide probabilistic results about another node’s (mis)behavior. Many revocation schemes found in the literature to date make (incorrect) simplifying assumptions that the detection of undesirable behavior is perfect and/or that malicious nodes are uncoordinated [20]. As part of our scheme we assume all malicious nodes collude, and we incorporate the (in)accuracy of the IDS in our analysis.

We assume that the output of the IDS of node i about a node j is a normalized score 0 ≤ idsij ≤ 1, where 0.5 is a neutral score and a higher score indicates honesty. Each node derives an opinion oij from this score. For the sake of simplicity of our analysis, we assume that oij ∈ {−1, 0, 1}, where 1 denotes a highly positive opinion, −1 a highly negative opinion and 0 no or a neutral opinion. To this end, we set oij = −1 if idsij < 0.5, oij = 0 if idsij = 0.5 and oij = 1 if idsij > 0.5. The probabilities of the IDS for a false positive and a false negative are ep := Pr(oij = 1 | j is bad) and en := Pr(oij = −1 | j is good), respectively. For our analysis of the judgment system in Section 5 we combine false positives and negatives into the (average) IDS error rate e := Pr(oij = −1 | j is good) · Pr(j is good) + Pr(oij = 1 | j is bad) · Pr(j is bad). In our game-theoretic analysis in Section 6 only en will be relevant. We model the network as running in rounds r, i.e. each round r represents a fixed time interval of the network lifetime. As nodes collect more evidence about other nodes from round to round, the confidence in the accuracy of the IDS increases, i.e. e decreases; e can be modeled as a function e(r) that is monotonically decreasing. We note that the increasing confidence in the accuracy of the IDS might not be justified if a node is surrounded by malicious nodes that spoof the evidence about their own or other nodes’ behaviour. Such situations account for the probability that nodes or the judgment system make a wrong decision. However, assuming a global majority of benign nodes in the network, our analysis shows that our scheme works in favour of the good nodes. We denote the number of identifiers (and corresponding private keys) held by the good nodes by n, and by the malicious nodes by m.
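The score-to-opinion mapping and the averaged error rate defined above can be sketched in a few lines (an illustrative sketch; the function and parameter names are ours, not the paper’s):

```python
def opinion(ids_score):
    """Map a normalized IDS score ids_ij in [0, 1] to an opinion
    o_ij in {-1, 0, 1}: below the neutral score 0.5 the opinion is
    highly negative, above it highly positive."""
    if ids_score < 0.5:
        return -1
    if ids_score > 0.5:
        return 1
    return 0

def avg_error_rate(e_n, e_p, frac_good):
    """Average IDS error rate e, combining the false-negative rate
    e_n = Pr(o_ij = -1 | j is good) and the false-positive rate
    e_p = Pr(o_ij = 1 | j is bad), weighted by the fraction of
    good nodes in the network."""
    return e_n * frac_good + e_p * (1.0 - frac_good)
```

For example, with e_n = e_p = 0.15 the average error rate is 0.15 regardless of the mix of good and bad nodes.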

In response to the detection of malicious behavior, a node i issues (and signs) a suicide note when it judges its amassed evidence to be sufficient to revoke another node j. This note represents an instruction to the network to rescind any privileges associated with the keys of both parties (the revocation instigator and the revocation target). Once this note is issued, the node waits for a TA to become available (either via a back-link to an off-site location, or the TA physically enters the communication range of a suiciding node) and forwards the suicide note to the TA. Once received, the TA can request further opinion values about i and j, randomly choosing nodes in i’s and j’s neighborhood as witnesses^3.

From all collected opinions, the TA needs to assess the potentially incomplete and contradictory evidence and make a ruling. The more nodes’ opinions are collected, the higher the probability that the TA makes the right judgment. In case the TA remains uncertain about the justifiability of the suicide, it can annul the suicide without giving any reward or punishment. Alternatively, the TA could potentially engage in remedial action with the node(s) and reset their software via an update if the TA believes the suicide to be the result of misconfiguration. We do not, however, consider this issue of reconfiguration further in this paper.
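As a toy illustration of this ruling step, the following sketch averages witness opinions against a threshold (our simplification: the paper’s actual mechanism in Section 5 uses k-means clustering, and the threshold theta here is an arbitrary choice of ours):

```python
def judge(witness_opinions, theta=0.3):
    """Rule on a revoked node from witness opinions in [-1, 1].
    Returns 'good', 'bad' or 'undecided'; an empty witness set or a
    near-neutral average yields 'undecided' (suicide annulled)."""
    if not witness_opinions:
        return "undecided"
    avg = sum(witness_opinions) / len(witness_opinions)
    if avg > theta:
        return "good"
    if avg < -theta:
        return "bad"
    return "undecided"
```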

If the TA ultimately deems the suicide justifiable, the TA rewards the revoking node. Table 1 shows the probability parameters for the TA’s possible decisions, where pt, pa, pf, qt, qa, qf ≥ 0, pt + pa + pf = 1 and qt + qa + qf = 1.

Table 1: Decision probabilities.

                     TA's judgment:
  Reality:       good    undecided   bad
  good           pt      pa          pf
  bad            qf      qa          qt

^3 The use of random witnesses has previously been studied in limiting node replication attacks in ad hoc networks [21]. The use of randomization here probabilistically limits the potential for adversarial collusion.

4.2 Costs and Benefits

We represent the cost for a node to engage in karmic-suicide (either as the revoker or the revoked) as the loss of a single key for its identifier, i.e. a cost of 1. We set the benefit for resurrection to 1 + b, 0 < b ≤ 1, i.e. a node gets a higher bonus for resurrection than it pays for revoking a node by suicide. b is a scalable parameter that allows us to control the agility of the revocation scheme in the network. Obviously, b should be chosen greater than 0 to give honest nodes an incentive to revoke. However, to prevent malicious nodes from abusing our revocation scheme to improve their overall position by revoking other malicious nodes, we place an upper bound of 1 on b. Note that, unlike in real life, there is no suicide bombing in which several good nodes are “killed”; moreover, a suicide in our network has no further “social” consequences such as deterrence. As a convenient reward for a justified suicide in our scheme, a node is given an additional identifier with a corresponding private key (giving a value of b = 1). For the purposes of our analysis, we leave b as a scalable parameter, as a different reward might be used to implement our revocation scheme in different networks.

The expected benefits for a node depend on the correct-ness of our TA’s judgment ability. An honest might revoke amalicious node, but the amassed corroborating opinions fromthe neighboring nodes brings the TA to the opposite decision.Likewise, an honest node might revoke another honest as a re-sult of a false positive in its IDS, but mistakenly be rewardedby the TA. We summarize the expected costs and benefits toprofit (= benefit − cost). Based on the probabilities for theTA’s accuracy given in Table 1, we show the expected profitsfor all possible strategies of honest and malicious nodes inTable 2. If an honest or malicious node do not revoke, theprofits on both sides are 0. If an honest node mistakenly re-vokes another honest node, then, if the TA makes the rightdecision (with prob. pt), the honest node suffers a cost of 1 asit will not be resurrected. However, with a probability of pf

it will be mistakenly rewarded and obtain an overall profit ofb (1 + b reward and 1 cost for the suicide). The attacker willbenefit from this event as there is one less honest node in thenetwork. If an honest node revokes a malicious node, the costis 1, but the benefit from the resurrection is 1 + b, resultingin a profit of b. The node obtains this benefit b with a proba-bility of qt, but gets no benefit with a probability of qf if theTA makes the wrong decision. The ratio of malicious nodesis degraded and thus the profit of the adversary is < 0. If amalicious node revokes an honest node, the honest will retainits key and have no cost if the TA makes the right decision.If the TA, however, decides that revoking the honest nodewas justified (with prob. pf ), then the good node looses itskey and has a profit of −1. If the TA rewards the maliciousnode for revoking the honest node by suicide (with prob. pf )it brings a benefit of b to the adversary and a cost of 1 tothe good node. Counting the malicious node’s global profitas the ratio of malicious to honest nodes’ identifiers, this willincrease their expected ratio by (m+b

n−1− m

n)4 If the TA, how-

ever, makes the correct decision (with prob. pt), the revokedhonest node gets resurrected and the malicious node obtainsno benefit. This decreases the ratio of malicious nodes by

4You can think of the ratio of honest to malicious nodes asthe ratio of keys held by the honest and malicious nodes.).If the reward is a single identifier/key, then b = 1 and theincrease is (m+1

n−1− m

n).

294

Page 5: The Fable of the Bees: Incentivizing Robust Revocation ...amnesia.gtisc.gatech.edu/~moyix/CCS_09/docs/p291.pdfThe Fable of the Bees: Incentivizing Robust Revocation Decision Making

Table 2: Profit for revocation events. These profits represent an honest node’s local view of the network andthe adversary’s global view, respectively. From a global view, we model the malicious node’s strength in thenetwork as the ratio of identifiers and corresponding private keys held by malicious nodes and good nodes:

m/n. (x!> 0 means that x is supposed to be greater than 0.)

Event profit honest node profit adversary

honest node does nothing 0 0

honest node revokes honest node 1) bpf − pt

!< 0 > 0

honest node revokes malicious node 2) bqt − qf

!> 0 < 0

malicious node does nothing 0 0

malicious node revokes honest node −pf 3)“

m+bn−1

− mn

”pf − `

mn− m−1

n

´pt

!< 0

malicious node revokes malicious node 0 4) < 0

(m/n − (m − 1)/n). If a malicious node revokes another malicious node and the TA makes the right decision (with prob. qt), the adversary loses two keys at a cost of 2, but gains a reward of 1 + b. If the malicious node is not rewarded, as a consequence of an incorrect decision of the TA (with prob. qf), the revoked node gets its identifier back, but the malicious node obtains no reward. The cost for the adversary in this case is 1. In both cases the profit for the adversary is negative.
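To make this bookkeeping concrete, the following sketch (our construction, not an artifact of the paper) evaluates the adversary's expected global profit for the two malicious-node actions from Table 2; the probability values passed in are purely illustrative.

```python
# Sketch: expected adversary profits from Table 2, using the TA decision
# probabilities pt, pf, qt, qf and the reward value b. Function names are ours.

def adversary_profit_revoke_honest(m, n, b, pt, pf):
    """Global adversary profit when a malicious node revokes an honest node."""
    # Wrong TA decision (prob. pf): the adversary spends one key, gains 1 + b,
    # and the network loses an honest key -> ratio shifts by (m+b)/(n-1) - m/n.
    gain = ((m + b) / (n - 1) - m / n) * pf
    # Correct TA decision (prob. pt): only the revoking malicious key is lost.
    loss = (m / n - (m - 1) / n) * pt
    return gain - loss

def adversary_profit_revoke_malicious(b, qt, qf):
    """Global adversary profit when a malicious node revokes another malicious node."""
    # Correct decision (prob. qt): two keys spent, 1 + b rewarded -> net b - 1.
    # Wrong decision (prob. qf): one key spent, nothing rewarded -> net -1.
    return qt * (b - 1) + qf * (-1)

# With b = 1, revoking a fellow malicious node is always a net loss:
print(adversary_profit_revoke_malicious(b=1, qt=0.6, qf=0.2))  # -0.2
```

With illustrative values pt = 0.7, pf = 0.2 for m = 30, n = 100, the first function is negative as well, matching row 3) of Table 2.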

4.3 Requirements for the Judgment System

In Table 2 we have listed the expected costs for the possible actions of malicious and honest nodes, which depend on the average accuracy of the TA's judgment facility as listed in Table 1. For our scheme to work, 1) there should be no incentive for honest but selfish nodes to revoke other honest nodes for their own benefit, 2) honest nodes should have an incentive to revoke malicious nodes, 3) there should be no incentive for malicious nodes to revoke honest nodes, and 4) the attacker should not be able to improve its overall position by letting malicious nodes revoke each other. These sufficient conditions are met if the requirements in Table 3 for the minimum accuracy of our judgment system are satisfied. The proof can be found in Appendix A; for b = 1, the conditions are met if, and only if, these requirements hold.

Table 3: Requirements for the judgment system.

condition (Tab. 2) | derived requirement
2) | (A) qt > (1/b)·qf
3) (includes 1)) | (B) pt > (n/(n − 1))·(1 + m/n)·pf
4) | no requirement

Consequently, the resulting requirements for our TA's accuracy are that qt > (1/b)·qf (to give honest nodes positive incentive to revoke) and pt > (n/(n − 1))·(1 + m/n)·pf (to prevent malicious nodes from abusing the scheme for their own benefit).
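These two accuracy requirements are easy to check mechanically. Below is a small sketch (ours, not from the paper) that tests conditions (A) and (B) for given TA decision probabilities; the sample values pt = 0.61, pf = 0.31 are those reported for the worst-case ratio m/n = 1 in Figure 4, applied here at the illustrative ratio m/n = 0.9.

```python
# Sketch: checking the Table 3 accuracy requirements. Function names are ours.

def meets_requirement_A(qt, qf, b=1):
    """(A): honest nodes have positive incentive to revoke, i.e. qt > (1/b)*qf."""
    return b * qt > qf

def meets_requirement_B(pt, pf, m, n):
    """(B): malicious nodes cannot profit by revoking honest nodes."""
    return pt > (n / (n - 1)) * (1 + m / n) * pf

print(meets_requirement_A(qt=0.61, qf=0.31))               # True
print(meets_requirement_B(pt=0.61, pf=0.31, m=90, n=100))  # True
```

As the paper notes, requirement (B) with these probabilities holds only up to roughly m/n = 1; increasing m beyond n makes the check fail.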

5. THE TA'S JUDGMENT MECHANISM

In this section we introduce our judgment mechanism, which combines opinions from multiple nodes and labels a revoked node j as good, bad or undecided. The judgment system takes as input a node × node matrix o, where −1 ≤ oij ≤ 1 denotes the opinion of node i on node j as defined in Section 4.1. In this section we first assume that all the opinions are error free (namely, the IDS system is perfect). We will then extend our analysis to accommodate false positives and false negatives in the IDS system later in this section.

5.1 Judgment System Setup

The judgment system uses an off-the-shelf k-means clustering algorithm [12] to partition the set of nodes into two clusters G and B (the good and the bad, respectively⁵) and an (N − 1)-dimensional hyper-plane P that maximally separates the two clusters. Each node i has an opinion vector (oi1, . . . , oi(n+m)) about nodes 1, . . . , n + m, and the opinions about a node j are associated with the vector oj = (o1j, . . . , o(n+m)j). Initially nodes have no opinions about other nodes, but during the lifetime of the MANET nodes will partially or even completely fill their opinion vector with non-zero entries. We define the difference in opinions about nodes i and j to be Σ_{k=1}^{n+m} |oki − okj|. The key intuition here is that the good nodes will typically be close to one another in this vector space and will thus be clustered together, while the bad nodes will typically be far from the cluster of good nodes. The judgment system draws conclusions on a node j based on a system-defined threshold thr as follows. We use d(oj, P) to denote the distance of vector oj from the plane P.

decision =
  good        if j ∈ G ∧ d(oj, P) > thr
  bad         if j ∈ B ∧ d(oj, P) > thr
  undecided   if d(oj, P) ≤ thr
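The following toy sketch illustrates the mechanism under the error-free assumption this section starts with. It is our construction, not the paper's code: parameters (n_good, n_bad, thr) are chosen for illustration, the clustering is a plain Lloyd's 2-means, and bad observers already use the β* = 1/2 masking strategy derived later in this section.

```python
import random

# Toy sketch of the judgment mechanism: cluster the opinion column vectors
# o_j with 2-means, call the larger cluster 'good', and report 'undecided'
# inside a threshold band around the separating hyperplane.
random.seed(0)
n_good, n_bad = 20, 5
N = n_good + n_bad

# o[i][j] = opinion of node i about node j. Good observers are error free
# here; bad observers answer uniformly from {-1, +1} (beta* = 1/2 masking).
o = [[(-1.0 if j >= n_good else 1.0) if i < n_good else random.choice([-1.0, 1.0])
      for j in range(N)] for i in range(N)]
X = [[o[i][j] for i in range(N)] for j in range(N)]  # row j = vector o_j

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Lloyd's 2-means, seeded with the most distant pair of opinion vectors
i0, j0 = max(((i, j) for i in range(N) for j in range(N)),
             key=lambda p: dist2(X[p[0]], X[p[1]]))
centers = [X[i0][:], X[j0][:]]
for _ in range(20):
    labels = [min((0, 1), key=lambda k: dist2(x, centers[k])) for x in X]
    for k in (0, 1):
        members = [x for x, l in zip(X, labels) if l == k]
        if members:
            centers[k] = [sum(col) / len(members) for col in zip(*members)]

good = max((0, 1), key=labels.count)  # the larger cluster is 'good'
w = [a - b for a, b in zip(centers[good], centers[1 - good])]
mid = [(a + b) / 2 for a, b in zip(centers[0], centers[1])]
norm = sum(c * c for c in w) ** 0.5
thr = 1.0

def decide(x):
    d = sum((xi - mi) * wi for xi, mi, wi in zip(x, mid, w)) / norm
    return "undecided" if abs(d) <= thr else ("good" if d > 0 else "bad")

print([decide(X[j]) for j in range(n_good, N)])  # -> ['bad', 'bad', 'bad', 'bad', 'bad']
```

Because good observers agree exactly about every node, the good opinion columns form a tight cluster and all five bad nodes land decisively on the 'bad' side of the plane.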

We assume that each bad node misbehaves with a ratio of α, uniformly and randomly distributed over the life-time of the network. The bad nodes strategically choose α in order to maximize their expected long-term profit. We note that the higher the attack intensity α, the higher is qt (see Table 1), and thus the bad node's risk of being revoked. The probability qt for correctly categorising a node as bad can therefore be interpreted as a function qt(α). The probability that a bad node is not categorized as bad by the judgment system after r rounds is (1 − qt(α))^r. At each round r, the profit for a bad node is assumed to be proportional to its attack intensity α, but degrades over time such that the expected profit gained at round r is α · λ^{r−1} · (1 − qt(α))^{r−1}, where 0 < λ < 1 is a discount factor that provides a practically finite time horizon, since lim_{r→∞} λ^r = 0. A discount-factor-based model is a common approximation for finite horizon problems in decision theory

⁵Assuming that the number of good nodes is larger than the number of bad nodes, the larger of the two clusters is considered 'good'.



Figure 1: Reward and attack intensity.
Figure 2: Optimal attack intensity.
Figure 3: Optimal expected reward.
Figure 4: Good node decision probabilities.
Figure 5: Bad node decision probabilities.

[29] and has been used in security domains to model the utility for denial of service attacks [19]. Hence, the total profit of a node over the network life-time is:

R = Σ_{r=1}^{∞} α · λ^{r−1} · (1 − qt(α))^{r−1} = α / (1 − λ·(1 − qt(α)))

We note that at low attack intensity, the bad nodes behave similarly to good nodes; hence, the smaller the attack intensity α, the harder it becomes to distinguish the bad nodes from the good nodes. Fortunately, as the attack intensity tends to zero, so does the expected profit R.
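The closed form above is just the geometric series summed out; the following sketch (our check, with arbitrary illustrative parameters) confirms the two agree numerically.

```python
# Numerical check that the infinite-horizon profit matches the closed form
# R = alpha / (1 - lambda * (1 - qt)). Parameter values are illustrative.

def profit_closed_form(alpha, lam, qt):
    return alpha / (1 - lam * (1 - qt))

def profit_series(alpha, lam, qt, rounds=10_000):
    # Truncated sum of alpha * lam^(r-1) * (1 - qt)^(r-1) over r = 1, 2, ...
    return sum(alpha * lam ** (r - 1) * (1 - qt) ** (r - 1)
               for r in range(1, rounds + 1))

alpha, lam, qt = 0.3, 0.9, 0.2
print(abs(profit_series(alpha, lam, qt) - profit_closed_form(alpha, lam, qt)) < 1e-9)  # True
```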

We assume that the good nodes report honest opinions about other nodes in the network. However, the bad nodes manipulate their opinions with the goal of making their opinion vector similar to that of the good nodes. Formally, we postulate that the goal of the bad nodes is to minimize the difference between the average distance between a bad node and a good node and the average distance between good nodes. One can show that the optimal strategy for a bad node j is to choose oij uniformly and randomly from the set {−1, 1}. Let us suppose that a bad node picks oij = 1 with probability β, and −1 with probability 1 − β. Under this strategy, the average distance between any two good nodes is 4β(1 − β)m and the average distance between a bad node and a good node is 2α·n + 4β(1 − β)m. Now, the optimal choice of β should minimize the ratio (2α·n + 4β(1 − β)m) / (4β(1 − β)m). Equivalently, the goal is to maximize β(1 − β); hence β* = 1/2.
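A quick numerical sketch (ours; α, n, m values are illustrative) confirms that the masking ratio is indeed minimized at β = 1/2.

```python
# Sketch: the ratio (2*alpha*n + 4*beta*(1-beta)*m) / (4*beta*(1-beta)*m)
# is minimized where beta*(1-beta) is maximized, i.e. at beta* = 1/2.

def masking_ratio(beta, alpha=0.3, n=100, m=30):
    d_gb = 2 * alpha * n + 4 * beta * (1 - beta) * m  # bad-to-good distance
    d_gg = 4 * beta * (1 - beta) * m                  # good-to-good distance
    return d_gb / d_gg

betas = [0.1 * k for k in range(1, 10)]
best = min(betas, key=masking_ratio)
print(best)  # 0.5
```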

Under the optimal setting β*, the average distance between two good nodes is 2α·n and that between a good and a bad node is 2α·n + 2·m. Evidently, the clustering algorithm may be ineffective in partitioning the good and the bad nodes into separate clusters if α·n/m is very small. However, for α·n/m to be small, either the attack intensity α must be small (thus decreasing the expected profit R) or the ratio m/n must be large.

5.2 Analysis Based on Perfect IDS

From the analysis shown in Figures 1 to 5, it is evident that the optimal attack intensity α* depends on the ratio m/n. Figure 1 shows the expected profit R as a function of attack intensity α for a fixed ratio m/n = 0.5. The figure shows that the detection probability qt increases monotonically with the attack intensity α. The expected profit R initially increases, reaching an optimum R* at some optimal attack intensity α*; the subsequent decrease in expected profit is attributed to a higher detection probability (qt) that reduces the expected lifetime (time to revoke) of a bad node.

Figures 2 and 3 show the optimal attack intensity and expected profit as we vary the ratio m/n. As the ratio m/n increases, the bad nodes can afford to increase their attack intensity and yet tweak the opinion matrix in a way that makes it harder for the judgment system to distinguish the bad nodes from the good nodes. Consequently, the overall expected profit shown in Figure 3 increases dramatically with the ratio m/n; not only does the absolute number of bad nodes increase with the ratio, but each bad node also uses a higher attack intensity in order to maximize its expected profit.

Figures 4 and 5 show the optimal decision probabilities p*_t, p*_f, p*_a, q*_t, q*_f and q*_a achieved under optimal attack intensity for a varying ratio m/n. We note that the numbers shown in Figures 4 and 5 have been obtained from running our classification algorithm assuming that the bad nodes have strategically (optimally) chosen their opinions. We also note that the decision probabilities shown in these figures are made at steady state; for the judgment system to make meaningful decisions the opinion matrix must hold sufficiently dense positive and negative opinions. Indeed, at time t = 0 all opinion values oij are 0 and the judgment system will output undecided.

Figures 4 and 5 show that as the ratio m/n increases, the probability of misclassifying both the good and the bad nodes increases. Recall the constraints on decision probabilities derived in Section 4: pt > (n/(n − 1))·(1 + m/n)·pf is required to prevent an attacker from abusing the scheme to its own benefit, and qt > (1/b)·qf (with b = 1) is required to provide positive incentive for honest nodes to revoke. Figure 4 shows that for a worst-case ratio of m/n = 1, pt = 0.61 and pf = 0.31, such that



Figure 6: Maximum tolerable ratio of m/n to keep the adversary from abusing the scheme.
Figure 7: Good node decision probabilities with IDS error rate of 0.15.
Figure 8: Bad node decision probabilities with IDS error rate of 0.15.

the first requirement can be satisfied by the judgment system nearly exactly up to the ratio m/n = 1. As Figure 5 shows, the second constraint is satisfied even for m < 1.5n. We remark that in general the constraint m < n applies to any unsupervised (unbiased) judgment system. Hence, both requirements, to give enough incentive and to avoid abuse of the scheme, are satisfied by the judgment system for m < n, assuming that the IDS works without errors.

5.3 Analysis Considering an IDS Error Rate

We have so far assumed that the opinions of good nodes are error free. Let us suppose that the average error probability in the IDS system is e, as introduced in Section 4.1. Using the same arguments as above, one can show that the average distance between two good nodes is 4n·e(1 − e) + 2m and that between a good node and a bad node is 2n·(e(1 − α) + α(1 − e)) + 2m. We recognize that for the judgment system to be effective we need the average distance between a good node and a bad node to be larger than that between two good nodes: 2n·(e(1 − α) + α(1 − e)) > 4n·e(1 − e); equivalently, we need (α − e)(1 − 2e) > 0 ⇒ α > e (in any meaningful IDS system e < 1/2). Hence, if the attack intensity α is smaller than the IDS error rate e, then the attack is virtually undetectable. Setting α = e keeps the bad nodes largely undetectable (namely, qt(α) = 0), resulting in an optimal long-term profit of R = e/(1 − λ).

Since e is small, the profit is significantly limited.

For any optimal decision probability r* ∈ {p*_t, p*_a, p*_f, q*_f, q*_a, q*_t}, let r*(m, n, e) denote the decision probability with m bad nodes, n good nodes and an IDS with error probability e. Then r*(m, n, e) = max{r*(m + 2n·e(1 − e), n·(1 − 2e), 0), e/(1 − λ)}. The judgment system satisfies the constraint pt > (n/(n − 1))·(1 + m/n)·pf if m + 2n·e(1 − e) < n·(1 − 2e), that is, m < n·(1 − 4e + 2e²); equivalently, in order to tolerate m > 0 bad nodes we need e < 1 − 1/√2 ≈ 0.293. The scheme therefore remains secure against abuse by the attacker (malicious nodes profiting by revoking good nodes) as long as m/n < 1 − 4e + 2e².

Figure 6 shows how the tolerable ratio of malicious nodesdecreases with a larger IDS error rate.

Figures 7 and 8 show the good nodes' and the bad nodes' decision probabilities assuming an IDS error probability of 0.15. We note that setting e = 0.15 limits the maximum number of bad nodes to m < 0.445n, which corresponds to a ratio of roughly 30% malicious nodes.
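The tolerable-ratio bound is simple enough to evaluate directly; the sketch below (ours) reproduces the m < 0.445n figure for e = 0.15 and the e ≈ 0.293 cut-off where the bound reaches zero.

```python
import math

# Sketch: the tolerable fraction of bad nodes, m/n < 1 - 4e + 2e^2,
# as a function of the IDS error rate e.

def tolerable_ratio(e):
    return 1 - 4 * e + 2 * e ** 2

print(round(tolerable_ratio(0.15), 3))                    # 0.445 -> m < 0.445 n
print(round(1 - 1 / math.sqrt(2), 3))                     # 0.293
print(abs(tolerable_ratio(1 - 1 / math.sqrt(2))) < 1e-9)  # True: bound hits 0 here
```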

5.4 Meeting the Requirements

In Table 3 we have formulated the requirements for the judgment system's accuracy to ensure that our karmic-suicide scheme (A) gives positive incentive to honest nodes to revoke and (B) prevents the adversary from abusing the scheme. As our analysis shows, for an IDS without errors, our judgment system meets both requirements for m < n, i.e. for a ratio of up to 50% malicious nodes. For an IDS with error rate e, the judgment system still meets requirement (A), qt > qf (b = 1), for m < n (also illustrated in Figures 5 and 8). Requirement (B), pt > (n/(n − 1))·(1 + m/n)·pf, is met as long as m/n < 1 − 4e + 2e² (Figure 6).

6. THE REVOCATION GAME

In Section 4 we have defined the cost and benefit parameters of our karmic-suicide scheme and derived the minimum requirements for the judgment system's accuracy. Section 5 has shown that our judgment system (for a range of settings) is sufficiently accurate to prevent abuse by large numbers of malicious nodes whilst allowing honest nodes the possibility to revoke. In this section we look to answer the question of whether our incentive for honest nodes to revoke is sufficient and, if so, how quickly honest nodes will revoke malicious nodes. We take a game-theoretic approach (using a descending price auction) and show that our scheme provides rational (honest but selfish) nodes with an incentive to suicide. Our results show that even for a small network density, honest nodes begin to revoke when their internal IDS error probability, e, falls below 25%.

6.1 Design of the Game

From an honest node's perspective, every other honest node is in direct competition for the reward. Unlike malicious nodes, honest nodes do not know which nodes are malicious and instead must rely on their intrusion detection system to determine node affiliation. A node must collect enough evidence of malicious behavior to issue a suicide note; in doing so, there is a tension between waiting too long (and missing the opportunity to make a profit) and making an incorrect assessment (and making a loss). As each malicious node can only be revoked once, waiting too long to collect more evidence and decrease the chance of a false positive might result in another node revoking the malicious node first. Even when an honest node has correctly revoked a malicious node, there is still the risk that the TA makes a wrong decision and judges that the malicious node is honest, with probability pf (see Table 1). Conversely, an honest node can also be fortunate and be rewarded for an unjustified suicide, with probability qf.

We design our karmic-suicide game as a game between N honest nodes competing for the benefit of revoking one malicious node. In a MANET, several of these games can be played in parallel, and honest nodes may join games at different times. To allow a clear analysis of our game, we investigate a single game where all honest nodes join the game at the same time.

Let P1, P2, . . . , PN, N ≤ n, be the players, i.e. the honest nodes that are in the position to revoke a malicious node. We do not consider the malicious node as a player in the game as it cannot take any action. Furthermore, let 0 ≤ 1 − eni ≤ 1 be the probability that the IDS of player Pi correctly identifies the malicious node, as defined in Section 4.1. Note that for our game we are only interested in false negatives of the IDS, since nodes need to decide whether an alert for maliciousness is correct or not. Then the expected benefit Ei ∈ [−1, b] of node Pi for revoking the suspicious node is:

Ei = b·(1 − eni)·qt − eni·pt + eni·qf − b·(1 − eni)·pf
   = b·(1 − eni)·(qt − pf) − eni·(pt − qf)
   = b·(1 − eni)·(qt − pf) − eni·(qt − pf) − eni·(pt − qf + pf − qt)
   = ((1 + b)·(1 − eni) − 1)·(qt − pf) − eni·(qa − pa)
   ≈ ((1 + b)·(1 − eni) − 1)·(qt − pf),   with eni < 0.5, qa − pa ≈ 0 and qt − pf =: c ∈ [0.3, 0.8]

A crucial simplification in this derivation is to set qa − pa = 0. While qa − pa ≈ 0 for all ratios 0.1 ≤ m/n ≤ 1 (see Figures 4, 5 and 7, 8), it also holds for all ratios but m/n = 0.1 that qa − pa ≤ 0. Consequently, setting qa − pa = 0 slightly decreases Ei and gives us a lower bound on Ei, and therefore also a lower bound on the agility of our scheme in the analysis at the end of the section. Recall that, as per Figures 4, 5 and 7, 8, qt − pf is in the range [0.3, 0.8]. Since the reward in our revocation scheme is an additional key, the value for b is 1, and an honest node has an expectation value of:

Ei = (2·(1 − eni) − 1)·c,   c ∈ [0.3, 0.8]   (1)

An important observation here is that c does not influence the agility of our karmic-suicide game: c simply scales down all values of Ei, such that the keys would not have a value of 1 but instead a value of c. Imagine that we gave a single key the value 2 instead of 1. Then Ei = 4·(1 − eni) − 2 = (2·(1 − eni) − 1)·2. Obviously, the value of the keys can be defined as any positive value, but it has no influence on our game. We therefore continue our analysis for c = 1, i.e. Ei = 2·(1 − eni) − 1, knowing that the analysis applies for all positive values of c.

Each of the nodes Pi can now decide at what value of 1 − eni it revokes a suspicious node, i.e. how much evidence it requires to collect before revoking a node. Since we assume that all nodes join the game at the same time and gather the same information about the malicious node, 1 − eni will be equal for all honest nodes at all times during the game. Consequently, the node that goes in at the lowest value of 1 − eni will revoke the suspicious node. All other honest nodes have then missed the chance to revoke this node and thus their chance to gain the profit b.
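The derivation of Ei above is easy to sanity-check numerically. The sketch below (ours; the probability values are illustrative, chosen so that qa = pa and the approximation becomes exact) compares the exact expectation against the approximate form.

```python
# Sketch: the expected benefit E_i of revoking (Section 6.1), exact form
# vs. the approximation ((1+b)*(1-en) - 1) * (qt - pf).

def expected_benefit(en, b, pt, pf, qt, qf):
    return b * (1 - en) * qt - en * pt + en * qf - b * (1 - en) * pf

def expected_benefit_approx(en, b, qt, pf):
    return ((1 + b) * (1 - en) - 1) * (qt - pf)

en, b = 0.2, 1
pt, pf, qt, qf = 0.7, 0.2, 0.7, 0.2   # here qa = pa, so the approximation is exact
print(round(expected_benefit(en, b, pt, pf, qt, qf), 3))  # 0.3
print(round(expected_benefit_approx(en, b, qt, pf), 3))   # 0.3
```

When qa ≠ pa the two values differ by en·(qa − pa), which is the error term dropped in the derivation.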

6.2 Strategies and Results

In this section we investigate optimal strategies for honest nodes in our karmic-suicide game and examine:

• whether the configuration of the karmic-suicide scheme adequately incentivizes the revocation of malicious nodes;

• the influence of the number of playing nodes (network density) on the agility of the karmic-suicide scheme.

We assume that all honest nodes act rationally and try to maximize their benefit (the number of identifiers/keys they possess). However, the best strategy for a given player will depend upon the strategies adopted by the other participating players in the game. We assume each node knows the distribution of the other players' strategies (by observation from earlier games). To determine an optimal strategy, game-theoretic analysis introduces the fundamental concept of the Nash equilibrium. A set of strategies (one strategy for each player) is called a Nash equilibrium if no player can increase their payoff by unilaterally changing their strategy. Our first step in analyzing our karmic-suicide scheme is therefore to determine the Nash equilibrium of our game.

6.3 Karmic-Suicide as a Dutch Auction

In this sub-section we demonstrate parallels between our karmic-suicide game and a Dutch auction without attendance fees (i.e. the profit is 0 if a buyer does not win the auction). In a Dutch auction, a seller with a single object for sale wishes to sell that object to one of N buyers for the highest possible price. The seller begins with an (unrealistically) high price and successively lowers the price over a number of rounds. The first bidder to place a bid wins the object at the price quoted in that round.

The Dutch auction can be defined in the independent, private values model⁶: buyer Pi's value for the object, vi, is taken from the interval [0, 1] ⊂ R according to a distribution function Fi(vi), i.e. Fi(vi) denotes the probability that Pi's value is less than or equal to vi. It is assumed that each buyer knows their own value but not the values of other buyers. However, the distribution functions F1, . . . , FN are publicly known. If a buyer wins an auction at a price p, their profit is vi − p.

In our karmic-suicide game, we model the Dutch auction as follows: the object to be sold is the suspicious node, and the buyers are the honest nodes who are collecting evidence about the suspicious node's (mis)behavior. The node who takes the highest risk and revokes the suspicious node earliest obtains the chance of being rewarded for its effort by the TA. Table 4 shows the mapping between a "traditional" Dutch auction and our karmic-suicide game.

Table 4: Linkage between the Dutch auction and the karmic-suicide game

Dutch auction | karmic-suicide game
buyer: Pi | honest node: Pi
value: vi | risk appetite: 1 − ci
price: p | risk taken: 1 − Ei
Fi(vi) | risk appetite distribution: Fi(1 − ci)
profit: vi − p | profit: (1 − ci) − (1 − Ei) = E − ci
seller reduces price | malicious node leaks evidence

The buyers Pi are simply replaced by the honest nodes that collect evidence about suspicious node activity. A buyer Pi's private value vi for the object in the Dutch auction is the node Pi's risk appetite (1 − ci) to revoke the suspicious node. For the clarity of later calculations, we define risk appetite as '1 − desired certainty in revoking a node', where ci ∈ [0, 1] ⊂ R represents the lowest expectation value for benefit that a node Pi will tolerate to revoke a node. The expectation value Ei (Eq. 1) of all nodes Pi is the same, i.e. Ei = E, i = 1, . . . , N, since we assume for simplicity of the game that the nodes P1, . . . , PN hold the same evidence at a given time. The risk that a node takes to revoke the suspicious node as a result of a false positive of the intrusion detection system is thus 1 − E. F is consequently the risk appetite distribution function and depends on the accuracy of the underlying intrusion detection system. We will investigate our game in light of different characteristics of the intrusion detection functions in Section 6.4.

⁶Independent here expresses that each buyer's private information (the buyer's value for the object) is independent of every other bidder's private information.

A node profits if it revokes a suspicious node and if the risk that it finally took was less than its original risk appetite. Consequently, the profit in our karmic-suicide game is 'risk appetite' − 'risk taken'. The profit can also be interpreted as the difference between the expectation value E (gained certainty) and a node's originally desired assurance that it will make the correct decision, i.e. E − ci.

As both value and price in our Dutch auction need to be drawn from the interval [0, 1], 1 − E, and thus E, must also be a value between 0 and 1. We therefore restrict our analysis to the interval E ∈ [0, 1]. As per our definition of the expectation value in Equation 1, the accuracy of the IDS in a karmic-suicide game must therefore lie between 0.5 and 1.

6.4 Influence of the Supporting IDS

Up until now we have shown how our karmic-suicide game can be modeled as a Dutch auction. However, we have not yet systematically explored the distribution function F. Recall from Section 6.3 that F is the probability distribution function for a node's risk appetite, i.e. Fi(v) denotes the probability that Pi's risk appetite is less than or equal to v. For example, if we assume that node Pi chooses its risk appetite uniformly distributed in [0, 1], then Fi(v) = v.

We define F as a function of the accuracy of the intrusion detection system. We will investigate our game applied to two intrusion detection systems (note: we are only interested in a qualitative definition of the IDS's accuracy, since quantitative values vary with each IDS). Our function for representing an IDS's accuracy takes as input the number of rounds r (see Section 4.1) and returns the probability 1 − eni(r) that the IDS of node i correctly identifies the misbehaving node. To determine F, it is necessary to define the distribution of r. Since we assume each round to represent a time interval of similar length, r is uniformly distributed.

Our first IDS function is artificially simple, to allow easier insight into later calculations and results. It is defined by:

1 − eni(r) = r/100,   r ∈ [50, 100] ⊂ N   (2)

where r ≥ 50 follows from the IDS accuracy requirement 1 − eni ≥ 0.5, and the game is played for r = 100 rounds.

As Fi(v) is the probability that Pi's risk appetite, 1 − Ei, is less than or equal to v, F can be expressed as (see Appendix A for proof):

Fi(v) = v   (3)
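A Monte Carlo sketch (ours) makes Equation 3 plausible: with the linear IDS and r drawn uniformly (sampled continuously here, for simplicity, from [50, 100]), the risk 1 − E = 2 − 2r/100 is uniform on [0, 1], so its distribution function is F(v) = v.

```python
import random

# Monte Carlo sketch: for the linear IDS (1 - en(r) = r/100, r uniform on
# [50, 100]), the risk 1 - E = 2 - 2r/100 is uniform on [0, 1], i.e. F(v) = v.
random.seed(1)
samples = [2 - 2 * random.uniform(50, 100) / 100 for _ in range(100_000)]

for v in (0.25, 0.5, 0.75):
    emp = sum(s <= v for s in samples) / len(samples)
    print(v, round(emp, 3))
```

The empirical values agree with v up to Monte Carlo noise of a few thousandths.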

Our second IDS accuracy function is representative of real IDS functions: it is impossible to determine with complete certainty that a node is malicious (see Equation 4). The closer the probability of detecting a malicious node is to 1, the more evidence (number of rounds) is required to push the probability closer to complete certainty. Figure 9(b) shows the graph of this IDS function for q = 0.03. The design of the function can be explained as follows: each round of malicious behavior proves the suspicious node to be hostile with probability q. After a certain number of rounds r, the probability of not having proved a malicious node's affiliation is (1 − q)^r. Consequently, the IDS's accuracy can be expressed by Equation 4. Since we only want to give a qualitative analysis of an IDS's influence, the concrete choice of q does not influence our results.

1 − eni(r) = 1 − (1 − q)^r   (4)

The risk appetite distribution function F that results from this IDS function is:

Fi(v) = 1 − log_{1−q}(v/2) / 100   (5)

Figure 9(b) shows a plot of this IDS function.
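Equation 5 can also be checked by simulation. The sketch below (ours; r is sampled continuously from [0, 100] as an assumption) draws the risk 1 − E = 2(1 − q)^r for the asymptotic IDS and compares its empirical distribution against the closed form.

```python
import math
import random

# Monte Carlo sketch for the asymptotic IDS (1 - en(r) = 1 - (1-q)^r):
# with r uniform on [0, 100], the risk 1 - E = 2*(1-q)^r has distribution
# F(v) = 1 - log_{1-q}(v/2) / 100 on the attainable range. q = 0.03 as in Fig. 9(b).
q = 0.03
random.seed(2)
risks = [2 * (1 - q) ** random.uniform(0, 100) for _ in range(100_000)]

def F(v):
    return 1 - math.log(v / 2, 1 - q) / 100

for v in (0.2, 0.5, 1.0):
    emp = sum(s <= v for s in risks) / len(risks)
    print(v, round(emp, 3), round(F(v), 3))
```

The empirical and analytic values agree up to sampling noise, supporting the reconstruction of Equation 5.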

6.5 Equilibria

In a Dutch auction each bidder has a single decision to make: at what price to bid. A Dutch auction has exactly one equilibrium, which can be interpreted as the bidder's best decision [8]:

Definition 1 (Dutch Auction Nash Equilibrium). If N bidders have independent private values drawn from the common distribution F, then bidding when the price reaches

(1 / F^{N−1}(v)) ∫_0^v x dF^{N−1}(x)   (6)

constitutes a symmetric Nash equilibrium of a Dutch auction.

We can now calculate the equilibrium for both of our distribution functions F from Section 6.4. For our simple IDS as defined by Equation 2, with distribution function F(v) = v, the equilibrium is given by (see Appendix A for proof):

b(v) = v − v/N   (7)

The resulting equilibrium for our second IDS, as defined in Equation 4, cannot be expressed in closed form, but needs to be calculated for each input value separately. The graph in Figure 10(b) shows a plot of this equilibrium for N = 2, 5, 10. Accordingly, the equilibrium from Equation 7 is plotted in Figure 10(a) for N = 2, 5, 10.
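The passage from Equation 6 to Equation 7 is easy to verify numerically. The sketch below (ours) evaluates the Stieltjes integral in Equation 6 on a fine grid and compares it with v − v/N for the uniform case F(v) = v; substituting another distribution for F reproduces the second, non-closed-form equilibrium in the same way.

```python
# Numeric sketch of the Dutch-auction equilibrium (Eq. 6): for F(v) = v the
# integral evaluates to v - v/N (Eq. 7). We integrate x d(F^{N-1}(x)) on a grid.

def equilibrium_bid(F, v, N, steps=100_000):
    total, prev = 0.0, F(0.0) ** (N - 1)
    for k in range(1, steps + 1):
        x = v * k / steps
        cur = F(x) ** (N - 1)
        total += x * (cur - prev)   # contribution of x dF^{N-1}(x)
        prev = cur
    return total / F(v) ** (N - 1)

F = lambda v: v
for N in (2, 5, 10):
    print(N, round(equilibrium_bid(F, 0.8, N), 4), round(0.8 - 0.8 / N, 4))
```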

6.6 Results

The equilibria from Section 6.5 allow us to analyze both the revocation behavior of honest nodes and the agility (the speed at which our scheme reacts to malicious behavior) of our karmic-suicide scheme. We begin with an analysis of the equilibria for both IDS functions and conclude with an examination of the agility of our revocation scheme.

Figure 10 shows the equilibria for both IDS functions for different numbers of honest nodes N = 2, 5, 10, where each node competes for the potential reward from revoking a suspected malicious node. The graph F in these figures shows the risk appetite distribution function, where F(v) is the probability that a node chooses a risk appetite value less than or equal to v. This function is the key to calculating the equilibria via Equation 6.



Figure 9: Simplified and realistic IDS accuracy functions. (a) IDS system with linear accuracy; (b) IDS with asymptotic accuracy. Both panels plot 1 − eni(r) against the number of rounds out of 100, r.

Figure 10: Equilibria for the taken risk. (a) IDS system with linear accuracy; (b) IDS with asymptotic accuracy. Both panels plot the risk taken, 1 − Ei, against the risk appetite, 1 − ci, for F and N = 10, 5, 2.

Figure 11: Probability that a malicious node is revoked. (a) IDS system with linear accuracy; (b) IDS with asymptotic accuracy. Both panels plot the probability that the bad node is revoked against the number of rounds r, for N = 10, 5, 2.

Both Figures 10(a) and 10(b) illustrate the tendency of nodes to adopt risk-seeking behavior when the number of competitors N is large. A node that accepts a risk (1 − ci) of 0.5 in Figure 10(a) will, in the presence of a single competitor (N = 2), not revoke a node before the actual risk (1 − E) falls below 0.25. If this node is the first one to revoke the suspect node, it will have (on average) a profit of 0.25. This means that its expectation value, as defined in Equation 1, exceeds its minimally desired expectation value (certainty level ci) by 0.25. As the number of competitors to revoke the suspect in this scenario increases to N = 5 and N = 10, the nodes increase their willingness to take a higher risk. Figure 10(b) shows similar results to Figure 10(a). However, the major difference is that for the asymptotic IDS system, the nodes take lower risks. Even if a node has a maximum risk appetite of 1 and 4 competitors (N = 5), an honest node will wait to revoke a suspected malicious node until its risk is approximately 0.6. As our results in Figure 11 show, this small willingness to take a risk significantly decreases the agility of the revocation scheme for a small number of competitors.

The results from Figure 10 show that our karmic-suicide scheme gives honest nodes an incentive to revoke a potentially malicious node if their IDS provides sufficiently accurate information. Based on the accuracy of the IDS, we now examine the agility of our karmic-suicide scheme, i.e. how much evidence is required before a malicious node is revoked by an honest node.

To this end, we calculate the probability that after a certain number of rounds r, at least one of the honest nodes will have revoked the suspected malicious node. As shown in Appendix A, the probability that at least one of N competing nodes has revoked the malicious node after r rounds is:

1 − F(b^{−1}(2 − 2·(1 − en(r))))^N   (8)
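Using Equation 8 as reconstructed here, the revocation probability for the linear IDS can be computed directly; the sketch below (ours) uses the equilibrium bid b(v) = v − v/N from Equation 7, so b⁻¹(y) = y·N/(N − 1), and F(v) = v clipped to [0, 1].

```python
# Sketch: probability that at least one of N nodes has revoked by round r,
# for the linear IDS (1 - en(r) = r/100) and equilibrium bid b(v) = v - v/N.

def prob_revoked(r, N):
    risk = 2 - 2 * (r / 100)            # current risk 1 - E = 2 - 2*(1 - en(r))
    v = min(1.0, risk * N / (N - 1))    # b^{-1}(risk), capped at max appetite 1
    return 1 - v ** N                   # at least one of N appetites exceeds v

for N in (2, 5, 10):
    print(N, [round(prob_revoked(r, N), 2) for r in (60, 80, 100)])
```

Consistent with Figure 11(a), revocation only starts once the risk drops below b(1) = 1 − 1/N (a bit past 50% of the evidence), and the probability approaches 1 as the evidence reaches 100%.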

Figure 11 shows the probability that a malicious node is revoked after a certain number of rounds r. Note that we limited the maximum available evidence in our karmic-suicide game to 100. In the case of the linear IDS (Fig. 11(a)), a bit more than 50% of the maximum available evidence is required to give the nodes any incentive to revoke a suspicious node. This can be explained by the fact that the linear IDS exceeds a detection probability of 50%, and thus the threshold for a positive expected benefit, at this point. The probability that a malicious node is revoked increases to almost 1 as the evidence reaches 100%, even if only two nodes (N = 2) are competing to revoke the malicious node. A greater number of honest nodes increases the chance of an early revocation. As Figure 11(a) shows, for N = 10, 70% of the evidence is sufficient to revoke the malicious node with a probability close to 1.

In the case of our asymptotic IDS (Fig. 11(b)), the number of competitors has a similar influence on the agility of revocation. However, nodes revoke earlier, since the IDS exceeds a detection probability of 50% at approximately round r = 27. In the case of 10 competitors, 35% of the evidence is sufficient for a probability close to 1 that at least one of the nodes revokes the suspected malicious node. However, 60% is required to achieve such a high probability for N = 5, and for N = 2 the probability of revoking the malicious node does not exceed 95%. This observation is in accordance with the results from Figure 10(b), which show a low risk appetite for a small number of competitors.

As our analysis has shown, the agility of both IDSs depends significantly on the number of honest nodes that compete to revoke a malicious node. However, even for a minimum of 2 honest nodes, the malicious node is revoked with high probability once enough evidence is collected. These results show that our karmic-suicide revocation scheme works in a network environment with imperfect intrusion detection systems on the nodes' side and, furthermore, with an imperfect judgment system.

7. CONCLUSION

In contrast to the current ad hoc networking literature for node revocation, which suggests a laissez-faire approach to revocation management, we have shown how the introduction of a TA (analogous to the type of trusted third party assumed in


key distribution research) can aid the key revocation process of networks and encourage more optimal revocation decision making by honest nodes.

As part of our work, we designed a new incentive mechanism called karmic-suicide that encourages rational nodes to sacrifice short-term personal utility in favor of longer-term gains. In doing so, we examined the inter- and intra-dependencies between TA-level judgment mechanisms and node-level revocation mechanisms to facilitate (near) optimal revocation decision making. Following from this, we presented a game-theoretic analysis of our scheme and showed how high error rates in a node's ability to accurately determine whether another node is malicious can be compensated for by higher network densities, resulting in the rapid revocation of malicious nodes.

Our work is therefore in agreement with various experimental economic studies which have demonstrated that, when there is contention for a public resource, the judicious application of punishment and rewards yields an efficient mechanism for incentivizing desirable behavior [26, 2].

Acknowledgements

This research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defence and was accomplished under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

8. REFERENCES

[1] G. Arboit, C. Crepeau, C.R. Davis, and M. Maheswaran. A Localized Certificate Revocation Scheme for Mobile Ad Hoc Networks. Ad Hoc Networks, 6(1):17–31, 2008.

[2] H. Brandt, C. Hauert, and K. Sigmund. Punishment and Reputation in Spatial Public Goods Games. Proceedings of the Royal Society B: Biological Sciences, 270(1519):1099–1104, 2003.

[3] H. Chan, V.D. Gligor, A. Perrig, and G. Muralidharan. On the Distribution and Revocation of Cryptographic Keys in Sensor Networks. IEEE Transactions on Dependable and Secure Computing, 2(3):233–247, 2005.

[4] H. Chan, A. Perrig, and D. Song. Random Key Predistribution Schemes for Sensor Networks. In Proceedings of the 2003 IEEE Symposium on Security and Privacy (S&P 2003), pages 197–213. IEEE Computer Society, May 2003.

[5] J. Clulow and T. Moore. Suicide for the Common Good: A New Strategy for Credential Revocation in Self-organizing Systems. ACM SIGOPS Operating Systems Review, 40(3):18–21, 2006.

[6] R. Dutta and S. Mukhopadhyay. Designing Scalable Self-healing Key Distribution Schemes with Revocation Capability. In Parallel and Distributed Processing and Applications, volume 4742 of LNCS, pages 419–430. Springer, 2007.

[7] L. Eschenauer and V.D. Gligor. A Key-Management Scheme for Distributed Sensor Networks. In Proceedings of the 9th ACM Conference on Computer and Communications Security (CCS 2002), pages 41–47. ACM Press, November 2002.

[8] G.A. Jehle and P.J. Reny. Advanced Microeconomic Theory. Addison Wesley, second edition, 2000.

[9] K. Hoeper and G. Gong. Bootstrapping Security in Mobile Ad Hoc Networks Using Identity-Based Schemes with Key Revocation. Technical Report CACR 2006-04, Centre for Applied Cryptographic Research (CACR), University of Waterloo, Canada, 2006.

[10] Y-C. Hu, A. Perrig, and D.B. Johnson. Packet Leashes: A Defense against Wormhole Attacks in Wireless Networks. In Proceedings of IEEE Infocom 2003, pages 1976–1986, 2003.

[11] ISO/IEC 11770-1:1996. Information technology – Security techniques – Key management – Part 1: Framework, 1996.

[12] T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, and A.Y. Wu. An Efficient k-Means Clustering Algorithm: Analysis and Implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):881–892, 2002.

[13] D. Liu, P. Ning, and K. Sun. Efficient Self-healing Group Key Distribution with Revocation Capability. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS 2003), pages 231–240. ACM Press, 2003.

[14] W. Liu. Securing Mobile Ad Hoc Networks with Certificateless Public Keys. IEEE Transactions on Dependable and Secure Computing, 3(4):386–399, 2006.

[15] H. Luo, P. Zerfos, J. Kong, S. Lu, and L. Zhang. Self-Securing Ad Hoc Wireless Networks. In Proceedings of the Seventh International Symposium on Computers and Communications (ISCC'02). IEEE Computer Society, 2002.

[16] J. Luo, J.-P. Hubaux, and P.T. Eugster. DICTATE: DIstributed CerTification Authority with probabilisTic frEshness for Ad Hoc Networks. IEEE Transactions on Dependable and Secure Computing, 2(4):311–323, 2005.

[17] B. Mandeville. The Fable of the Bees or Private Vices, Publick Benefits. Volume 2, 1724/1924. http://oll.libertyfund.org/Texts/LFBooks/Mandeville0162/FableOfBees/0014-02_Bk.html.

[18] B.J. Matt. Toward Hierarchical Identity-based Cryptography for Tactical Networks. In Proceedings of the 2004 Military Communications Conference (MILCOM 2004), pages 727–735. IEEE Computer Society, November 2004.

[19] J.M. McCune, E. Shi, A. Perrig, and M.K. Reiter. Detection of Denial-of-Message Attacks on Sensor Network Broadcasts. In IEEE Security and Privacy Symposium, 2005.

[20] T. Moore, M. Raya, J. Clulow, P. Papadimitratos, R. Anderson, and J-P. Hubaux. Fast Exclusion of Errant Devices From Vehicular Networks. In Proceedings of the 5th Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON 2008), pages 135–143, 2008.

[21] B. Parno, A. Perrig, and V. Gligor. Distributed


Detection of Node Replication Attacks in Sensor Networks. In Proceedings of the 2005 IEEE Symposium on Security and Privacy (S&P 2005), pages 49–63. IEEE Computer Society, 2005.

[22] M. Raya, D. Jungels, P. Papadimitratos, I. Aas, and J.-P. Hubaux. Certificate Revocation in Vehicular Networks. Technical Report LCA Report 2006006, Laboratory for Computer Communications and Applications (LCA), School of Computer and Communication Sciences, Switzerland, 2006.

[23] M. Raya, M. Hossein Manshaei, M. Felegyhazi, and J-P. Hubaux. Revocation Games in Ephemeral Networks. In Proceedings of the 15th ACM Conference on Computer and Communications Security, pages 199–210. ACM, 2008.

[24] R.L. Rivest. Can We Eliminate Certificate Revocation Lists? In Proceedings of the Second International Conference on Financial Cryptography (FC 1998), pages 178–183, London, UK, 1998. Springer-Verlag.

[25] D. Roberts, G. Lock, and D.C. Verma. Holistan: A Futuristic Scenario for International Coalition Operations. In Proceedings of the Fourth International Conference on Knowledge Systems for Coalition Operations (KSCO 2007), 2007.

[26] K. Sigmund, C. Hauert, and M.A. Nowak. Reward and Punishment. Proceedings of the National Academy of Sciences, 98:757–762, 2001.

[27] T. Moore, J. Clulow, S. Nagaraja, and R. Anderson. New Strategies for Revocation in Ad-Hoc Networks. In Proceedings of the 4th European Workshop on Security and Privacy in Ad Hoc and Sensor Networks (ESAS 2007), pages 232–246. Springer, July 2007.

[28] Y. Wang, B. Ramamurthy, and X. Zou. KeyRev: An Efficient Key Revocation Scheme for Wireless Sensor Networks. In Proceedings of the 2007 IEEE International Conference on Communications (ICC 2007), pages 1260–1265. IEEE Computer Society, 2007.

[29] D.J. White. Markov Decision Processes. John Wiley & Sons, first edition, 1993.

[30] S. Yi and R. Kravets. MOCA: Mobile Certificate Authority for Wireless Ad Hoc Networks. In The 2nd Annual PKI Research Workshop (PKI 03), 2003.

[31] Y. Zhang, W. Liu, W. Lou, Y. Fang, and Y. Kwon. AC-PKI: Anonymous and Certificateless Public Key Infrastructure for Mobile Ad Hoc Networks. In Proceedings of the International Conference on Communications (ICC 2005), pages 3515–3519. IEEE Computer Society, May 2005.

[32] L. Zhou and Z.J. Haas. Securing Ad Hoc Networks. IEEE Network, 13(6):24–30, 1999.

Appendix A

Proof of Equation 8. The probability that a single node revokes the malicious node after the risk falls below a value 1 − E is the probability that its risk acceptance value is less than or equal to 1 − E. The risk appetite value needed to take a risk 1 − E can be calculated via the inverse of the equilibrium, i.e., as b^{-1}(1 − E). The probability that a node chooses this risk appetite value is F(b^{-1}(1 − E)) (by the definition of F). Consequently, the probability that a single node revokes the suspected malicious node is F(b^{-1}(1 − (2·(1 − e_n(r)) − 1))) = F(b^{-1}(2 − 2·(1 − e_n(r)))). Finally, the probability that at least one of N competing nodes has revoked the malicious node after r rounds is:

\[
1 - \Bigl(1 - F\bigl(b^{-1}(2 - 2 \cdot (1 - e_n(r)))\bigr)\Bigr)^{N}
\]

Proof of Requirements in Table 3.

(A) ⇒ 2):
\[
q_t > \frac{1}{b}\, q_f \;\Rightarrow\; b\, q_t - q_f > 0 \qquad \square
\]

(B) ⇒ 1):
\[
p_t > \Bigl(\frac{n}{n-1}\Bigr)\Bigl(1 + \frac{m}{n}\Bigr) p_f
\;\Rightarrow\; p_t > p_f
\;\Rightarrow\; p_t > b\, p_f \;\; (b \le 1)
\;\Rightarrow\; b\, p_f - p_t < 0 \qquad \square
\]

(B) ⇒ 3):
\begin{align*}
p_t &> \Bigl(\frac{n}{n-1}\Bigr)\Bigl(1 + \frac{m}{n}\Bigr) p_f \\
\Rightarrow\quad \frac{1}{n}\, p_t &> \frac{n+m}{(n-1)\,n}\, p_f \\
\Rightarrow\quad \Bigl(\frac{m+1}{n-1} - \frac{m}{n}\Bigr) p_f - \frac{1}{n}\, p_t &< 0 \\
\Rightarrow\quad \Bigl(\frac{m+b}{n-1} - \frac{m}{n}\Bigr) p_f - \Bigl(\frac{m}{n} - \frac{m-1}{n}\Bigr) p_t &< 0 \qquad \square
\end{align*}
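The implications proved for Table 3 can also be spot-checked numerically. The sketch below is not from the paper; it draws random parameters satisfying premises (A), q_t > q_f/b, and (B), p_t > (n/(n−1))(1 + m/n)·p_f, with 0 < b ≤ 1, and asserts the three derived inequalities.

```python
# Hypothetical numeric spot-check of the Table 3 implications.
# Symbols follow the proof: b in (0, 1], n competitors, m a positive count,
# and probabilities p_f, p_t, q_f, q_t constrained by premises (A) and (B).
import random

random.seed(0)
for _ in range(1000):
    b = random.uniform(0.1, 1.0)
    n = random.randint(2, 50)
    m = random.randint(1, n)
    p_f = random.uniform(0.0, 1.0)
    q_f = random.uniform(0.0, 1.0)
    # Draw q_t and p_t so that premises (A) and (B) hold strictly.
    q_t = q_f / b + random.uniform(0.01, 1.0)
    p_t = (n / (n - 1)) * (1 + m / n) * p_f + random.uniform(0.01, 1.0)

    assert b * q_t - q_f > 0                                    # (A) => 2)
    assert b * p_f - p_t < 0                                    # (B) => 1)
    assert ((m + b) / (n - 1) - m / n) * p_f \
           - (m / n - (m - 1) / n) * p_t < 0                    # (B) => 3)
print("all three implications hold on sampled parameters")
```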

Proof of Equation 3.
\begin{align*}
F_i(v) &= P(1 - E < v) \\
&= P\bigl(1 - (2 \cdot (1 - e_i) - 1) < v\bigr) \\
&= P\Bigl(1 - \underbrace{\bigl(\tfrac{2r}{100} - 1\bigr)}_{=:\,\bar{r}\,\in\,[0,1]} < v\Bigr) \\
&= P(1 - \bar{r} < v) \\
&= 1 - P(\bar{r} < 1 - v) \\
&= v
\end{align*}
The last step follows since \(\bar{r}\) is uniformly distributed over [0,1].
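The final step can be sanity-checked by Monte Carlo simulation. The sketch below is illustrative (the function name F_i and the sample count are ours): it estimates P(1 − r_bar < v) for r_bar uniform on [0, 1] and checks that each estimate is close to v.

```python
# Hypothetical Monte Carlo check of Equation 3: F_i(v) = P(1 - r_bar < v) = v
# when r_bar is uniformly distributed on [0, 1], as in the proof above.
import random

random.seed(42)
SAMPLES = 200_000

def F_i(v, samples=SAMPLES):
    """Estimate P(1 - r_bar < v) by sampling r_bar ~ Uniform[0, 1]."""
    hits = sum(1 for _ in range(samples) if 1 - random.random() < v)
    return hits / samples

for v in (0.25, 0.5, 0.9):
    print(v, round(F_i(v), 3))  # each estimate should be close to v
```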

Proof of Equation 7.
\begin{align*}
b(v) &= \frac{1}{v^{N-1}} \int_0^v x \,\mathrm{d}\bigl(x^{N-1}\bigr) \\
&= \frac{1}{v^{N-1}} \int_0^v x \,(N-1)\, x^{N-2} \,\mathrm{d}x \\
&= \frac{1}{v^{N-1}} \int_0^v (N-1)\, x^{N-1} \,\mathrm{d}x \\
&= \frac{v^N}{v^{N-1}} \cdot \frac{N-1}{N} \\
&= v - \frac{v}{N}
\end{align*}
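The closed form b(v) = v − v/N can be checked against a direct numerical integration of the defining integral. The sketch below (the names b_numeric and b_closed are ours) approximates (1/v^{N−1}) ∫₀^v (N−1)x^{N−1} dx with a midpoint Riemann sum and compares it with the closed form.

```python
# Hypothetical numeric check of Equation 7's closed form b(v) = v - v/N.

def b_numeric(v, N, steps=100_000):
    """Midpoint-rule approximation of (1/v^(N-1)) * int_0^v (N-1) x^(N-1) dx."""
    h = v / steps
    total = sum((N - 1) * ((i + 0.5) * h) ** (N - 1) * h for i in range(steps))
    return total / v ** (N - 1)

def b_closed(v, N):
    """Closed form derived in the proof above."""
    return v - v / N

for N in (2, 5, 10):
    print(N, round(b_numeric(0.8, N), 6), round(b_closed(0.8, N), 6))
```

The two columns agree to within the quadrature error, confirming the derivation.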


