Relating Reputation and Money in On-line Markets

Ashwin Swaminathan (University of Maryland, College Park, MD, USA), Renan Cattelan (Universidade de São Paulo, São Carlos, SP, Brazil), Ydo Wexler (Microsoft Research, Redmond, WA, USA), Cherian Varkey Mathew (Indian Institute of Technology, Kanpur, India), and Darko Kirovski (Microsoft Research, Redmond, WA, USA)

Contact: [email protected]

Technical Report MSR-TR-2008-023, February 2008
Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA
http://research.microsoft.com


Relating Reputation and Money in On-line Markets

ABSTRACT

Reputation in on-line economic systems is typically quantified using counters that specify positive and negative feedback from past transactions and/or some form of transaction network analysis that aims to quantify the likelihood that a network user will commit a fraudulent transaction. These approaches can deceive honest users from numerous perspectives. We take a radically different approach with the goal of guaranteeing to a buyer that a seller cannot disappear from the system with a profit following a set of transactions that total a certain monetary limit. Even in the case of stolen identity, an adversary cannot produce illegal profit unless a buyer decides to pay over the suggested sales limit.

1. INTRODUCTION

In the recent decade, C2C¹ markets have flourished on the Web via numerous economic opportunities, with eBay as the iconic example [1]. A significant amount of fraud in such markets occurs as consumers build up their reputation via several fabricated or small-cost transactions in order to target a final fraudulent high-cost transaction, after which they disappear from the market. Detailed statistics about the prevalence of such transactions are not publicly available. The market leader, eBay, claims that one in a thousand listed products ends up being bait for a fraudulent transaction [2]. At first, this figure seems encouraging; however, it relates only to “listed” products, not sold ones. Also, there is no quantifiable data on the cost of fraud on popular on-line marketplaces such as eBay and Amazon.com. In 2006, consumers reported to the Internet Crime Complaint Center (IC3) approximately US$90M of losses to auction-related fraud in the US alone [3]. We speculate that the problem is vastly underreported and important, and we pursue a consumer reputation system for on-line markets that is novel both logistically and technically.

If we assume that fraud is a sudden shift in behavior by a seemingly “honest” economic entity, then we can conclude that, by definition, fraud prediction is an ill-defined problem. To address this issue, we define seller reputation as a monetary value that a buyer should feel comfortable paying, knowing that by committing fraud the seller still cannot make a profit from its existence in the market. The objective is to quantify reputation using a deterministic economic value as opposed to a probabilistic predictor. Even in the case of stolen identity, an adversary cannot produce illegal profit unless a buyer decides to pay over the suggested sales limit.

¹ Consumer-to-consumer (C2C) e-commerce involves electronically facilitated transactions between consumers via a third party.

The sales limit of an individual seller is built using a record of her transaction fees, verifiable types of transaction costs (insurance, arbitration, shipping, etc.), and deposits. To further strengthen the buyer's perspective, we enable each seller to establish a reimbursement fund used as a guarantee that defrauded buyers will be fully or partially reimbursed. Here, we present the novel deterministic reputation system, outline a strategy for managing sales limits that maximizes selling power in the market, and propose a probabilistic strategy for risk assessment that aims at helping buyers estimate the risk of paying for a product or service over the sales limit. The latter effort follows more closely the related work in fraud prediction.

As a simple motivational example, we remind the reader that in off-line markets the reputation of small retailers is typically gained through investments in the location and decor of the retail store, as well as through years of conducting trustworthy business. Robust reputation is valuable to merchants, as it can enable them to select a desired spot on the volume vs. pricing curve for marketed products/services. Equivalently, in our system a seller must invest funds, either in the form of a deposit or transaction fees, to offset her maximum selling power at a given moment. Thus, we speculate that the system is acceptable from the seller's perspective while it spurs confidence with prospective buyers.

1.1 Contributions

Existing reputation systems, reviewed in the Appendix, model reputation using a probabilistic system with the objective of providing side information to help users predict malicious transactions. We depart from this traditional method of quantifying reputation, and aim at pricing reputation. From the buyer's viewpoint, we offer three tools to strengthen the buyer's confidence in (not) participating in an on-line transaction: i) a reimbursement fund; ii) a monetary limit on pricing that guarantees that even if the seller committed fraud on all pending transactions, she would still not walk out of the economic ecosystem with a profit; and iii) a simple user interface to a tool that presents to the user the price vs. the probability that the seller commits a fraudulent transaction. Clearly, proposition i) is not novel, as escrows and/or investment insurance have been part of trade markets for centuries. We build our proposal on top of such an insurance system with propositions ii) and iii). To the best of our knowledge, this is the first reputation system tailored to on-line markets that exhibits such features.

2. THE ECONOMIC MODEL

In this section, we formally introduce a market network and define a two-party transaction as an economic function in the system. Let C = {c_1, . . . , c_N} be a cardinality-N set of nodes in a graph G, where each node c_i models a distinct consumer. For now, we describe a transaction as an exchange of economic value between a buyer and a seller; we define a simple transaction model later, in Subsection 3.2. In the case of a C2C market, the buyer pays, using a cash equivalent, for a product or service offered by the seller. Any node in the graph can be a buyer or a seller in a transaction.

We formally define a committed transaction t(c_i, c_j) between a buyer c_i and a seller c_j as a weighted directed edge c_i → c_j, where the weight w(c_i, c_j) ≡ w_ij is a real non-zero scalar such that:

• w_ij > 0: the transaction was executed to the satisfaction of both the seller and the buyer, with w_ij equal to the transaction costs.

• w_ij < 0: the transaction was fraudulent, with −w_ij proportional to the cash equivalent paid by the buyer. The buyer suffered financial loss.

We denote by T and W the sets of all edges and their weights in the market graph G.

We model pending transactions in the network as a set P = {p_1, . . . , p_N} of arrays of values available for sale at the corresponding nodes. A transaction is pending until its buyer and seller reach closure on their satisfaction with the transaction; then the transaction becomes committed. An array p_i = {p_1, . . . , p_{L_i}} is a list of L_i values that seller c_i is currently selling. We allow product prices to form using an arbitrary negotiation mechanism. Each individual price, p_k, is formed as an asking price (if the seller does not yet have a buyer) or as a winning bid (in case there exists an arbitrary auctioning mechanism). In order for a buyer to learn about a specific product sold by any seller, we allow arbitrary marketing strategies in our model.

Finally, note that our model does not consider the reputation of winning bidders, i.e., nodes with the highest offer for a specific item sold by a seller. Thus, it does not link specific nodes with products itemized in p_i. As opposed to related work where node connectivity is used to construct a reputation model (e.g., [12]), in our system the reputation of current bidders does not affect the reputation score of the seller; thus, this limitation of our model is appropriate. Systems that target detection of shill bidding typically rely on this type of data. Although this is not the goal of our paper, we note that such data can be tracked by modeling pending transactions the same way as committed transactions, with the necessary relinking to model outbidding.

The considered economic network model comprises the directed weighted graph G(C, P, T, W), where pending transactions are still being negotiated. Based upon this model, in this paper we construct the proposed reputation system.

2.1 Model Accuracy

At present, the most popular on-line markets, such as eBay and Amazon.com, have built large economic ecosystems that could be used to quantify certain parameters in the model. The first expectation is that N tends to be rather large for these systems. For instance, eBay recorded around 82 million active users in 2006; this number has been increasing by around 15% every year [4].

Linking our model to an existing marketplace network is a difficult task from several perspectives. First, the number of transactions on marketplaces such as eBay or Amazon.com is growing at a faster rate than a modest academic crawler could possibly browse. Second, fair random sampling of exceptionally large graphs is a problem of well-known difficulty [19]. Since we do not base our core primitives for building user reputation on network features such as average fan-in, fan-out, etc., we decided to constrain our marketplace snapshot in Section 5 to address transactions with negative feedback, rather than to determine our model parameters so as to accurately mimic a typical marketplace network.

2.2 Buyer's Feedback

Typically, both participants in a transaction provide feedback to each other. The feedback score, i.e., reputation, is recorded for public viewing and typically summarized in the form of positive and negative points. Although several reputation score systems have been proposed [12, 13], they are not fool-proof – buyers can still be easily deceived by fraudulent sellers who have a very good reputation score. Trivial approaches to building up a positive transaction history include: fabricated transactions with friends or non-existent consumers (e.g., established using stolen identities) or, in the case which is the most difficult to prevent using probabilistic recommendation systems, relatively long-term honest sales behavior until a “major” fraudulent transaction fetches significant profits for the adversary.

Here it is important to stress that our model does not address the negative feedback that a buyer could receive. Such feedback is typically posted for failure to pay for an item that the buyer won in an auction. Although the seller suffers financial loss due to the delay of sale, this type of transaction outcome is still not considered fraudulent. For example, the Amazon.com Marketplace does not display buyer feedback in its system, thus letting sellers treat all buyers equally [5]. To that extent, we note that the reputation of a buyer as a reliable payee could be handled efficiently using existing reputation systems, and we chose not to address this issue.

2.3 On-Line Dispute Resolution Systems

Our reputation system complements existing on-line dispute resolution (ODR) systems such as SquareTrade [6, 7], to the extent that it aims at preventing/handling fraudulent transactions that SquareTrade cannot handle due to seller non-cooperation. Needless to say, ODR and insurance systems are orthogonal to reputation systems in their effect on the marketplace, as they address mostly non-fraudulent disputes. Thus, for brevity and simplicity, in this paper we do not analyze ODR and insurance systems.

3. REPUTATION QUANTIFIERS

In addition to the IC3 report, a recent survey by the National Fraud Information Center (NFIC) presented statistics showing that in 2006 the average loss for an Internet fraud reported to the NFIC totaled US$1512 [8]. The top two types of fraud, auctions and general merchandise, accounted for 34% and 33% of all reported fraudulent activity, with average losses of US$1331 and US$1197 per case, respectively. Clearly, losses experienced by consumers undermine the popularity of on-line markets such as eBay or the Amazon.com Marketplace. Existing reputation systems are in place in such markets to predict fraudulent activity [1, 5]; however, they are not fool-proof. To address this issue, we propose a reputation system whose objective is not to probabilistically aid prediction of fraud, which is common practice, but to assure buyers of deterministic pricing tactics that cannot profit the seller in case of a fraudulent transaction. We model the seller's reputation using the following two monetary values: a sales limit and a reimbursement fund.

Definition 1. Sales limit α_i for a specific user c_i is an upper bound on pricing p_i such that if c_i commits fraud on each item offered in p_i she can still not profit from her existence in the market as a consumer.

By definition, α_i is set such that c_i could not make a profit in the system if:

∑_{∀p_j ∈ p_i} p_j ≤ α_i.    (1)

Definition 2. Reimbursement fund β_i for a specific user c_i is a sum of money that can be used to offset losses to buyers who participate in pending transactions with c_i in case c_i commits fraud.

In our system, each seller c_i chooses the value of β_i according to her required selling power. In general, consumers feel comfortable bidding on products from c_i, knowing that any fraud would get fully reimbursed, if pricing is such that:

∑_{∀p_j ∈ p_i} p_j ≤ β_i.    (2)

If pricing on p_i is over β_i and fraud is committed with losses greater than β_i, FIFO is one possible fair algorithm for distributing the reimbursement fund among the defrauded consumers. A reimbursement fund could be implemented via escrow accounts, transaction insurance, etc. Such funds are certainly not a novel mechanism for protecting buyers; the introduction of a sales limit, α_i ≥ β_i, and the subsequent techniques to construct and use it, is our contribution in this work.
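The FIFO policy mentioned above can be illustrated concretely. The following is a minimal sketch of our own (the function name and data layout are ours, not the paper's), assuming defrauded buyers' claims are ordered by transaction time:

```python
def reimburse_fifo(fund: float, losses: list[float]) -> list[float]:
    """Distribute a reimbursement fund beta_i over defrauded buyers'
    losses in first-in-first-out order; earlier claims are paid first."""
    payouts = []
    for loss in losses:  # losses ordered by transaction time
        pay = min(loss, fund)
        payouts.append(pay)
        fund -= pay
    return payouts

# A fund of 100 against losses [60, 50, 30]: the first buyer is made
# whole, the second is partially reimbursed, the third gets nothing.
print(reimburse_fifo(100.0, [60.0, 50.0, 30.0]))  # [60.0, 40.0, 0.0]
```

Any other fair division rule (e.g., pro-rata) could be substituted without affecting the rest of the system.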

3.1 Risk Taking

In our system, a buyer could get defrauded if she chooses to pay a price that sets π_i = ∑_{∀p_j ∈ p_i} p_j over β_i. We consider two cases. First, the buyer is unlikely to encounter a fraudulent seller as long as she chooses to pay below the seller's sales limit, α_i ≥ β_i. Although the seller could certainly defraud such a buyer, he would still not gain any profit. Repeating the iteration “open new user account, build reputation, then defraud” would still not profit an adversary, because she would have to invest substantial funds to build the reputation of the fabricated user in each iteration, only to finally claim back these funds at the expense of innocent buyers. In the remainder of this section, we introduce an algorithm for constructing a sales limit as well as a technique for time-sharing sales limits, i.e., risk, among market participants in order to boost their selling power.

Second, if the buyer decides to pay over α_i, the risk of encountering a fraudulent transaction can be quantified depending upon the adversary's profit, π_i − α_i. In Section 4, we introduce a simple yet intuitive empirical model that aims to predict a fraudulent transaction based upon the incentive π_i − α_i.

Figure 1: Transaction model: entities and involved costs.

3.2 The Transaction Model

The proposed reputation quantifier α_i is computed based upon the consumer's prior transaction record. In order to establish an algorithm for its computation, we first adopt a simple transaction model. Fig. 1 illustrates a diagram of the basic system model. We first review the transaction costs. The cost of an individual transaction, C_t = C_p + C_m + C_h, paid by the buyer, is composed of three entities:

• the product price, C_p, which represents the total amount of money, after all costs, received by the seller,

• the protocol manager fee, C_m, which is paid to the mediator in the transaction, e.g., eBay or Amazon.com,

• the miscellaneous fee, C_h, which includes other fees such as arbitration insurance, shipping and handling, taxes, etc. The protocol manager (PM) may orchestrate some of these activities. All miscellaneous fees that can be verified by a trusted party (e.g., the PM) are used to establish participants' sales limits.
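The cost decomposition above can be modeled directly. This is a minimal sketch with illustrative field names of our own choosing:

```python
from dataclasses import dataclass

@dataclass
class TransactionCost:
    product_price: float   # C_p, received by the seller
    manager_fee: float     # C_m, paid to the protocol manager
    misc_fee: float        # C_h: shipping, insurance, taxes, etc.

    @property
    def total(self) -> float:
        """C_t = C_p + C_m + C_h, the amount the buyer pays."""
        return self.product_price + self.manager_fee + self.misc_fee

    @property
    def overhead(self) -> float:
        """C_o = C_t - C_p, the overhead later credited to sales limits."""
        return self.manager_fee + self.misc_fee

t = TransactionCost(product_price=100.0, manager_fee=5.0, misc_fee=12.0)
print(t.total, t.overhead)  # 117.0 17.0
```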

Now we model an actual transaction between a seller, Sam, and a buyer, Brenda. Once the negotiation has completed, Brenda pays the amount due, C_t, to Sam, who then pays the transaction fee, C_m, to the PM, both using an arbitrary payment system. The PM could offer a payment service for market participants² to simplify payments and reduce transaction fees. As opposed to the reimbursement fund, which could be implemented as an escrow account, the PM does not serve as an escrow for the cash flow between the market participants. After receiving the transaction fee, the PM updates the accounts of all parties involved. Next, Sam is required to deliver the merchandise to Brenda. Here is a list of the considered outcomes upon merchandise delivery (or failure to deliver):

{P} positive feedback – Brenda is satisfied with the outcome of the transaction; she compliments Sam.

{N} negative feedback – Brenda is dissatisfied with the received product; the participants in the transaction decide to resolve the situation as follows:

{N.1} no refund – Sam accepts negative feedback and Brenda does not initiate the refund process. This would be typical for a transaction with low C_p.

{N.2} refund/return – Sam refunds Brenda for previously returned merchandise. If this process is closed to Brenda's satisfaction, the transaction record is deleted, including Sam's negative feedback.

{N.3} dispute – occurs in all other cases. This is the most interesting case, as it involves arbitration and resources for refunding the plaintiff.

² This would be equivalent to handling a transaction via eBay and PayPal.

A recent study suggests that 41% of seller-targeted disputes occur because sellers do not describe their products accurately, which results in complaints by buyers once they receive the products [9]. In about half of such cases, the buyer chooses not to return the product; hence, we estimate {N.1} to account for about one fifth of all {N} cases. A survey of 225 {N.2} and {N.3} disputes on eBay in 1999 [10] points to ≈25% of disputes resolved at mutual success, ≈25% of them at impasse, while the remainder never entered the resolution stage, although it was available for free as part of a study. We thus conclude that, to the best of our knowledge, detailed dispute statistics are unavailable; however, existing data points to a solid likelihood that {N.3} cases are relatively common, yet due to the unavailability of inexpensive and efficient ODR systems they remain underreported.

3.3 Computing the Reputation Quantifiers

In this subsection, we evaluate how transaction outcomes affect the buyer's and seller's reputation quantifiers. The global objective for the developed algorithms is to maximize the selling power in the system. Achieving this objective is important as it minimizes sellers' investments to reach a specific selling power and hence boosts the market economy, which, for the market organizers, results in higher profit.

Case {P}. We are motivated by the fact that for a specific transaction t(c_B, c_S) all overhead costs, C_o = C_t − C_p, can be attributed to both the buyer and the seller. In this section, we assume that all miscellaneous costs can be verified by the PM. This assumption may not always hold – for example, shipping costs, if not paid for via the PM's payment system, typically cannot be provably verified. The PM would subtract expenses that cannot be verified from C_o before applying them to the seller's and buyer's reputation quantifiers.

We could take the stance that the buyer pays a fair market price for the product that includes C_o, and that the seller is the one paying for transaction costs (in an off-line market this is certainly the case, as the buyer pays the street price and the merchant offsets all costs to retain profit). In such a setting, after each committed transaction, the seller's sales limit increment due to a transaction t equals:

α_S(t(c_B, c_S)) = w_BS = C_o.    (3)

In essence, C_o is the “loss” that the seller incurs with respect to the fair market price. In another variant, the buyer and the seller could negotiate a shared application of the transaction cost during negotiation. This sets up a more general case for computing sales limits:

α_S(t(c_B, c_S)) = w_BS = ϱ C_o,    (4)

α_B(t(c_S, c_B)) = w_SB = (1 − ϱ) C_o,    (5)

where 0 ≤ ϱ ≤ 1 is a parameter that scales the application of costs to the buyer's and seller's sales limits. Note that in this case, an edge t(c_S, c_B) directed c_S → c_B is added to T with an appropriate weight factor.

Case {N.1}. Only the seller's reputation is affected by this case. Here, the seller's sales limit is reduced by:

α_S(t(c_B, c_S)) = w_BS = −C_p,    (6)

if the PM can verify all miscellaneous costs. If this is not the case, all non-verified costs are also subtracted from the seller's sales limit.
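The per-transaction sales-limit updates of equations (3)-(6) can be sketched as a single dispatch function. This is a minimal sketch of ours, with the outcome labels encoded as strings and the cost-sharing parameter ϱ of (4)-(5) written as `rho`:

```python
def sales_limit_delta(outcome: str, c_o: float, c_p: float,
                      rho: float = 1.0) -> tuple[float, float]:
    """Return (seller_delta, buyer_delta) applied to the sales limits
    for one committed transaction, following equations (3)-(6).

    outcome: 'P' (positive) or 'N1' (negative, no refund)
    c_o: verifiable overhead cost C_o = C_t - C_p
    c_p: product price C_p
    rho: share of C_o credited to the seller, 0 <= rho <= 1
    """
    if outcome == 'P':
        # Equations (4)-(5); rho = 1 recovers equation (3).
        return rho * c_o, (1.0 - rho) * c_o
    if outcome == 'N1':
        # Equation (6): the seller's limit drops by the product price.
        return -c_p, 0.0
    raise ValueError(f"unhandled outcome: {outcome}")

print(sales_limit_delta('P', c_o=17.0, c_p=100.0, rho=0.5))  # (8.5, 8.5)
print(sales_limit_delta('N1', c_o=17.0, c_p=100.0))          # (-100.0, 0.0)
```

Case {N.2} needs no delta of its own, since the transaction record is deleted outright; {N.3} reduces to one of the cases above after the ODR process concludes.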

Case {N.3}. Disputes in on-line transactions are typically resolved using the PM's or a third party's ODR system [7, 6]. Costs related to ODR are included in C_h as insurance against this outcome. Possible outcomes of the ODR process are:

(i) resolution in favor of one of the participants in the transaction; this case is then resolved as {P}, {N.1}, or {N.2} with respect to the sales limit.

(ii) impasse; a bargaining impasse occurs when the two sides negotiating an agreement are unable to reach an agreement and become deadlocked³. This situation is difficult to handle because possible solutions can hurt the party who is innocent. Certainly, entities who plan on participating in a transaction with either c_S or c_B should know that they have been involved in this dispute. As long as the dispute is at an impasse, the seller's sales limit is affected as defined in case {N.1}, (6), and the buyer's record shows participation in a deadlocked dispute.

(iii) lack of co-operation in the ODR process by the seller, c_S; typically a consequence of fraud. Such an outcome of a transaction reduces the sales limit of c_S as defined in case {N.1}, (6).

Fraud (case iii) is committed by sellers in the vast majority of cases. One way for a buyer to commit serious misconduct is to complain about a received product, agree to a return for refund, and then return a different, less valuable product to the seller. Such cases are exceptionally infrequent and would result in a criminal investigation⁴. Our system does not protect sellers from such events.

The overall sales limit for a specific consumer, c_i, is then computed as follows:

α_i = ∑_{∀t(c_j, c_i) ∈ T_i} w_ji + β_i,    (7)

where T_i is the subset of all edges in T with c_i as a destination. Equation (7) includes the reimbursement fund, β_i, that c_i establishes to insure customers against potential fraud (see Def. 2). Typically, a new seller would deposit a specific amount β_i(0) into her account with the PM to start up her reputation, i.e., an initial sales limit of α_i(0) = β_i(0). Succeeding sales would establish her sales limit. Then, c_i can balance the value of her reimbursement fund (this fund can be lowered or increased on demand) and thus adjust her sales limit to achieve a desirable selling power. The reimbursement fund is utilized by the PM in, for example, FIFO manner when a transaction fails.
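Equation (7) amounts to a signed sum over the consumer's incoming committed edges plus her deposit. A minimal sketch, assuming transactions are given as (source, destination, weight) triples (this representation and the names are ours):

```python
def sales_limit(user: str, edges: list[tuple[str, str, float]],
                deposit: float) -> float:
    """Compute alpha_i per equation (7): the sum of weights w_ji over
    all committed edges ending at `user`, plus the fund beta_i."""
    return deposit + sum(w for (_src, dst, w) in edges if dst == user)

edges = [
    ("brenda", "sam", 17.0),   # {P}: overhead C_o credited to Sam
    ("carol",  "sam", 12.0),   # another positive transaction
    ("dave",   "sam", -40.0),  # {N.1}: product price subtracted
]
print(sales_limit("sam", edges, deposit=25.0))  # 14.0
```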

3.3.1 Bidding

When a buyer, c_B, aims to bid for an item sold by c_S, the system presents several quantifiers to c_B: α_S, β_S, and the current pricing of all items sold by c_S: π_S = ∑_{∀p_i ∈ p_S} p_i. Based upon these quantifiers, c_B can decide upon the risk she is willing to take while bidding on an item sold by c_S that would increase the total price of his offering to π_new. For example, if π_new > α_S, c_B can ask c_S to increase his α_S by increasing his reimbursement fund, so that she can bid comfortably knowing that c_S cannot make a profit in case he decides never to deliver the product. Similarly, c_B can eliminate any risk in her bid by asking c_S to set β_S = π_new.

³ cf. Wikipedia.
⁴ Tracing perpetrators in this case is easier than in fraudulent transactions committed by sellers due to the undeniable availability of the buyer's physical address.
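The buyer-side decision just described reduces to comparing the prospective total pricing against α_S and β_S. A minimal sketch (the risk labels are ours):

```python
def bid_risk(pi_new: float, alpha_s: float, beta_s: float) -> str:
    """Classify the buyer's risk when her bid raises the seller's total
    pending pricing to pi_new, given a sales limit alpha_s >= beta_s."""
    if pi_new <= beta_s:
        return "fully reimbursable"   # equation (2) holds
    if pi_new <= alpha_s:
        return "no-profit guarantee"  # equation (1) holds: fraud cannot pay
    return "at risk"                  # adversary's incentive: pi_new - alpha_s

print(bid_risk(90.0, alpha_s=120.0, beta_s=100.0))   # fully reimbursable
print(bid_risk(110.0, alpha_s=120.0, beta_s=100.0))  # no-profit guarantee
print(bid_risk(150.0, alpha_s=120.0, beta_s=100.0))  # at risk
```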

3.4 Time-Sharing Sales Limits

On-line markets based upon reputation systems usually consist of a few users who are predominantly sellers and a remaining majority of users who are predominantly buyers. Thus, we offer a supplemental algorithm for computing sales limits with the objective of enabling consumers to establish higher (up to twice as large) sales limits at a risk. Higher sales limits in the economy translate into increased selling power, and hence higher profits for everyone involved.

Here, for a specific executed {P}-transaction t(c_B, c_S), c_B and c_S create an agreement to distribute the cost of t, C_o(t), on demand, so that at any time:

α_B(t) + α_S(t) = C_o(t),    (8)

where α_X(t) denotes the portion of the verifiable cost C_o(t) for transaction t that is used to build up the sales limit α_X = ∑_{∀t ∈ T_X} α_X(t) of c_X. User c_X participated in t either as a buyer or as a seller.

Under the agreement, if at a specific moment only one of the participants in t, say c_B, is selling an item, then α_B(t) = C_o(t), α_S(t) = 0. Note that this flexibility comes at a risk for c_S. If c_B commits a fraudulent transaction and her sales limit gets affected while she was using more than ½C_o(t) to boost α_B, the reduction in her sales limit may proportionally, and possibly entirely, reduce the amount C_o(t) shared by the two parties and thus affect α_S(t) as in (8). Consequently, when committing to t with time-shared costs, both participants agree to take on this risk. Since the reputation system offers preventive services against fraud, we anticipate that this risk is low and typically worth the increased selling power, in particular for new or infrequent system users.
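The invariant of equation (8) can be illustrated with a tiny reallocation helper; a minimal sketch of our own, assuming the shared cost is shifted toward whichever party is currently selling:

```python
def reallocate(c_o: float, requested: float) -> tuple[float, float]:
    """Split one transaction's shared cost C_o(t) between the two
    participants so that alpha_B(t) + alpha_S(t) = C_o(t) always holds.
    `requested` is the portion the currently-selling party asks for."""
    granted = min(requested, c_o)  # grant as much as is available
    remainder = c_o - granted      # the counterparty keeps the rest
    assert abs((granted + remainder) - c_o) < 1e-9  # equation (8)
    return granted, remainder

# If only c_B is selling, she may claim the full cost of the past
# transaction: alpha_B(t) = C_o(t), alpha_S(t) = 0.
print(reallocate(17.0, requested=17.0))  # (17.0, 0.0)
print(reallocate(17.0, requested=10.0))  # (10.0, 7.0)
```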

3.4.1 Sales Limit Computation

We now formally present how c_B and c_S time-share the transaction cost C_o(t). When a prospective buyer c_D wants to establish the sales limit of c_S, the system displays:

α_S = β_S + ∑_{∀t ∈ T_S} α_S(t),    (9)

where the values α_S(t) are “grown” as much as possible within each {P}-transaction t in T_S with time-sharing of sales limits. The costs of the remaining transactions within T_S are accumulated as defined in Subsection 3.3. In the remainder of this subsection, for simplicity and brevity, we assume that all transactions in T_S are time-shared and (∀S) β_S = 0. We first define the following scalar:

α̂_S = ∑_{∀t ∈ T_S} α_S(t),    (10)

and establish the goal of minimizing the potential market-wide profit from fraud:

R = ∑_S (π_S − α̂_S).    (11)

Since fraud is typically not a wide-spread phenomenon, there exists demand to address it locally within the market network. Thus, we want to establish a set of rules that govern the fairness of the cost allocations, i.e., that should encourage participants to use time-sharing by guaranteeing that no participant can take on a risk which is not proportional to her committed transactions.

Definition 3. The absolute fairness rule asserts that for every seller, c_S, in the market:

∑_{t ∈ T_S} α_S(t) − min{π_S, ½ ∑_{t ∈ T_S} C_o(t)} ≥ 0.    (12)

In other words, absolute fairness guarantees to each user that her sales limit will be built over time on demand and will equal at least one half of the sum of costs for all her committed transactions. We now show how to compute the values α_S(t) with absolute fairness while minimizing R.

Consider a flow network G(V, E) with a single source v_source and a single sink v_sink. Each market user, c_S, is assigned a node, v_S, which is connected to v_source with an edge of capacity π_S. For each transaction, t(c_S, c_B), we construct a node v_S,B and add two edges of infinite capacity: one from v_S to v_S,B and another from v_B to v_S,B. Finally, we connect each transaction node to v_sink with an edge of capacity C_o(t(c_S, c_B)). Figure 2 illustrates an exemplary flow network with four participants and five transactions.

A legitimate sharing of the cost of a transaction, C_o(t), between the seller and the buyer is such that the sum of shared sales limits obeys Equation (8).
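The construction can be exercised end-to-end with a plain Edmonds–Karp max-flow routine, used here for brevity in place of the faster push-relabel variants cited later; the node names and input encoding are ours:

```python
from collections import deque, defaultdict

INF = float("inf")

def build_network(pricing, transactions):
    """pricing: {user: pi_S}; transactions: list of (seller, buyer, C_o).
    Nodes: 'src', one per user, one per transaction, 'dst'."""
    cap = defaultdict(dict)
    for user, pi in pricing.items():
        cap["src"][user] = pi                    # capacity pi_S
    for k, (seller, buyer, c_o) in enumerate(transactions):
        tnode = f"t{k}"
        cap[seller][tnode] = INF                 # infinite-capacity edges
        cap[buyer][tnode] = INF
        cap[tnode]["dst"] = c_o                  # capacity C_o(t)
    return cap

def max_flow(cap, source, sink):
    """Edmonds-Karp: push flow along BFS-shortest augmenting paths.
    `cap` is a dict-of-dicts of residual capacities, mutated in place."""
    flow = 0.0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink                       # recover the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                        # update residual graph
            cap[u][v] -= bottleneck
            cap[v][u] = cap[v].get(u, 0.0) + bottleneck
        flow += bottleneck

# Two users sharing one past transaction of cost 10; each currently
# sells items worth 6, so at most 10 units of cost can back their limits.
cap = build_network({"sam": 6.0, "brenda": 6.0}, [("sam", "brenda", 10.0)])
print(max_flow(cap, "src", "dst"))  # 10.0
```

Reading off each α_S(t) then means inspecting how much flow each user node pushed into each of its transaction nodes, as Lemma 1 states.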

Lemma 1. Let f be a maximum flow in G(V, E). Setting α_S(t(c_S, c_B)) to be the flow in f from v_S through the edge (v_S,B, v_sink) is a legitimate set of values that minimizes R.

Proof. First, the proposed values for α_S(t) are legal values, as for every transaction, t(c_S, c_B), the flow through the edge (v_S,B, v_sink) is at most C_o(t). Now, every set of legitimate values for α_S(t) is also a legal flow in the network, as α_S(t(c_S, c_B)) can be pushed from the source through v_S and v_S,B to the sink. As ∑_{t ∈ T_S} α_S(t) ≤ π_S and α_S(t) + α_B(t) ≤ C_o(t), the flow in each edge is less than or equal to its capacity. Assuming, by contradiction, that there is a legitimate set of values α_S(t) = ζ_S(t) such that ∑_S ∑_{t ∈ T_S} ζ_S(t) > f(v_source, v_sink) is in contrast to f being the maximum flow in the network.

Maximum flow algorithms in networks with a single source and sink run in polynomial time in the network size. Complexity O(|V|² log(|V|²/|E|)) is achieved by the push-relabel algorithm that uses dynamic trees [20]. In our application, G is constructed so that the number of edges is of order |E| = O(|V|). Therefore, the push-relabel and Dinic's blocking-flow algorithm with dynamic trees [21] have an overall complexity of O(|V|² log(|V|)). A recent result by Goldberg and Rao reduces the complexity to O(|V|^{3/2} log(|V|)) [22]. According to Lemma 1, we know how to minimize R in polynomial time, and we are left with the problem of making the maximal flow obey the absolute fairness rule.

We address this problem near-optimally as follows. First, we replicate each transaction node in G K times and assign K-times-lower capacities to the edges going from the replicated nodes to the sink. In this new network, we run the classical max-flow algorithm by Edmonds and Karp [23] in a randomized manner. Once a specific participant c_S is selected as the first node in a shortest path through the residual network of G, we exclude its corresponding node, v_S, from the randomized round-robin until every node, v_X, that has participated in a transaction with c_S and such that (v_source, v_X) is not saturated, is visited at least once. The structure of the network helps us reduce the time needed to execute this algorithm, as most shortest paths from the source to the sink are of length 3, with each of them saturating either an edge from the source to a participant's node or an edge from a transaction node to the sink.

Moreover, after solving the max-flow problem in the network once, only a marginal computational effort is needed to account for possible new events: adding an item for sale, a successful new transaction, and a fraud, which results in removing a participant and all of her previously committed transactions from the network.

Figure 2: An example of a flow network with four participants and five transaction nodes.

• Adding an item for sale by c_S results in an increase of π_S. If α_S < π_S before this event, then no operation is needed. Otherwise, the algorithm continues pushing the additional flow through the network via at least one additional iteration.

• A successful new transaction t(c_S, c_B) results in reducing π_S and adding a transaction node and its K replicas. If α_S > π_S after this event, we push back α_S − π_S flow from the sink through the transaction nodes connected to v_S. The algorithm then continues to distribute the flow through the network using at least one additional iteration.

• Fraud by participant c_S results in removing the node v_S and all her transaction nodes t(c_S, c_X) from the network. This can negatively affect any participant c_X, and her portion of the flow in t(c_S, c_X) should be pushed back.

In a typical on-line market, the continuous chain of transactions results in one of the previous three steps being applied iteratively. Absolute fairness is positively correlated with K, at the price of higher performance overhead. Regardless of K, iteratively recomputing the maximum flow after the above-mentioned events negatively affects fairness by slowly accumulating fairness errors. To address this problem, we propose a full recomputation of the maximum flow after a specific number of events in the network. In addition, if one wishes to trade optimality for fast computation, the length of the shortest path from source to sink can be limited to a constant, resulting in a linear-time algorithm, which in a lightly constrained network could prove to be an efficient near-optimal option.

For large networks, which could potentially be decentralized, it is important to consider algorithms with complexity constant or sub-linear in the size of the network. The key to building such algorithms is approaching absolute fairness in a sub-optimal but localized manner. Here is an example of a localized cost redistribution algorithm:

\[
\alpha'_S(t) =
\begin{cases}
\tfrac{1}{2}\,Co(t), & \pi_S \ge \tfrac{1}{2}U_S \\[2pt]
\tfrac{1}{2}\,Co(t) - \tfrac{1}{|T_S|}\left(\tfrac{1}{2}U_S - \pi_S\right), & \text{else}
\end{cases}
\tag{13}
\]

where U_S = \sum_{t \in T_S} Co(t). We locally utilize the available costs on a per-transaction basis by setting αS(t) = Co(t) − α′B(t) for each transaction t(cS, cB) such that πS ≥ US/2, or αS(t) = α′S(t) otherwise. The algorithm could be generalized so that a certain neighborhood of cS and cB is exposed to further iterative redistribution. Localized algorithms allow for fast local updates of values when an event occurs. These updates involve only the value of a participant cS and the neighbors with whom cS has committed {P}-transactions with time-sharing of sales limits.
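A direct transcription of the redistribution rule in equation (13) can be sketched as follows; the numeric costs and offerings in the example are hypothetical.

```python
def local_redistribution(co_t, pi_s, costs_s):
    """Localized cost redistribution, equation (13).

    co_t    -- cost Co(t) of the transaction under consideration
    pi_s    -- current offering pi_S of participant c_S
    costs_s -- list of costs Co(t) over all transactions T_S of c_S
    Returns alpha'_S(t), the share of Co(t) credited to c_S.
    """
    u_s = sum(costs_s)                  # U_S = sum of Co(t) over T_S
    if pi_s >= u_s / 2:
        return co_t / 2                 # even split of the cost
    # otherwise, shift part of the share away from c_S
    return co_t / 2 - (u_s / 2 - pi_s) / len(costs_s)

# A seller with offering pi_S = 10 and transaction costs [4, 6, 10]
# (U_S = 20) receives exactly half of each cost; with pi_S = 6 the
# share of a cost-4 transaction drops to 4/2 - (10 - 6)/3.
print(local_redistribution(4, 10, [4, 6, 10]))   # -> 2.0
print(local_redistribution(4, 6, [4, 6, 10]))    # -> 0.6666666666666667
```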

Alternative fairness rules could be used while time-sharing sellers' sales limits, depending on the interests of the PM and the type of risk that it wishes to impose on market participants. One such example is max-min fairness, in which first the minimum α̂S that a participant cS achieves is maximized; second, the second-lowest α̂S′ that a participant cS′ achieves is maximized, and so on. Max-min fairness allows setting α̂S proportional to πS, thus encouraging a high level of transactions by some participants, while others, with lower π, take on more risk. For brevity and simplicity, we do not propose any specific max-min fairness algorithms in this paper. However, we note that minimizing R under this condition can be done using the already proposed flow network G, with the optimization goal of maximizing a multi-commodity flow, where every πS is considered a distinct commodity and every edge (vS,B, vsink) can transfer only two commodities. The problem of maximal flow in multi-commodity networks is notoriously hard; the best known solution is a (1 − ε)^−3-approximation that takes O(ε^−2 |E|^2 log |E|) time [24].
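Although we propose no concrete max-min algorithm here, the fairness criterion itself is easy to illustrate: the classical progressive-filling scheme below allocates a single shared capacity among demands in a max-min fair way. This is a toy single-resource setting, not the multi-commodity flow formulation discussed in the text.

```python
def max_min_fair(capacity, demands):
    """Progressive filling: repeatedly give every unsatisfied demand an
    equal share of the remaining capacity; small demands are fully met,
    and the leftover is split evenly among the large ones."""
    alloc = [0.0] * len(demands)
    # process demands from smallest to largest
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    remaining, left = float(capacity), len(demands)
    for i in order:
        share = remaining / left        # equal share of what is left
        alloc[i] = min(float(demands[i]), share)
        remaining -= alloc[i]
        left -= 1
    return alloc

# Demands of 2 and 3 are satisfied; the demand of 8 gets the leftover.
print(max_min_fair(10, [2, 3, 8]))      # -> [2.0, 3.0, 5.0]
```

Under this criterion no allocation can be raised without lowering that of a participant who already receives less.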

3.4.2 Discussion
From the perspective of the adversary, time-sharing sales limits could present an opportunity. By purchasing merchandise worth X monetary units, the adversary gains a maximum sales limit equal to the sum of all transaction fees for these purchases, e.g., at eBay this amounts to X/20. As we speculate that it is unlikely for an adversary to spend twenty units of her own wealth prior to gaining one unit of fraudulent profit, we conclude that time-sharing is an effective mechanism that at most doubles the selling power of economic entities in the market. In addition, the proposed reputation system could enable pricing the time-sharing of sales limits, and thus reach an equilibrium between the risk and the profit from time-sharing sales limits in a market.

From the theoretical point of view, one could evaluate the sensitivity of the increase in sales limits depending on the market constraint: average current offering, π, vs. average sum of fees from previously committed transactions, α. We assume that this analysis is of little practical importance for several reasons. First, in a well-established market, α ≫ π. Second, market networks are typically scale-free, with a small group of "frequent sellers" generating a large portion of the flow in the network. These nodes typically have an offering that is substantially smaller than their current sales limit. Thus, we expect that most of the demand for increased sales limits from smaller sellers will be sourced out from the "frequent sellers." Consequently, we anticipate that most sellers who demand an increase in their sales limits will most likely succeed in doubling them, with no adverse effects on trade in the global market. To that extent, we do not present an experimental study on the efficacy of the proposed algorithms for time-sharing of sales limits.

3.5 Summary
In summary, the proposed algorithms for computing sales limits in a reputation network use "transaction losses" such as shipping and handling, insurance, protocol manager fees, etc., not transaction totals, to build up a value that quantifies user trustworthiness. Because positive feedback can be easily fabricated, the value of the sold item does not affect the seller's sales limit. By proposing a suite of algorithms that trade off certain risk and optimized selling power against trustworthiness, we address the market demand for robust trades. Most importantly, our system is the first to offer deterministic guarantees to buyers in generally distrusted markets; as a consequence, our system facilitates trade, offers new risk-taking opportunities, and should boost market pricing due to increased system security.

4. SELLER'S FRAUD MODEL
One disadvantage of deterministic reputation is its conformance to the worst case. Since fraud is costly but still not frequent, we speculate that risk assessment technologies are still of value, in particular when bootstrapping the economic activity in the market. For instance, consider the scenario in which a buyer, cB, aims to bid for an item sold by cS. In this case, the model presented in the previous section provides guarantees to cB: as long as the price she offers, π, satisfies π ≤ βS, cB cannot be defrauded. If π > βS, cB can ask cS to increase his reimbursement fund. However, if cS does not have the resources necessary to increase his reimbursement fund, cB may not be willing to place a higher bid due to an increased likelihood of fraud. Such a system may end up in a bargaining impasse, which is, on average, a loss for all participants in the economic system. In order to facilitate bargaining through risk assessment, we introduce a novel price-dependent probabilistic reputation system. As a crucial component of this system, we introduce an additional reputation quantifier which we refer to as the seller's fraud model.

Definition 4. Seller's Fraud Model is defined as the probability γS(pS) that a seller cS decides to defraud her current buyers based upon the pricing pS of her product offering. The model is quantified using a function f():

\[
\gamma_S(p_S) = \Pr[c_S \text{ commits fraud} \mid p_S]
= f(\pi_S - \alpha_S)
= f\!\left[\sum_{p_i \in p_S} p_i - \alpha_S\right]
\tag{14}
\]

over the profit that cS would create if she were to disappear from the market after charging for all listed products pS.

Before bidding for a product at a certain price, the buyer would be presented with a model that estimates the probability of a fraudulent transaction given the current offering of the seller and its pricing. The tool would offer normalized risk assessment based upon f() trained on empirical market data. There exist numerous possibilities for creating efficient user interfaces to deliver the resulting probability, e.g., the buyer would enter the considered price into an HTML form field and observe the probability in question using a graphical display such as a pointer on a log-γ scale.

An example of an expected γ-model is illustrated in Figure 3. We observe that the probability is zero⁵ for πS ≤ βS, approximately zero for βS < πS ≤ αS, and increasing in phase-transition style with the increase of the seller's profits. The resulting probability converges towards Γ, the probability that a person commits fraud regardless of payout. This convergence is typical for any practical range of prices over πS − αS.
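The shape just described (zero up to βS, a phase transition, and convergence to Γ) can be mimicked with, for example, a logistic curve. The parameter values below are invented for illustration; in practice f() would be trained on empirical market data.

```python
import math

# Hypothetical parameters: GAMMA is the payout-independent fraud
# probability the curve converges to; X0 and SCALE control where and
# how sharply the phase transition occurs (monetary units of profit).
GAMMA, X0, SCALE = 0.05, 1000.0, 200.0

def gamma_model(prices, alpha_s, beta_s):
    """Estimated fraud probability gamma_S given the seller's listed
    prices p_S, accumulated fees alpha_S, and guaranteed limit beta_S."""
    profit = sum(prices) - alpha_s      # pi_S - alpha_S, as in eq. (14)
    if sum(prices) <= beta_s:
        return 0.0                      # buyer is fully reimbursable
    return GAMMA / (1.0 + math.exp(-(profit - X0) / SCALE))

# Below the guaranteed limit the risk is zero; far above it, the
# estimate approaches GAMMA.
print(gamma_model([50], alpha_s=100, beta_s=200))              # -> 0.0
print(round(gamma_model([20000], alpha_s=500, beta_s=200), 4))  # -> 0.05
```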


Figure 3: An example of the seller’s fraud model.

4.1 Discussion
Interestingly, our γ-model considers neither the counts of positive and negative responses from previous customers nor any topology analysis of the transaction graph G, tools typical of traditional reputation systems. It does not need to. The fact that a seller's existence in the economic ecosystem is reduced to, and accurately represented by, only two parameters, α and β, renders other details about previous transactions, such as the structure of the reputation tree, irrelevant. A fraudulent seller has only one objective: to maximize profits during her existence in the on-line market. In most realistic scenarios this objective is amended with the desire to slip past detectors that would trigger a criminal investigation. From that point of view, the only crucial statistic is the probability that, given a specific payout, the seller decides to defraud her current buyers.

Obtaining the function f() empirically could be a difficult task. The problem lies in the fact that not all sellers are equal, and some form of seller classification could be required. For example, sellers coming from countries with drastically different income levels should have different financial motives to commit a fraudulent transaction. Considering these facts, it is more realistic to expect that sellers are classified to fit different behavioral models. Based upon the seller's classification, the PM would select the appropriate γ-model and present it to prospective buyers. Classification algorithms have been researched well in several subfields of computer science, hence we point the interested reader to the related work [11]. For brevity and simplicity, we do not focus on this aspect of our technology.

⁵In the strictest sense this value is only approximately zero, due to unexpected events that could prevent the seller from completing the transaction and that cannot be considered fraud.

Finally, users who choose to time-share their transaction costs would use the γ-model in the same fashion as conservative sellers. Here, the risk is borne not only by the buyer but by the collaborating sellers as well. To warn a prospective buyer about the additional risk, the reputation system could show both the conservative and the time-shared α quantifier. The buyer, fully informed, can then assess the true risk and proceed with the pricing.

5. EMPIRICAL ANALYSIS
In this section, we present results from our empirical study. First, we describe the sampling method used to obtain a snapshot of real-world economic activity. Then, we present key statistics about our snapshot, followed by the main result: a seller's "fraud" model empirically obtained from real transactions. We built our datasets from public information available on existing on-line marketplaces. We remind the reader that payment channels on existing on-line marketplaces are typically left to the user's free choice and are thus unsupervised. Based on the data we reviewed, in our empirical study we used only PM fees to build sellers' sales limits. Other transaction fees, such as shipping and taxes, were not included in the construction of sales limits, as modern on-line marketplaces typically do not receive receipts for such services.

5.1 Marketplace Sampling
We did not conduct our empirical study using a sampling technique that would model a marketplace network as accurately as possible. The primary objective of our work is to present an estimate of the seller's fraud model, f(), not to provide accurate modeling of marketplace networks. True random sampling of a large network is difficult: random walks, one of the most dominant techniques for this task, do not capture graph statistics accurately, as they tend to visit well-connected nodes more often. In relatively sparse networks, even techniques that aim at uniform graph sampling are not sufficiently precise [19]. We acknowledge that it is difficult to create a sample of a large marketplace network that precisely corresponds to the true activity in its ecosystem, and conclude that accurate statistics could be provided only by the marketplaces themselves.

Still, for the purpose of validating the ideas proposed in this paper, we decided to sample marketplaces with the objective of covering as many sellers as possible, with no particular browsing objective (i.e., a greedy max-cardinality subset). Thus, we hoped that the statistics of a large subnet would provide insight into quantifying f(). We followed a simple approach for extracting information from a given marketplace Web-site: we implemented a Web-crawler that automated the process, comprised of two stages.

In the first, data collection, stage, starting with an arbitrarily chosen user, cS, we would gather her key statistics: both positive and negative fan-in and fan-out and the current product offering, pS. Extracting πS was sped up by recording only the first 200 products of the seller's current product offering as reported by the marketplace. As the total number of items for sale was known, we extrapolated πS based upon the average price of the first 200 products. All monetary values listed in Euro and Pound sterling were converted to US dollars. For simplicity, we ignored transactions credited in other currencies.

In the second, traversal, stage, we performed a breadth-first search of the user's feedback pages, collecting information about every single transaction she performed, negative or not. For each transaction, we collected the anonymized users' IDs, the related transaction amounts with associated time-stamps, and the type of feedback reported (positive, negative, or neutral). The recursive traversal was done in breadth-first manner with no exceptions.
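The traversal stage can be sketched as a standard breadth-first walk over the user graph; fetch_feedback below is a hypothetical placeholder for the download-and-parse step of the crawler, which we omit.

```python
from collections import deque

def crawl(seed_user, fetch_feedback, max_users):
    """Breadth-first traversal of feedback pages starting at seed_user.

    fetch_feedback(user) is assumed to return a list of
    (counterparty, amount, timestamp, feedback) tuples; it stands in
    for the actual Web-page scraping performed by the crawler.
    """
    transactions, seen = [], {seed_user}
    queue = deque([seed_user])
    while queue and len(seen) < max_users:
        user = queue.popleft()
        for other, amount, ts, feedback in fetch_feedback(user):
            transactions.append((user, other, amount, ts, feedback))
            if other not in seen:       # enqueue newly discovered users
                seen.add(other)
                queue.append(other)
    return transactions

# Tiny stand-in for a marketplace with three users.
fake_pages = {
    "a": [("b", 10.0, "2008-01-01", "positive")],
    "b": [("c", 5.0, "2008-01-02", "negative")],
    "c": [],
}
txs = crawl("a", lambda u: fake_pages.get(u, []), max_users=100)
print(len(txs))                         # -> 2
```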

5.2 Snapshot Statistics
During our sampling of on-line marketplaces, we collected data on 10,096,731 transactions worth over US$270M. Our Web-crawler downloaded and parsed a total of 140,000 Web-pages, collecting transaction information for a large number of anonymized user accounts. We partitioned the users encompassed by our extracted subnets into complete and incomplete. For the first class, we focused on computing/estimating the information on their πi and αi; thus, we extracted their full set of transactions. The cardinality of the set of complete users was 44,830. In order to record all their transactions, we needed to include information about the incomplete users, i.e., users who have participated in at least one transaction with a complete user. The total number of incomplete users in our dataset was 5,274,759. Transactions between two incomplete users were not included in the subnet. Thus, the total size of the extracted subnet was 5,319,589 accounts. Statistics about the fan-in and fan-out of nodes in the subnet are presented in Figure 4. One can observe, as expected, that a small number of sellers generates a large number of transactions in the marketplaces.

Figure 4: Histogram of the number of transactions nodes have achieved as buyers and sellers within the captured subnet.

Figures 5 and 6 describe the number of accounts in our sample that had a specific ratio and total number of transactions with negative feedback, respectively. We observe, as expected, that most sellers do not initiate transactions that result in negative feedback. However, within the mass of users, there is a relatively large group of users who have recently generated a substantial volume of negative feedback.

Figure 5: Histogram of the ratio of transactions that resulted in negative feedback within the captured subnet.

Figure 6: Histogram of the number of transactions that resulted in negative feedback within the captured subnet.

Figure 7 illustrates the histogram of prices for products exchanged in transactions that resulted in negative feedback within the captured subnet. This plot is interesting as it points to the relatively low price of merchandise that is sold to the dissatisfaction of buyers; it also suggests that sellers aim at combining smaller profits from several transactions with negative feedback. In the subsequent subsection we conclude that our "fraud model" is particularly tailored to address this type of malicious activity.

Finally, we point to Figure 8, which plots the ratio of transactions that involve a product of a certain price. We plot two sets of datapoints: for all transactions and for transactions with negative feedback only. One can observe from the curves that merchandise that results in a transaction with negative feedback typically has pricing similar to that of positive-feedback transactions. This is reasonable, as fraudulent transactions usually have "fair" or slightly lower pricing on the product bait in order to attract buyers.

5.3 Seller's "Fraud" Model
From the sampled subnet with approximately 10 million transactions, we constructed a statistical model for the seller's fraud model presented in Section 4. We must emphasize that on-line marketplaces typically do not report actual fraudulent transactions on their Web-sites; rather, they report negative feedback. Therefore, to be precise, we stress that our model accurately represents the sellers' negative-feedback data from the sample. We speculate that it is strongly correlated with the actual fraud model; certainly,

Figure 7: Histogram of prices for products exchanged in transactions that resulted in negative feedback within the captured subnet.

Figure 8: Ratio of transactions that result in negative and any feedback vs. the price of the product within the captured subnet.

a significant rise in the pdf of the obtained f() model occurs at approximately the same value as the costs of fraudulent transactions reported to the NFIC and IC3.

Figure 9 presents a set of points that correspond to the log-fraction of transactions for which πi − αi corresponded to the abscissa and for which sellers received negative feedback. We have also provided a 6th-degree interpolant for the collected data that outlines the probability of interest in a visually clear manner. Note that the curve resembles the geometry anticipated in Figure 3. The variance of the interpolated curve is greater at higher amounts (>US$10,000) due to lack of data. Finally, the estimated model confirms the speculation that the probability of a fraudulent transaction rises strongly at πi − αi > US$1000 and reaches more than 10 times higher values at πi − αi ≈ US$30K compared to transactions executed when πi − αi < US$1000. Informing consumers about this trend is the least that could be done, while the proposed remedies, such as presenting a buyer with the seller's {αi, βi} parameters, would likely further improve robustness to fraudulent activity in on-line marketplaces. Since systems that aim at preventing fraud by pricing reputation commonly demand economic bootstrapping, in Section 3.4 we proposed a technique for sharing reputation among sellers that boosts their selling power at certain quantifiable risks.
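A curve of the kind shown in Figure 9 can be reproduced with an ordinary least-squares polynomial fit; the datapoints below are synthetic stand-ins for the sampled log-fractions, invented purely for illustration.

```python
import numpy as np

# Synthetic stand-ins for the measured points: x is log10(pi_i - alpha_i),
# y is log10 of the fraction of negative-feedback transactions.
x = np.array([0.0, 1.0, 2.0, 3.0, 3.5, 4.0, 4.5, 5.0])
y = np.array([-4.5, -4.3, -4.0, -3.2, -2.4, -1.6, -0.9, -0.4])

# Fit a 6th-degree polynomial, the degree used for the interpolant
# in Figure 9, and wrap it for evaluation.
coeffs = np.polyfit(x, y, deg=6)
model = np.poly1d(coeffs)

# The fitted curve can then be evaluated at any profit level, e.g. to
# display log10(gamma) for a prospective transaction.
print(len(coeffs))                      # -> 7
print(float(model(4.5)) > float(model(3.0)))   # risk grows with profit
```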

Figure 9: The seller's "fraud," i.e., negative feedback, model extrapolated from 10 million sampled transactions within the captured subnet. The ordinate is log10(γ) = log10[f(πi − αi)]; the plot shows the actual datapoints and a 6th-degree polynomial interpolant.

6. SUMMARY
In this paper, we propose a methodology that aims at pricing reputation from the seller's perspective. Buyers need not evaluate sellers' transaction trees and check whether previous transactions were fabricated. We price reputation at three levels easily accessible to a common user:

• a limit on pricing that guarantees that the buyer will receive reimbursement in case the seller commits fraud,

• a limit on pricing that states that the seller will not generate any profit from her existence in the marketplace if she commits fraud on all her current product offerings and disappears from the economic ecosystem,

• a user interface that can quantify for a buyer the risk of a fraudulent transaction when offering a higher price than the previous two limits.

In our system, even in the case of stolen identity, an adversary cannot produce illegal profit unless a buyer decides to pay over the suggested sales limits. We obtained relatively large subnets from actual on-line marketplaces in order to empirically quantify the key parameters of our scheme and demonstrate how efficient it could be if deployed in an existing on-line marketplace.

7. REFERENCES
[1] eBay Inc. http://www.ebay.com.
[2] Chat with Rob Chesnut, Vice President of eBay's Trust & Safety Dept. http://pages.ebay.com/event/robc.
[3] Internet Crime Complaint Center. Internet Crime Report for 2006. http://www.ic3.gov/media//annualreport/2006_IC3Report.pdf.
[4] eBay Inc. Annual Report 2006. http://investor.ebay.com/annuals.cfm.
[5] The Amazon Marketplace. http://www.amazon.com.
[6] S. Abernethy. Building Large-Scale Online Dispute Resolution & Trustmark Systems. UNECE Forum on ODR, 2003.
[7] E. Katsh and L. Wing. Ten Years of Online Dispute Resolution (ODR): Looking at the Past and Constructing the Future. The University of Toledo Law Review, Vol. 38, No. 1, pp. 19–47, 2006.
[8] National Fraud Information Center. Top 10 Internet Scam Trends from NCL's Fraud Center, 2006. http://fraud.org/stats/2006/internet.pdf.
[9] I. MacInnes. Causes of Disputes in Online Auctions. Electronic Markets, Vol. 15, No. 2, pp. 146–157, 2005.
[10] E. Katsh, et al. E-commerce, E-disputes, and E-dispute Resolution: In the Shadow of "eBay Law." Ohio State Journal on Dispute Resolution, Vol. 15, No. 3, pp. 705–734, 2000.
[11] L. Breiman, et al. Classification and Regression Trees. Wadsworth & Brooks, 1984.
[12] S. D. Kamvar, et al. The EigenTrust Algorithm for Reputation Management in P2P Networks. WWW, 2003.
[13] L. Xiong, et al. PeerTrust: Supporting Reputation-Based Trust in P2P Communities. IEEE Trans. on Knowledge and Data Engineering, Vol. 16, No. 7, 2004.
[14] G. Swamynathan. Reputation Management in Decentralized Networks. Technical report, UCSB, 2007.
[15] eBay dispute resolution. http://pages.ebay.com//services/buyandsell/disputeres.html.
[16] I. MacInnes. Understanding Disputes in Online Auctions. eCommerce Conference, 2004.
[17] C. Dellarocas. Immunizing Online Reputation Reporting Systems against Unfair Ratings and Discriminatory Behavior. ACM EC, 2000.
[18] F. Cornelli, et al. Choosing Reputable Servents in a P2P Network. WWW, 2002.
[19] M. R. Henzinger, et al. On Near-Uniform URL Sampling. International World Wide Web Conference on Computer Networks, 2000.
[20] A. V. Goldberg and R. E. Tarjan. A New Approach to the Maximum Flow Problem. ACM STOC, 1986.
[21] E. A. Dinic. Algorithm for Solution of a Problem of Maximum Flow in Networks with Power Estimation. Soviet Math. Doklady, 1970.
[22] A. V. Goldberg and S. Rao. Beyond the Flow Decomposition Barrier. J. of the ACM, Vol. 45, No. 5, pp. 783–797, 1998.
[23] J. Edmonds and R. M. Karp. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. J. of the ACM, Vol. 19, No. 2, pp. 248–264, 1972.
[24] M. Allalouf and Y. Shavitt. Centralized and Distributed Approximation Algorithms for Routing and Weighted Max-Min Fair Bandwidth Allocation. IEEE Workshop on High Performance Switching and Routing, 2005.

Appendix – Related Work
The open and anonymous nature of popular on-line markets such as eBay [1] and Amazon.com [5] makes them susceptible to numerous adversarial activities. Reputation has been widely accepted as a means of establishing trust among participants to prevent malicious use.

eBay
eBay employs a simple feedback-based trust system. After each transaction, both the buyer and the seller can rate each other by assigning one of three possible ratings: satisfactory, neutral, and unsatisfactory. These ratings build up a feedback score (total number of positive transactions minus the total number of negative transactions), which serves as an indicator of a user's transaction history and can represent a valuable warning to new users who wish to interact with rated users. The higher the feedback score, the higher the reputation of the user. In addition, for each user, eBay reports the number of her transactions both as a buyer and as a seller, the percentage of positive feedback, and the actual feedback from the users with whom she interacted. Using such information, users can decide whether to participate in a transaction with a specific user and also evaluate its risk. In [17], Dellarocas presents a survey of game-theoretic and economic models for reputation management. Dellarocas also showed that reputation systems based only on the sum of negative and positive ratings are vulnerable to unfair-rating attacks.

Reputation in Peer-to-Peer Systems
Several techniques have been proposed to quantify reputation in peer-to-peer systems. A survey of several existing methods can be found in [14]. P2PRep, proposed by Cornelli et al. [18], focuses on providing a framework for reputation management without giving an explicit definition of a trust metric. EigenTrust [12] presents a method to minimize the impact of malicious peers on the performance of feedback-based reputation systems. EigenTrust associates each peer with a global trust value calculated using the left principal eigenvector of a matrix of normalized local trust values. Thus, EigenTrust takes into account the transaction history of a user to compute her reputation score, and uses such information to identify malicious users. EigenTrust also introduces the notion of transitive trust: if a peer A trusts any peer X, it also trusts the peers trusted by X. PeerTrust [13] computes reputation scores as a function of five factors: the feedback obtained from other peers, the total number of transactions, the credibility of the feedback source, a transaction context factor, and a community context factor. The transaction context factor helps model transaction properties such as the size of the transaction, category, and time-stamp, and the community context factor helps provide incentives to obtain feedback.

