
Modeling Choice and Valuation in Decision Experiments

Graham Loomes
University of Warwick

This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for participants to perceive probabilities and payoffs in a particular way. This model organizes more of the data than any other extant model and generates a number of novel testable implications which are examined with new data.

Keywords: relative judgment, similarity, decision anomalies, intransitivity

This article develops a parsimonious descriptive model of risky choice—the perceived relative argument model (PRAM)—that can organize a good deal of the most influential experimental evidence of systematic departures from conventional decision theory. Focusing on the kind of tasks which constitute much of the evidence—that is, choices between pairs of lotteries involving no more than three outcomes and/or valuations of such lotteries—it is shown that individuals who behave according to PRAM are liable to violate all but one of the key axioms of rational choice, the only exception being transparent dominance.

The article is organized as follows: Section 1 sets up the basic framework. Section 2 models the perception of probabilities and shows that one simple proposition about the way that probabilities are handled is enough to ensure that the axioms of independence, betweenness, and transitivity are all bound to fail in one way or another. This section identifies a number of predicted regularities which are at odds with those rank-dependent models that are currently regarded as offering the best alternative to standard expected utility theory. Section 3 models an analogous proposition about the way that payoffs are perceived, and this allows the model to explain a number of other regularities which cannot be accommodated by expected utility theory or any of its main rivals. Section 4 discusses the relationship between PRAM and a number of other axiomatic or behavioral models which have attempted to organize various subsets of regularities. Section 5 considers results from a fresh experiment designed specifically to examine various ways in which PRAM differs from the models which currently dominate the literature. Section 6 concludes.

1. The Modeling Framework

Before outlining the particular framework for this model, here are two remarks:

First, PRAM is essentially a descriptive model, intended to show how some very simple propositions about perception and judgment can explain many well-known systematic departures from standard theory—and predict some new ones. To this end, the model is specified in a particular form from which various implications are derived. However, it is important to keep in mind that this is a model of decisions often made quite quickly1 and on the basis of perceptions rather than after long deliberation involving complex calculation. The structure of the model is therefore intended to capture tendencies in the ways perceptions are formed and judgments are made: it is not suggested that people actually make calculations strictly according to the formulae but, rather, that the formulae capture key features of the ways in which decision parameters are perceived and processed.

Second, the exposition makes several simplifying assumptions. In particular, although actual responses are susceptible to noise and error, the exposition abstracts from that and presents a deterministic model.2 It also abstracts from failures of procedure invariance and framing effects (Tversky & Kahneman, 1986). Such effects undoubtedly influence behavior, but the claim being made in this article is that we can explain many regularities without needing to invoke those additional effects. On the other hand, the model rests on just two basic propositions involving just two free parameters, and it would be surprising if this were sufficient to account for all of the many regularities observed in the relevant class of decision experiments. But that is not the claim. This is not a theory of everything. And, as we make clear in due course, there is at least one seemingly systematic effect not accounted for by this two-parameter model. Nevertheless, the two basic components of the present model combine to organize many more of the known regularities than any other single model currently available and, in the course of doing so, help us to identify where—and more importantly, perhaps, why—those other models are liable to fall short.

1 Many experiments ask participants to make numerous decisions within single sessions, and once they become familiar with the tasks, many participants spend only a few seconds on each one: For example, Moffatt (2005) analyzed a pairwise choice dataset where mean decision times mostly ranged between 3 s and 8 s per choice. This may be a somewhat extreme case, but it would not be uncommon to find the great majority of participants taking no more than 15 s or 20 s to process most of the kinds of decisions presented in many choice/valuation experiments.

2 Sections 4 and 6 discuss (briefly) the question of extending the model to allow for the stochastic nature of decision behavior. As noted above, we are dealing with data generated quite quickly and somewhat impressionistically, and it would be surprising if there were not some stochastic component in such data; but the model abstracts from that and focuses on what may be regarded as central tendencies.

My thanks to Michele Bernasconi, Han Bleichrodt, David Buschena, Colin Camerer, Robin Cubitt, Enrico Diecidue, Andrea Isoni, Jon Leland, George Loewenstein, Jose Luis Pinto, Uzi Segal, Chris Starmer, Robert Sugden, Kei Tsutsui, and Peter Wakker and to participants in workshops/seminars in Arizona, Carnegie Mellon, Paris 1, University of East Anglia, Venice, Warwick, and York for their helpful comments and suggestions. I am grateful to the Centre for Behavioral and Experimental Social Science at the University of East Anglia for resources and facilities to carry out the experiment reported here.

Correspondence concerning this article should be addressed to Graham Loomes, Department of Economics, University of Warwick, Coventry, United Kingdom CV4 7AL. E-mail: [email protected]

Psychological Review, 2010, Vol. 117, No. 3, 902–924. © 2010 American Psychological Association. 0033-295X/10/$12.00 DOI: 10.1037/a0019807



Having made those preliminary points, let us now turn to the formulation of the model. The bulk of the experimental data used to test theories of risk come from decisions that can be represented in terms of pairs of alternative lotteries, each involving no more than three monetary payoffs. Figure 1 shows a basic template for such cases. Payoffs3 are x3 > x2 > x1 ≥ 0, and the probabilities of each payoff under the (safer) lottery S are, respectively, p3, p2, and p1, whereas the corresponding probabilities for the (riskier) lottery R are q3, q2, and q1, with q3 ≥ p3, q2 ≤ p2, and q1 ≥ p1.

Although this template is broad enough to accommodate any pairwise choice involving up to three payoffs, the great majority of experimental tasks involve simpler formats—most commonly, those where S is a sure thing (i.e., where p2 = 1) or else where S is a two-payoff lottery being compared with a two-payoff R lottery. As we see later, it is also possible to analyze various simple equivalence tasks within this framework. But the initial focus is on pairwise choice.

Any such choice can be seen as a judgment between two arguments pulling in opposite directions. The argument in favor of R is that it offers some greater chance—the difference between q3 and p3—of getting x3 rather than x2. Against that, the argument in favor of S is that it offers a greater chance—in this case, the difference between q1 and p1—of getting x2 rather than x1.

Most decision theories propose, in effect, that choice depends on the relative force of those competing arguments. For example, under expected utility theory (EUT), the advantage that R offers over S on the payoff dimension is given by the subjective difference between x3 and x2—that is, u(x3) − u(x2), where u(.) is a von Neumann-Morgenstern utility function—which is weighted by the q3 − p3 probability associated with that advantage. Correspondingly, the advantage that S offers over R is the utility difference u(x2) − u(x1), weighted by the q1 − p1 probability associated with that difference. Denoting strict preference by ≻ and indifference by ∼, EUT entails

S ≻/∼/≺ R ⇔ (q1 − p1)[u(x2) − u(x1)] >/=/< (q3 − p3)[u(x3) − u(x2)].  (1)

Alternatively, Tversky and Kahneman's (1992) cumulative prospect theory (CPT) modifies this expression in two ways: It draws the subjective values of payoffs from a value function v(.) rather than a standard utility function u(.); and it involves the nonlinear transformation of probabilities into decision weights, here denoted by π(.). Thus, for CPT we have

S ≻/∼/≺ R ⇔ [π(q1) − π(p1)][v(x2) − v(x1)] >/=/< [π(q3) − π(p3)][v(x3) − v(x2)].  (2)

Under both EUT and CPT, it is as if an individual maps each payoff to some subjective utility/value, weights each of these by (some function of) its probability, and thereby arrives at an overall evaluation or "score" for each lottery. Both Expressions 1 and 2 entail choosing whichever lottery is assigned the higher score. Because each lottery's score is determined entirely by the interaction between the decision maker's preferences and the characteristics of that particular lottery (i.e., each lottery's score is independent of any other lotteries in the available choice set), such models guarantee respect for transitivity. Moreover, the functions which map payoffs to subjective values and map probabilities to decision weights may be specified in ways which guarantee respect for monotonicity and first-order stochastic dominance.4
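To make the point about independent scoring concrete, here is a minimal sketch (my own illustration, not part of the article; the utility function and the two lotteries are arbitrary examples) of how any model of this class reduces pairwise choice to comparing two separately computed scores, which is what makes transitivity automatic.

```python
# Minimal sketch of "independent scoring": each lottery's value depends only on
# its own payoffs and probabilities and the agent's utility function, so choice
# reduces to comparing real-valued scores -- a relation that is always transitive.

def eu_score(lottery, u):
    """Expected utility of a lottery given as a list of (payoff, probability) pairs."""
    return sum(prob * u(payoff) for payoff, prob in lottery)

u = lambda x: x ** 0.5            # an arbitrary concave utility function
S = [(30, 1.0)]                   # a sure 30
R = [(40, 0.8), (0, 0.2)]         # a 0.8 chance of 40, otherwise 0

print(eu_score(S, u), eu_score(R, u), eu_score(S, u) > eu_score(R, u))
```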

However, for any theory to perform well descriptively, its structure needs to correspond with the way participants perceive stimuli and act on those perceptions. If judgmental processes run counter to some feature(s) of a theory, the observed data are liable to diverge systematically from the implications of that theory. It is a central proposition of this article that participants' perceptions and judgments are liable to operate in ways which run counter to the assumptions underpinning most decision theories, including EUT and CPT. In particular, there is much psychological evidence suggesting that many people do not evaluate alternatives entirely independently of one another and purely on the basis of the "absolute" levels of their attributes, but that their judgments and choices may also be influenced to some extent by "relative" considerations (see, e.g., Stewart, Chater, Stott, & Reimers, 2003). In the context of pairwise choices between lotteries, this may entail individuals having their perceptions of both probabilities and payoffs systematically affected by such considerations.

3 The great majority of experiments involve nonnegative payoffs. The framework can accommodate negative amounts (i.e., losses); but to avoid complicating the exposition unnecessarily, the initial focus is upon nonnegative payoffs, and the issue of losses is addressed later.

4 The original form of prospect theory (Kahneman & Tversky, 1979) involved a method of transforming probabilities into decision weights which allowed violations of first-order stochastic dominance, an implication to which some commentators were averse. Quiggin (1982) proposed a more complex rank-dependent method of transforming probabilities into decision weights, which seemed to preserve the broad spirit of the original while ensuring respect for first-order stochastic dominance. CPT uses a version of this method.

Figure 1. The basic pairwise choice format. (S assigns probabilities p3, p2, p1 and R assigns probabilities q3, q2, q1 to the payoffs x3, x2, x1.)



To help bridge from a conventional theory such as EUT to a model such as PRAM, which allows for between-lottery relative considerations, rearrange Expression 1 as follows:

S ≻/∼/≺ R ⇔ (q1 − p1)/(q3 − p3) >/=/< [u(x3) − u(x2)]/[u(x2) − u(x1)].  (3)

A verbal interpretation of this is "S is judged preferable to/indifferent to/less preferable than R according to whether the perceived relative argument for S versus R on the probability dimension—that is, for EUT, (q1 − p1)/(q3 − p3)—is greater than/equal to/less than the perceived relative argument for R versus S on the payoff dimension—in the case of EUT, [u(x3) − u(x2)]/[u(x2) − u(x1)]."

However, suppose we rewrite that expression in more general terms as follows:

S ≻/∼/≺ R ⇔ φ(bS, bR) >/=/< ξ(yR, yS),  (4)

where φ(bS, bR) is some function representing the perceived relative argument for S versus R on the probability dimension, whereas ξ(yR, yS) is a function giving the perceived relative argument for R versus S on the payoff dimension.

Expression 4 is the key to the analysis in this article. What distinguishes any particular decision theory from any other(s) is either the assumptions it makes about φ(bS, bR) or the assumptions it makes about ξ(yR, yS), or possibly both.

For example, under EUT, bS is (q1 − p1), whereas bR is (q3 − p3), and the functional relationship between them is given by φ(bS, bR) = bS/bR, that is, by the ratio of those two probability differences. On the payoff dimension under EUT, yR = [u(x3) − u(x2)], yS = [u(x2) − u(x1)], and ξ(yR, yS) is the ratio between those two differences (i.e., yR/yS). EUT's general decision rule can thus be written as

S ≻/∼/≺ R ⇔ bS/bR >/=/< yR/yS.  (5)
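As an illustration of Expression 5 (again my own sketch; the concave utility function is an arbitrary assumption), the following code computes bS, bR, yR, and yS from the Figure 1 template and applies EUT's ratio rule. Because the b-ratio and the y-ratio are both unchanged by common-ratio scaling, the rule returns the same verdict for a scaled-up pair and its scaled-down counterpart.

```python
# EUT's decision rule in the notation of Expression 5: S is chosen over R
# iff bS/bR > yR/yS, where bS = q1 - p1, bR = q3 - p3,
# yR = u(x3) - u(x2), and yS = u(x2) - u(x1).

def eut_choice(payoffs, p, q, u):
    """payoffs = (x3, x2, x1); p and q give the probabilities of (x3, x2, x1) under S and R."""
    x3, x2, x1 = payoffs
    bS, bR = q[2] - p[2], q[0] - p[0]
    yR, yS = u(x3) - u(x2), u(x2) - u(x1)
    if bS / bR > yR / yS:
        return "S"
    return "R" if bS / bR < yR / yS else "indifferent"

u = lambda v: v ** 0.5   # an arbitrary concave utility function
# A scaled-up pair and its common-ratio scaled-down counterpart (the pairs used in Section 2.1):
print(eut_choice((40, 30, 0), (0, 1, 0), (0.8, 0, 0.2), u))
print(eut_choice((40, 30, 0), (0, 0.25, 0.75), (0.2, 0, 0.8), u))
```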

CPT uses the ratio format as in Expression 5 but makes somewhat different assumptions about the b's and y's. In the case of EUT, each u(xi) value is determined independently of any other payoff and purely by the interaction between the nature of the particular xi and a decision maker's tastes as represented by his or her utility function u(.). The same is true for CPT, except that u(.) is replaced by v(.), where v(.) measures the subjective value of each payoff expressed as a gain or loss relative to some reference point. In the absence of any guidance about how reference points may change from one decision to another, each v(xi) is also determined independently of any other payoff or lottery and purely on the basis of the interaction between the particular xi and the decision maker's tastes.5 In this respect, CPT is not materially different from EUT.

The key distinction between CPT and EUT relates to the way the two models deal with the probability dimension. Under EUT, each probability takes its face value, so that bS is (q1 − p1) and bR is (q3 − p3), whereas under CPT the probabilities are transformed nonlinearly to give bS = [π(q1) − π(p1)] and bR = [π(q3) − π(p3)], allowing the ratio bS/bR to vary in ways that are disallowed by EUT's independence axiom and thereby permitting certain systematic "violations" of independence.

Because all of the π(.)'s in CPT are derived via an algorithm that operates entirely within their respective lotteries on the basis of the rank of the payoff with which they are associated, CPT shares with EUT the implication that each lottery can be assigned an overall subjective value reflecting the interaction of that lottery's characteristics with the decision maker's tastes. This being the case, transitivity is entailed by both theories.

However, if either φ(bS, bR) or ξ(yR, yS)—or both—were to be specified in some way which allowed interactions between lotteries, systematic departures from transitivity could result. In particular, if participants in experiments make comparisons between two alternatives, and if such comparisons affect their evaluations of probabilities or payoffs or both, this is liable to entail patterns of response that deviate systematically from those allowed by EUT or CPT or any other transitive model. The essential idea behind PRAM is that many respondents do make such comparisons and that their evaluations are thereby affected in certain systematic ways that are not compatible with EUT or CPT—or, indeed, any other single model in the existing literature.

The strategy behind the rest of the article is as follows. For expositional ease, we start by considering probability and payoff dimensions separately, initially focusing just on the probability dimension. Thus, the next section discusses how we might modify φ(bS, bR) to allow for between-lottery comparisons on the probability dimension and identifies the possible implications for a variety of decision scenarios involving the same three payoffs. Section 3 then considers an analogous modification of ξ(yR, yS) to allow for between-lottery interactions on the payoff dimension. PRAM is no more than Expression 4 with both φ(bS, bR) and ξ(yR, yS) specified in forms that allow for the possibility of such between-lottery interactions. Section 4 then discusses how the particular specifications proposed by PRAM relate to the ways in which a variety of other theories have modeled one or other or both dimensions, before considering in Section 5 some recent data relating to certain of PRAM's distinctive implications.

First, we address the probability dimension.

2. Modeling Probability Judgments

2.1 The Common Ratio Effect

We start with one of the most widely replicated of all experimental regularities: the form of Allais paradox that has come to be known as the common ratio effect (CRE; see Allais, 1953; Kahneman & Tversky, 1979).

Consider the two pairwise choices shown in Figure 2. In terms of the template in Figure 1, x3 = 40, x2 = 30, and x1 = 0. In Choice 1, p2 = 1 (so that p3 = p1 = 0), q3 = 0.8, q2 = 0, and q1 = 0.2. Substituting these values into Expression 3, the implication of EUT is that

S1 ≻/∼/≺ R1 ⇔ 0.2/0.8 >/=/< [u(40) − u(30)]/[u(30) − u(0)].  (6)

5 A recent variant of CPT (see Schmidt, Starmer, & Sugden, 2008) shows how certain changes in reference point may help explain a particular form of preference reversal which cannot otherwise be reconciled with CPT.

Choice 2 can be derived from Choice 1 by scaling down the probabilities of x3 and x2 by the same factor—in this example, to a quarter of their original values—and increasing the probabilities of x1 accordingly. Applying EUT as above gives

S2 ≻/∼/≺ R2 ⇔ 0.05/0.2 >/=/< [u(40) − u(30)]/[u(30) − u(0)].  (7)

The expression for the relative weight of argument for R versus S on the payoff dimension is the same for both Expressions 6 and 7: [u(40) − u(30)]/[u(30) − u(0)]. Meanwhile, the expression for the relative weight of argument for S versus R on the probability dimension changes from 0.2/0.8 in Expression 6 to 0.05/0.2 in Expression 7. Because these two ratios are equal, the implication of EUT is that the balance of relative arguments is exactly the same for both choices: An EU maximizer should either pick S in both choices or pick R on both occasions.

However, very many experiments using CRE pairs like those in Figure 2 find otherwise: Many individuals violate EUT by choosing S1 in Choice 1 and R2 in Choice 2, whereas the opposite departure—choosing R1 and S2—is observed relatively rarely. CPT can accommodate this asymmetry. To see how, consider the CPT versions of Expressions 6 and 7:

S1 ≻/∼/≺ R1 ⇔ [1 − π(0.8)]/π(0.8) >/=/< [v(40) − v(30)]/[v(30) − v(0)],  (8)

S2 ≻/∼/≺ R2 ⇔ [π(0.25) − π(0.2)]/π(0.2) >/=/< [v(40) − v(30)]/[v(30) − v(0)].  (9)

As with EUT, the relative argument on the payoff dimension (the right side of each expression) is the same for both Expressions 8 and 9. But the nonlinear transformation of probabilities means that the relative strength of the argument for S versus R on the probability dimension decreases as we move from Expression 8 to 9. Using the parameters estimated in Tversky and Kahneman (1992), [1 − π(0.8)]/π(0.8) ≈ 0.65 in Expression 8 and [π(0.25) − π(0.2)]/π(0.2) ≈ 0.12 in Expression 9. So any individual for whom [v(40) − v(30)]/[v(30) − v(0)] is less than 0.65 but greater than 0.12 will choose S1 in Choice 1 and R2 in Choice 2, thereby exhibiting the "usual" form of CRE violation of EUT. Thus, this pattern of response is entirely compatible with CPT.
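The two threshold values just cited can be reproduced with a short sketch (my own, not from the article), assuming the probability weighting function for gains estimated by Tversky and Kahneman (1992) with curvature parameter 0.61; small discrepancies in the second decimal place are just rounding.

```python
# Tversky & Kahneman's (1992) probability weighting function for gains,
# w(p) = p^g / (p^g + (1-p)^g)^(1/g), with their estimated g = 0.61.
def w(p, g=0.61):
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

# CPT's relative argument for S versus R on the probability dimension:
# Expression 8: [1 - w(0.8)] / w(0.8);  Expression 9: [w(0.25) - w(0.2)] / w(0.2).
print(round((1 - w(0.8)) / w(0.8), 2))            # about 0.65
print(round((w(0.25) - w(0.2)) / w(0.2), 2))      # about 0.11-0.12
```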

However, there may be other ways of explaining that pattern. This article proposes an alternative account which gives much the same result in this scenario but which has quite different implications from CPT for some other cases.

To help set up the intuition behind this model, we start with Rubinstein's (1988) idea that some notion of similarity might explain the CRE, as follows.6 In Choice 1, the two lotteries differ substantially on both the probability and the payoff dimensions; and although the expected value of 32 offered by R1 is higher than the certainty of 30 offered by S1, the majority of respondents choose S1, a result which Rubinstein ascribed to risk aversion operating in such cases. However, the effect of scaling down the probabilities of the positive payoffs in Choice 2 may be to cause many respondents to consider those scaled-down probabilities to be so similar that they pay less attention to them and give decisive weight instead to the dimension which remains dissimilar—namely, the payoff dimension, which favors R2 over S2.

Such a similarity notion can be deployed to explain a number of other regularities besides the CRE (see, e.g., Leland, 1994, 1998). However, a limitation of this formulation of similarity is the dichotomous nature of the judgment: That is, above some (not very clearly specified) threshold, two stimuli are considered dissimilar and are processed as under EUT; but below that threshold, they become so similar that the difference between them is then regarded as inconsequential.

Nevertheless, the similarity notion entails two important insights: first, that the individual is liable to make between-lottery comparisons of probabilities and, second, that although the objective ratio of the relevant probabilities remains the same as both are scaled down, the smaller difference between them in Choice 2 affects the perception of that ratio in a way which reduces the relative strength of the argument favoring the safer alternative. The model in this article incorporates those two ideas in a way that not only accommodates the CRE but also generates a number of new implications.

In Choice 1, the probabilities are as scaled up as it is possible for them to be: That is, bS + bR = 1. In this choice, the bS/bR ratio is 0.2/0.8, and for many respondents—in most CRE experiments, typically a considerable majority—this relative probability argument for S1 outweighs the relative payoff argument for R1. In Choice 2, p2 and q3 are scaled down to a quarter of their Choice 1 values—as reflected by the fact that here bS + bR = 0.25. With both p2 and q3 scaled down to the same extent, the objective value of bS/bR remains constant; but the perceived force of the relative argument on the probability dimension is reduced, so that many respondents switch to the riskier option, choosing R2 over S2. To capture this, we need to specify φ(bS, bR) as a function of bS/bR such that φ(bS, bR) falls as bS + bR falls while bS/bR remains constant at some ratio less than 1. There may be various functional forms that meet these requirements, but a straightforward one is

φ(bS, bR) = (bS/bR)^((bS + bR)^α),  (10)

where α is a person-specific parameter whose value may vary from one individual to another, as discussed later.

Figure 2. An example of a pair of "common ratio effect" choices: Choice 1 offers S1 = (30, 1) versus R1 = (40, 0.8; 0, 0.2); Choice 2 offers S2 = (30, 0.25; 0, 0.75) versus R2 = (40, 0.2; 0, 0.8).

6 Tversky (1969) used a notion of similarity to account for violations of transitivity: These are discussed in Section 3.

To repeat a point made at the beginning of Section 1, it is not being claimed that individuals consciously calculate the modified ratio according to Expression 10 any more than proponents of CPT claim that individuals actually set about calculating decision weights according to the somewhat complex rank-dependent algorithm in that model. What the CPT algorithm is intended to capture is the idea of some probabilities being underweighted and others being overweighted when individual lotteries are being evaluated, with this underweighting and overweighting tending to be systematically associated with payoffs according to their rank within the lottery. Likewise, what the formulation in Expression 10 aims to capture is the idea that differences interact with ratios in a way which is consistent with perceptions of the relative force of a ratio being influenced by between-lottery considerations.

The idea that α is a person-specific variable is intended to allow for different individuals having different perceptual propensities. Notice that when α = 0, (bS + bR)^α = 1, so that φ(bS, bR) reduces to bS/bR: That is, the perceived relative argument coincides with the objective ratio at every level of scaling down. On this reading, someone for whom α = 0 is someone who takes probabilities and their ratios exactly as they are, just as EUT supposes. However, anyone for whom α takes a value other than 0 is liable to have their judgment of ratios influenced to some extent by the degree of similarity. In particular, setting α < 0 means that (bS + bR)^α increases as (bS + bR) falls. So whenever bS/bR < 1—which is the case in the example in Figure 2 and in the great majority of CRE experiments—the effect of scaling probabilities down and reducing (bS + bR) is to progressively reduce φ(bS, bR), which is what is required to accommodate someone choosing S1 in Choice 1 and R2 in Choice 2, which is the predominant violation of independence observed in standard CRE experiments. The opposite violation—choosing R1 and S2—requires α > 0. Thus, one way of accounting for the widely replicated result whereby the great majority of deviations from EUT are in the form of S1 and R2 but a minority take the form of R1 and S2 is to suppose that different individuals are characterized by different values of α, with the majority processing probabilities on the basis of α < 0, whereas a minority behave as if α > 0.7

Notice also that when bS + bR = 1 (which means that probabilities of x3 and x2 are scaled up to their maximum extent), all individuals (whatever their α) perceive the ratio as it objectively is. This should not be taken too literally. The intention is not to insist that there is no divergence between perceived and objective ratios when the decision problem is as scaled up as it can be. At this point, for at least some people, there might even be some divergence in the opposite direction.8 However, it is analytically convenient to normalize the φ(bS, bR) values on the basis that when bS + bR = 1, the perceived relative argument for S versus R takes the objective ratio as its baseline value. On this basis, together with the assumption that the (great) majority of participants in experiments behave as if α < 0, PRAM accommodates the standard CRE where violations of independence are frequent and where the S1 and R2 combination is observed much more often than R1 and S2.

However, although PRAM and CPT have much the same implications for pairs of choices like those in Figure 2, there are other common ratio scenarios for which they make opposing predictions. To see this, consider a scaled-up Choice 3, which involves S3 offering 25 for sure, written (25, 1), and R3 offering a 0.2 chance of 100 and a 0.8 chance of 0, written (100, 0.2; 0, 0.8). Scaling down q3 and p2 to a quarter of those values produces Choice 4 with S4 = (25, 0.25; 0, 0.75) and R4 = (100, 0.05; 0, 0.95).

Under CPT, the counterpart of φ(bS, bR) is [1 − π(0.2)]/π(0.2) in Choice 3, whereas in Choice 4, it is [π(0.25) − π(0.05)]/π(0.05). Using the transformation function from Tversky and Kahneman (1992), the value of [1 − π(0.2)]/π(0.2) in Choice 3 is approximately 2.85, whereas the value of [π(0.25) − π(0.05)]/π(0.05) in Choice 4 is roughly 1.23. So individuals for whom [v(x3) − v(x2)]/[v(x2) − v(x1)] lies between those two figures are liable to choose S3 in Choice 3 and R4 in Choice 4, thereby entailing much the same form of departure from EUT as in Choices 1 and 2.

However, in this case PRAM has the opposite implication. In the maximally scaled-up Choice 3, φ(bS, bR) = bS/bR = 0.8/0.2 = 4. In Choice 4, the same bS/bR ratio is raised to the power of (bS + bR)^α, where bS + bR = 0.25 and where, for the majority of individuals, α < 0, so that reducing bS + bR increases the exponent on bS/bR above 1. So in scenarios such as Choices 3 and 4, where bS/bR > 1, the effect of scaling down the probabilities is to give relatively more weight to bS and relatively less to bR, thereby increasing φ(bS, bR). This allows the possibility that any member of the majority for whom α < 0 may choose R3 and S4, whereas only those in the minority for whom α > 0 are liable to choose S3 and R4. The intuition here is that, under these circumstances where bR is smaller than bS, it is bR that becomes progressively more inconsequential as it tends toward zero. This is in contrast with the assumption made by CPT, where the probability transformation function entails that low probabilities associated with high payoffs will generally be substantially overweighted.
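A small sketch (my own illustration; α = −0.5 is an arbitrary value for a "majority" individual with α < 0) shows how Expression 10 pulls in opposite directions for the two kinds of common ratio pair.

```python
# PRAM's perceived relative argument on the probability dimension (Expression 10):
# phi(bS, bR) = (bS/bR) ** ((bS + bR) ** alpha).
def phi(bS, bR, alpha):
    return (bS / bR) ** ((bS + bR) ** alpha)

alpha = -0.5  # arbitrary illustrative value; alpha < 0 for the majority of individuals

# Choices 1 and 2 (bS/bR = 0.25 < 1): scaling down lowers phi,
# weakening the perceived case for S -- the standard CRE direction.
print(phi(0.2, 0.8, alpha), phi(0.05, 0.2, alpha))

# Choices 3 and 4 (bS/bR = 4 > 1): scaling down raises phi,
# strengthening the perceived case for S -- the opposite switch to the one CPT accommodates.
print(phi(0.8, 0.2, alpha), phi(0.2, 0.05, alpha))
```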

This suggests a straightforward test to discriminate between CPT and PRAM: Namely, we can present experimental participants with scenarios involving choices like 3 and 4, which have bS/bR > 1, as well as giving them choices like 1 and 2, where bS/bR < 1. Indeed, one might have supposed that such tests have already been conducted. But in fact, common ratio scenarios where bS/bR > 1 are thin on the ground. Such limited evidence as there is gives tentative encouragement to the PRAM prediction: For example, Battalio, Kagel, and Jiranyakul (1990) report a study where their Set 2 (in their Table 7) involved choices where (x3 − x2) = $14 and (x2 − x1) = $6 with bS/bR = 2.33. Scaling down to one fifth resulted in 16 departures from EUT (out of a sample of 33), with 10 of those switching from R in the scaled-up pair to S in the scaled-down pair (in keeping with PRAM), whereas only six exhibited the "usual" common ratio pattern. Another instance can be found in Bateman, Day, Loomes, and Sugden (2006). In their Experiment 3, 100 participants were presented with two series of choices involving different sets of payoffs. In each set there were CRE questions where bS/bR was 0.25, and in both sets, a clear pattern of the usual kind was observed: The ratio of S1 and R2 to R1 and S2 was 37:16 in Set 1 and 29:5 in Set 2. In each set, there were also CRE questions where bS/bR was 1.5, and in these cases, the same participants generated S1 and R2 to R1 and S2 ratios of 13:21 in Set 1 and 10:16 in Set 2—that is, asymmetries, albeit modest, in the opposite direction to the standard CRE.

7 Of course, there will be some—possibly many—individuals for whom α may be nonzero but may be close enough to zero that, on many occasions, no switch between S and R is observed unless the arguments in Choice 1 are fairly finely balanced. Moreover, in a world where preferences are not purely deterministic and where responses are to some extent noisy, some switching—in both directions—may occur as a result of such noise. However, as stated earlier, this article is focusing on the deterministic component.

8 In the original version of prospect theory, Kahneman and Tversky (1979) proposed that p2 = 1 might involve an extra element—the certainty effect—reflecting the idea that certainty might be especially attractive; but CPT does not require any special extra weight to be attached to certainty and weights it as 1.

However, although the existing evidence in this respect is suggestive, it is arguably too sparse to be conclusive. The same is true of a number of other respects in which PRAM diverges from CPT and other extant models. The remainder of this section therefore identifies a set of such divergent implications within an analytical framework that underpins the experimental investigation that is reported in Section 5.

2.2 Other Effects Within the Marschak-Machina Triangle

When considering the implications of different decision theories for the kinds of choices that fit the Figure 1 template, many authors have found it helpful to represent such choices visually by using a Marschak-Machina (M-M) triangle (see Machina, 1982), as shown in Figure 3. The vertical edge of this triangle shows the probability of the highest payoff, x3, and the horizontal edge shows the probability of the lowest payoff, x1. Any residual probability is the probability of the intermediate payoff, x2. The 14 lotteries labeled A through P (letters I and O omitted) represent different combinations of the same set of {x3, x2, x1}. So, for example, if those payoffs were, respectively, 40, 30, and 0, then F would offer the certainty of 30, whereas J would represent (40, 0.8; 0, 0.2): That is, F and J would be, respectively, S1 and R1 from Choice 1 above. Likewise, N = (30, 0.25; 0, 0.75) is S2 in Choice 2, whereas P = (40, 0.2; 0, 0.8) is R2 in that choice.

An EU maximizer's indifference curves in any triangle are all straight lines with gradient [u(x2) − u(x1)]/[u(x3) − u(x2)], i.e., the inverse of yR/yS in the notation used above. So he or she will either always prefer the more southwesterly of any pair on the same line (if bS/bR > yR/yS) or always prefer the more northeasterly of any such pair, with this applying to any pair of lotteries in the triangle connected by a line with that same gradient.

CPT also entails each individual having a well-behaved indifference map (i.e., all indifference curves with a positive slope at every point, no curves intersecting), but CPT allows these curves to be nonlinear. Although the details of any particular map will vary with the degree of curvature of the value function and the weighting function,9 the usual configuration can be summarized broadly as follows: Indifference curves fan out as if from somewhere to the southwest of the right angle of the triangle, tending to be convex in the more southeasterly region of the triangle but more concave to the northwest and particularly flat close to the bottom edge of the triangle and rather steeper near to the top of the vertical edge.

PRAM generates some implications which appear broadly compatible with that CPT configuration, but there are other implications which are quite different. To show this, Table 1 takes a number of pairs from Figure 3 and lists them according to the value of φ(bS, bR) that applies to each pair. The particular value of each φ(bS, bR) depends on the value of α for the individual in question; but so long as α < 0, we can be sure that the pairs will be ordered from highest to lowest φ(bS, bR), as in Table 1. This allows us to say how any such individual will choose, depending on where his or her ξ(yR, yS) stands in comparison with φ(bS, bR). We do not yet need to know more precisely how ξ(yR, yS) is specified by PRAM, except to know that it is a function of the three payoffs and is the same for all choices involving just those three payoffs.10

So if an individual's ξ(yR, yS) is lower than even the lowest value of φ(bS, bR) in the table—that is, lower than 0.25^(0.25^α)—the implication is that φ(bS, bR) > ξ(yR, yS) for all pairs in that table, meaning that in every case the safer alternative (the one listed first in each pair) will be chosen. In such a case, the observed pattern of choice will be indistinguishable from that of a risk-averse EU maximizer.

9 In Tversky and Kahneman (1992), their Figure 3.4a shows an indifference map for the payoff set {x3 = 200, x2 = 100, x1 = 0} on the assumption that v(xi) = xi^0.88 and on the supposition that the weighting function estimated in that article is applied.

10 This requirement is met by EUT, where ξ(yR, yS) = [u(x3) − u(x2)]/[u(x2) − u(x1)], and by CPT, where ξ(yR, yS) = [v(x3) − v(x2)]/[v(x2) − v(x1)]. Although the functional form for ξ(yR, yS) proposed by PRAM is different from these, it is seen in the next section that the PRAM specification of ξ(yR, yS) also gives a single value of that function for any individual facing any choices involving {x3, x2, x1}.

Figure 3. A Marschak-Machina triangle. (The vertical axis shows prob(x3) and the horizontal axis shows prob(x1), each running from 0 to 1; the lotteries A through P are points within the triangle.)


However, consider an individual for whom ξ(yR, yS) is higher than the lowest value of φ(bS, bR) but lower than the next value up on the list: i.e., the individual's evaluation of the payoffs is such that ξ(yR, yS) is greater than 0.25^(0.25^α) but less than 0.25^(0.50^α). Such an individual will choose the safer (first-named) alternative in all of the pairs in the top three rows of the table, but he or she will choose the riskier (second-named) alternative in all of the pairs in the bottom row. This results in a number of patterns of choice which violate EUT; and although some of these are compatible with CPT, others are not.

First, ξ(yR, yS) now lies in the range which produces the usual form of CRE—choosing F = (30, 1) over J = (40, 0.8; 0, 0.2) in the top row of Table 1, but choosing P = (40, 0.2; 0, 0.8) over N = (30, 0.25; 0, 0.75) in the bottom row. In fact, a ξ(yR, yS) which lies anywhere between 0.25^(0.25^α) and 0.25^(1^α) will produce this pattern. As seen in the previous subsection, this form of CRE is compatible with both PRAM and CPT.

Second, this individual is now liable to violate betweenness. Betweenness is a corollary of linear indifference curves, which means that any lottery that is some linear combination of two other lotteries will be ordered between them. For example, consider F, G, and J in Figure 3. G = (40, 0.2; 30, 0.75; 0, 0.05) is a linear combination of F and J: It is the reduced form of a two-stage lottery offering a 0.75 chance of F and a 0.25 chance of J. With linear indifference curves, as entailed by EUT, G cannot be preferred to both F and J, nor can it be less preferred than both of them: Under EUT, if F ≻ J, then F ≻ G, and G ≻ J; or if J ≻ F, then J ≻ G, and G ≻ F. The same goes for any other linear combination of F and J, such as H = (40, 0.6; 30, 0.25; 0, 0.15). But PRAM entails violations of betweenness. In this case, the individual whose ξ(yR, yS) lies anywhere above 0.25^(0.25^α) and below 0.25^(0.75^α) will choose the safer lottery from every pair in the top two rows of Table 1 but will choose the riskier lottery from every pair in the bottom row. Thus, he or she will (a) choose G over both F and J (i.e., G over F in the bottom row and G over J in the second row) and (b) choose both F and J over H (i.e., F over H in the second row and J over H in the bottom row).

All these choices between those various pairings of F, G, H, and J might be accommodated by CPT, although it would require the interaction of v(.) and π(.) to be such as to generate an S-shaped indifference curve in the relevant region of the triangle. However, to date, CPT has not been under much pressure to consider how to produce such curves: As with common ratio scenarios where bS/bR > 1, there is a paucity of experimental data looking for violations of betweenness in the vicinity of the hypotenuse. One notable exception is a study by Bernasconi (1994), who looked at lotteries along something akin to the F–J line and found precisely the pattern entailed by the PRAM analysis.

A third implication of PRAM relates to fanning out and fanning in. As noted earlier, CPT indifference maps are usually characterized as generally fanning out across the whole triangle, tending to be particularly flat close to the right end of the bottom edge and much steeper near the top of the vertical edge. However, steep indifference curves near to the top of the vertical edge would entail choosing A over B, whereas PRAM suggests that any value of ξ(yR, yS) greater than 0.25^(0.25^α) will cause B to be chosen over A. In conjunction with the choice of F over J, this would be more in keeping with fanning out in the more southeasterly part of the triangle but fanning in in the more northwesterly area. Again, there is less evidence about choices in the northwest of the triangle than in the southeast, but Camerer (1995) refers to some evidence consistent with fanning in toward that top corner, and in response to this kind of evidence, some other non-EU models (e.g., Gul, 1991) were developed to have this mixed fanning property.11

Thus far, however, it might seem that the implications of PRAM are not radically different from what might be implied by CPT and other non-EU variants, which, between them, could offer accounts of each of the regularities discussed above. However, as Bernasconi (1994, p. 69) noted, it is difficult for any particular variant to accommodate all of these patterns via the same nonlinear transformation of probabilities into decision weights.

However, there is a further implication of PRAM that does represent a much more radical departure. Although the particular configurations may vary, what CPT and most of the other non-EU variants have in common is that preferences over the lottery space can be represented by indifference maps of some kind. Thus transitivity is intrinsic to all of these models. But what Table 1 allows us to see is that PRAM entails violations of transitivity.

As mentioned above, when an individual's ξ(yR, yS) lies above 0.25^(0.25^α) and below 0.25^(0.50^α), the safer lotteries will be chosen in all pairs in the top three rows, but the riskier lotteries will be chosen in all pairs in the bottom row. Consider what this means for the three pairwise choices involving C, D, and E. From the bottom row, we see that E ≻ D and D ≻ C; but from the third row, we have C ≻ E, so that the three choices constitute a cycle. Because this involves three lotteries on the same line, with one being a linear combination of the other two, let this be called a betweenness cycle. It is easy to see from Table 1 that for any individual whose ξ(yR, yS) lies above 0.25^(0.25^α) and below 0.25^(0.50^α), PRAM entails another betweenness cycle: from the bottom row, M ≻ L and L ≻ K; but from the third row, K ≻ M.

Nor are such cycles confined to that case and that range of values for ξ(yR, yS). For example, if there are other individuals for whom ξ(yR, yS) lies between 0.25^(0.5^α) and 0.25^(0.75^α), PRAM entails the riskier lotteries being chosen from all of the pairs in the bottom two rows, whereas the safer option will be chosen in all cases in the top two rows. This allows, for example, H ≻ G (from the third row) and G ≻ F (from the bottom row) but F ≻ H (from the second row).

Table 1
Values of φ(bS, bR) for Different Pairs of Lotteries From Figure 3

Value of φ(bS, bR)    Pair
0.25^(1^α)            F vs. J
0.25^(0.75^α)         F vs. H; G vs. J
0.25^(0.50^α)         C vs. E; G vs. H; K vs. M
0.25^(0.25^α)         A vs. B; C vs. D; D vs. E; F vs. G; H vs. J; K vs. L; L vs. M; N vs. P

11 However, a limitation of Gul's (1991) disappointment aversion model is that it entails linear indifference curves and therefore cannot accommodate the failures of betweenness that are now well documented.

Indeed, if PRAM is modeling perceptions appropriately, it is easy to show that, for any triple of pairwise choices derived from three lotteries on the same straight line, there will always be some range of ξ(yR, yS) that will produce a violation of transitivity in the form of a betweenness cycle.

To see this, set x3, x2, x1, and q3 such that a particular individual is indifferent between S = (x2, 1) and R = (x3, q3; x1, 1 − q3). Denoting bS/bR by b, indifference entails φ(bS, bR) = ξ(yR, yS) = b^(1^α) = b. Because the value of ξ(yR, yS) is determined by the set of the three payoffs, ξ(yR, yS) = b for all pairs of lotteries defined over this particular set {x3, x2, x1}.

Now construct any linear combination T = (S, λ; R, 1 − λ), where 0 < λ < 1, and consider the pairwise choices {S, T} and {T, R}. Because T is on the straight line between S and R, bS/bT = bT/bR = b. Hence, φ(bS, bT) = b^((1 − λ)^α), and φ(bT, bR) = b^(λ^α). With 0 < λ < 1, this entails φ(bS, bT), φ(bT, bR) < b for all b < 1; and φ(bS, bT), φ(bT, bR) > b for all b > 1. Because ξ(yR, yS) = b for all these pairings, the implication is either the triple R ≻ T, T ≻ S, but S ∼ R when b < 1, or S ≻ T, T ≻ R, but R ∼ S when b > 1. As they stand, with S ∼ R, these are weak violations of transitivity; but it is easy to see that by decreasing q3 very slightly when b < 1 (so that S ≻ R), or by increasing q3 enough when b > 1 (to produce R ≻ S), strict violations of transitivity will result.
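The argument can be checked numerically with a short sketch (my own; the value of q3, the 50-50 mixture weight, and α = −0.5 are arbitrary illustrations): calibrating ξ so that S ∼ R in the scaled-up pair, both mixture pairs are then decided in favor of the riskier option, giving the cycle R ≻ T, T ≻ S, S ∼ R.

```python
# Numerical check of the betweenness-cycle argument. S = (x2, 1),
# R = (x3, q3; x1, 1 - q3), and T is the lam:(1 - lam) mixture of S and R.
# For each pair the probability ratio is b = bS/bR, while bS + bR is 1 for
# {S, R}, (1 - lam) for {S, T}, and lam for {T, R}.
def phi(ratio, total, alpha):
    return ratio ** (total ** alpha)

alpha, lam, q3 = -0.5, 0.5, 0.8
b = (1 - q3) / q3          # here 0.25, so b < 1
xi_value = b               # calibrate payoffs so that S ~ R in the scaled-up pair

for pair, total in (("S vs R", 1.0), ("S vs T", 1 - lam), ("T vs R", lam)):
    p = phi(b, total, alpha)
    verdict = ("indifferent" if abs(p - xi_value) < 1e-12
               else "safer chosen" if p > xi_value else "riskier chosen")
    print(pair, round(p, 4), verdict)
```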

The implication of betweenness cycles is one which sets PRAM apart from EUT and all non-EU models that entail transitivity. But is there any evidence of such cycles? Such evidence as there is comes largely as a byproduct of experiments with other objectives, but there is at least some evidence. For example, Buschena and Zilberman (1999) examined choices between mixtures on two chords within the M-M triangle and found a significant asymmetric pattern of cycles along one chord but not along the other chord. Bateman et al. (2006) also reported such asymmetries: These were statistically significant in one area of the triangle and were in the predicted direction, although not significantly so, in another area.

Finally, reanalysis of an earlier dataset turns out to yield some additional evidence that supports this distinctive implication of PRAM. Loomes and Sugden (1998) asked 92 respondents to make a large number of pairwise choices, in the course of which they faced six "betweenness triples" where b < 1: specifically, those lotteries numbered {18, 19, 20}, {21, 22, 23}, {26, 27, 28}, {29, 30, 31}, {34, 35, 36}, and {37, 38, 39} in the triangles labeled III–VI, where b ranged from 0.67 to 0.25. Individuals can be classified according to whether they (a) exhibited no betweenness cycles, (b) exhibited one or more cycles only in the direction consistent with PRAM, (c) exhibited one or more cycles only in the opposite direction to that implied by PRAM, or (d) exhibited cycles in both directions. Thirty-five respondents never exhibited a cycle, and 11 recorded at least one cycle in both directions. However, of those who cycled only in one direction or the other, 38 cycled in the PRAM direction, as opposed to just eight who cycled only in the opposite direction. If both propensities to cycle were equally likely to occur by chance, the probability of the ratio 38:8 is less than 0.00001; and even if all 11 "mixed cyclers" were counted strictly against the PRAM implication, the probability of the ratio 38:19 occurring by chance would still be less than 0.01.

So there is at least some support for PRAM's novel implication concerning transitivity over lotteries within any given triangle. However, because this is a novel implication of PRAM for which only serendipitous evidence exists, the new experimental work described in Section 5 was also intended to provide further evidence about this implication.

Thus far, then, we have seen that when bS/bR > 1, PRAM entails the opposite of the CRE pattern associated with scenarios where bS/bR < 1; and when bS/bR < 1, PRAM entails betweenness cycles in one direction, whereas when bS/bR > 1, the expected direction of cycling is reversed. These two implications are particular manifestations of the more general point that moving from bS/bR < 1 to bS/bR > 1 has the effect of turning the whole ordering in Table 1 upside down. This broader implication is also addressed in the new experimental work.

There is a further implication, not tested afresh but relevant to existing evidence. Consider what happens when the payoffs are changed from gains to losses (represented by putting a minus sign in front of each xi in Figure 2). The S lottery now involves a sure loss of 30—that is, S = (−30, 1)—whereas R = (−40, 0.8; 0, 0.2). In this case, bS/bR = 4, so that the "reverse CRE" is entailed by PRAM. Although there is a dearth of evidence about scenarios where bS/bR > 1 in the domain of gains, there is a good deal more evidence from the domain of losses, ever since Kahneman and Tversky (1979) reported the reverse CRE in their Problems 3′ and 4′ and 7′ and 8′ and dubbed this mirror image result the "reflection effect." It is clear that PRAM also entails the reflection effect, not only in relation to CRE but more generally as a consequence of inverting the value of bS/bR when positive payoffs are replaced by their mirror images in the domain of losses.

Finally, by way of drawing this section to a close, are there any well-known regularities within the M-M triangle that PRAM does not explain? It would be remarkable if a single formula on the probability dimension involving just one perception parameter α were able to capture absolutely every well-known regularity as well as predicting several others. It would not be surprising if human perceptions were susceptible to more than just one effect, and there may be other factors entering into similarity judgments besides the one proposed here. For example, Buschena and Zilberman (1999) suggested that when all pairs of lotteries are transformations of some base pair, such as {F, J} in Figure 3, the distances between alternatives in the M-M triangle would be primary indicators of similarity, which is essentially what the current formulation of φ(bS, bR) proposes (to see this, compare the distances in Figure 3 with the values of φ(bS, bR) in Table 1). However, Buschena and Zilberman modified this suggestion with the conjecture that if one alternative, but not the other, involved certainty or quasicertainty, this might cause the pair to be perceived as less similar, and if two alternatives had different support, they would be regarded as more dissimilar.

An earlier version of PRAM (Loomes, 2006) proposed incorporating a second parameter into the functional form of φ(bS, bR) with a view to capturing something of this sort, and thereby distinguishing between two pairs such as {F, G} and {N, P}, which are equal distances apart on parallel lines. The effect of this second parameter was to allow F and G to be judged more dissimilar from each other than N and P, because F and G involved a certainty being compared with a lottery involving all three payoffs, whereas N and P involved two payoffs each. On this basis, with bS/bR < 1, the model allowed the combination of F ≻ G with N ≺ P but not the opposite. This particular regularity has been reported in the literature: It is the form of Allais paradox that has come to be known since Kahneman and Tversky (1979) as the common consequence effect. This effect is compatible with CPT, but if PRAM is restricted to the "α-only" form of φ(bS, bR), there is no such distinction between {F, G} and {N, P}, so that this α-only form of PRAM does not account for the common consequence effect.



So why is this second parameter not included in the present version? Its omission from the current version should not be interpreted as a denial of the possible role of other influences upon perceptions. On the contrary, as stated above, it would be remarkable if every aspect of perception on the probability dimension could be reduced to a single expression with just one free parameter. But to focus on the explanatory power provided by that single formulation, and to leave open the question of how best to modify φ(bS, bR) to allow for other effects on perception, there is an argument for putting the issue of a second parameter into abeyance until we have more information about patterns of response in scenarios which have to date been sparsely investigated. If the α-only model performs well but (as seems likely) is not by itself sufficient to provide a full description of behavior, the data collected in the process of testing may well give clues about the kinds of additional modifications that may be appropriate.

However, the more immediate concern is to extend the model beyond sets of decisions consisting of no more than three payoffs between them. To that end, the next section considers how perceptions might operate on the payoff dimension.

3. Modeling Payoff Judgments

As indicated in Expressions 6 and 8, the EUT and CPT ways of modeling “the relative argument for R compared with S on the payoff dimension” are, respectively, [u(x3) − u(x2)]/[u(x2) − u(x1)] and [v(x3) − v(x2)]/[v(x2) − v(x1)]. That is, these models, like many others, map from the objective money amount to an individual’s subjective value of that amount via a utility or value function, and then suppose that the relative argument for one alternative against another can be encapsulated in terms of the ratio of the differences between these subjective values. So modeling payoff judgments may be broken down into two components: the subjective difference between any two payoffs and how pairs of such differences are compared and perceived.

Consider first the conversion of payoffs into subjective values/utilities. It is widely accepted that—in the domain of gains at least—v(.) and u(.) are concave functions of payoffs, reflecting diminishing marginal utility and/or diminishing sensitivity. Certainly, if we take the most neutral base case—S offering some sure x2, whereas R offers a 50–50 chance of x3 or x1—it is widely believed that most people will choose S whenever x2 is equal to the expected (money) value of R and, indeed, that many will choose S even when x2 is somewhat less than that expected value—this often being interpreted as signifying risk aversion in the domain of gains. In line with this, PRAM also supposes that payoffs map to subjective values via a function c(.), which is (weakly) concave in the domain of gains.12 To simplify notation, c(xi) is denoted by ci.

On that supposition, the basic building block of ξ(yR, yS) is (c3 − c2)/(c2 − c1), which is henceforth denoted by cR/cS. This is the counterpart to bS/bR in the specification of φ(bS, bR). So to put the second component of the model in place, we apply the same intuition about similarity to the payoff dimension as was applied to probabilities and posit that the perceived ratio is liable to diverge more and more from the basic ratio, cR/cS, the more different cR and cS become. Because the ci’s refer to payoffs rather than probabilities, there is no counterpart to bS + bR having an upper limit of 1. So, as a first and very simple way of modeling perceptions in an analogous way, let us specify ξ(yR, yS) as

ξ(yR, yS) = (cR/cS)^δ, where δ ≥ 1. (11)

Under both EUT and CPT, δ = 1—i.e., when c(.) ≡ u(.) under EUT and when c(.) ≡ v(.) under CPT. However, when δ > 1, whichever is the bigger of cR and cS receives disproportionate attention, and this disproportionality increases as cR and cS become more and more different. So in cases where cR/cS > 1, doubling cR while holding cS constant has the effect of more than doubling the perceived force of the relative argument favoring R. Equally, when cR/cS < 1, halving cR while holding cS constant weakens the perceived force of the argument for R to something less than half of what it was.
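A brief numerical sketch of this disproportionality (the function name and the values δ = 1.5 and cS = 10 are hypothetical and purely illustrative):

    # Illustrative sketch of Expression 11: xi(yR, yS) = (cR/cS)**delta with delta >= 1.
    def perceived_payoff_ratio(c_R, c_S, delta=1.5):
        # Perceived relative argument for R over S on the payoff dimension.
        return (c_R / c_S) ** delta

    base = perceived_payoff_ratio(20, 10)     # cR/cS = 2 -> 2**1.5, roughly 2.83
    doubled = perceived_payoff_ratio(40, 10)  # cR/cS = 4 -> 4**1.5 = 8.0
    print(doubled / base)                     # roughly 2.83, i.e., more than 2: doubling cR
                                              # more than doubles the perceived argument for R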

With ξ(yR, yS) specified in this way, a number of results can be derived. In so doing, the strategy will be to abstract initially from any effect due to any nonlinearity of c(.) by examining first the implications of setting ci = xi.

First, we can derive the so-called fourfold pattern of risk attitudes (Tversky & Kahneman, 1992) whereby individuals are said to be risk seeking over low-probability high-win gambles, risk averse over high-probability low-win gambles, risk seeking over high-probability low-loss gambles, and risk averse over low-probability high-loss gambles.

This pattern is entailed by PRAM, even when c(.) is assumed to be linear within and across gains and losses. To see this, start in the domain of gains and consider an R lottery of the form (x3, q3; 0, 1 − q3) with the expected value x2 (= q3·x3). Fix S = (x2, 1) and consider a series of choices with a range of R lotteries, varying the values of q3 and making the adjustments to x3 necessary to hold the expected value constant at x2. Because all of these choices involve bS + bR = 1, φ(bS, bR) = (1 − q3)/q3. With ci = xi, we have ξ(yR, yS) = [(x3 − x2)/x2]^δ. With x2 = q3·x3, this gives ξ(yR, yS) = [(1 − q3)/q3]^δ, which can be written ξ(yR, yS) = [φ(bS, bR)]^δ. When q3 > 0.5, φ(bS, bR) is less than 1 and so with δ > 1, ξ(yR, yS) is even smaller: Hence, S is chosen in preference to R, an observation that is conventionally taken to signify risk aversion. However, whenever q3 < 0.5, φ(bS, bR) is greater than 1 and ξ(yR, yS) is bigger than φ(bS, bR), so that now R is chosen over S, which is conventionally taken to signify risk seeking. Thus, we have the first two elements of the fourfold attitude to risk: risk aversion over high-probability low-win gambles and risk seeking over low-probability high-win gambles in the domain of gains. It is easy to see that if we locate R in the domain of losses, with q3 now being the probability of 0 and with the expected value of R held constant at q1·x1 = x2, the other two elements of the fourfold pattern—risk aversion over low-probability high-loss gambles and risk seeking over high-probability low-loss gambles—are also entailed by PRAM.
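The gains-domain half of this pattern can be traced in a few lines, using only the two expressions just derived and a hypothetical value δ = 1.5 (a sketch, not part of the original exposition):

    # Fourfold pattern (gains): S = (x2, 1) vs. R = (x3, q3) with x2 = q3*x3 and ci = xi.
    delta = 1.5
    for q3 in (0.9, 0.7, 0.5, 0.3, 0.1):
        ratio = (1 - q3) / q3
        phi = ratio             # perceived argument for S on the probability dimension
        xi = ratio ** delta     # perceived argument for R on the payoff dimension
        if phi > xi:
            verdict = "S chosen (looks risk averse)"
        elif phi < xi:
            verdict = "R chosen (looks risk seeking)"
        else:
            verdict = "indifferent"
        print(f"q3 = {q3:.1f}: {verdict}")
    # S is chosen for q3 above 0.5 and R for q3 below 0.5, with indifference at 0.5.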

12 Actually, the strict concavity of this function, although it probably corresponds with the way most people would behave when presented with 50/50 gambles, is not necessary to produce many of the results later in this section, where a linear c(.) is sufficient. And because there are at least some commentators who think that the degree of risk aversion seemingly exhibited in experiments is surprisingly high (see, e.g., Rabin, 2000), it may sometimes be useful (and simpler) to work on the basis of a linear c(.) and abstract from any concavity as a source of what may be interpreted as attitude to risk. The reason for using c(.) rather than u(.) or v(.) is to keep open the possibilities of interpretations that may differ from those normally associated with u(.) or v(.).


The fact that these patterns can be obtained even when c(.) is linear breaks the usual association between risk attitude and the curvature of the utility/value function and suggests that at least part of what is conventionally described as risk attitude might instead be attributable to the way that the perceived relativities on the probability and payoff dimensions vary as the skewness of R is altered. If c(.) were nonlinear—and, in particular, if it were everywhere concave, as u(.) is often supposed to be—the above results would be modified somewhat: When q3 = 0.5 and x2/x3 = 0.5, concavity entails c2 ≥ 0.5c3 and hence (c3 − c2)/c2 ≤ 1, so that S would be chosen over R for q3 = 0.5 and might continue to be chosen for some range of q3 below 0.5, depending on the curvature of c(.) and the value of δ. Nevertheless, it could still easily happen that below some point, there is a range of q3 where R is chosen. Likewise, continuing concavity into the domain of losses is liable to move all of the relative arguments somewhat in favor of S, but there may still be a range of high-probability low-loss R which are chosen over S. In short, and in contrast with CPT, PRAM does not use convexity in the domain of losses to explain the fourfold pattern.

Still, even if they reach the result by different routes, PRAM and CPT share the fourfold pattern implication. However, there is a related regularity where they part company: namely, the preference-reversal phenomenon and the cycle that is its counterpart in pairwise choice. In the language of the preference-reversal phenomenon (see Lichtenstein & Slovic, 1971, and Seidl, 2002), a low-probability high-win gamble is a $-bet, whereas a high-probability low-win gamble is a P-bet. The widely replicated form of preference reversal occurs when an individual places a higher certainty equivalent value on the $-bet than on the P-bet but picks the P-bet in a straight choice between the two. Denoting the bets by $ and P—and their certainty equivalents as sure sums of money CE$ and CEP, such that CE$ ∼ $ and CEP ∼ P—the classic and frequently observed reversal occurs when CE$ > CEP but P ≻ $. The opposite reversal—placing a higher certainty equivalent on the P-bet but picking the $-bet in a straight choice—is relatively rarely observed.

Let X be some sure amount of money, such that CE$ > X > CEP. Then the classic preference reversal translates into the choice cycle $ ≻ X, X ≻ P, P ≻ $. However, this cycle and the preference reversal phenomenon are both incompatible with CPT and other models which have transitivity built into their structure: If $ ≻ X and X ≻ P—which is what the fourfold pattern entails when X is the expected value of the two bets—then transitivity requires $ ≻ P in any choice between those two and requires that this ordering be reflected in their respective certainty equivalents. Any strong asymmetric pattern of cycles and/or any asymmetric disparity between choice and valuation cannot be explained by CPT or any other transitive model.13

By contrast, PRAM entails both the common form of preference reversal and the corresponding choice cycle. To see this, consider Figure 4.

In line with the parameters of most preference reversal experiments, let the probabilities be set such that 1 > q > 0.5 > λq > 0. The case is simplest when both bets have x1 = 0 and the same expected value, X. To simplify the exposition still further and show that the result does not require any nonlinearity of c(.), let ci = xi. We have already seen from the discussion of the fourfold pattern that under these conditions, when X is a sure sum equal to the expected value of both bets, $ ≻ X and X ≻ P. For a cycle to occur, PRAM must also allow P ≻ $. To see what PRAM entails for this pair, we need to derive φ(bP, b$) and ξ(y$, yP).

Because bP = (1 − λ)q and b$ = λq,

φ(bP, b$) = [(1 − λ)/λ]^{q^{−α}}. (12)

And because y$ = (1 − λ)X/λq and yP = X/q,

ξ(y$, yP) = [(1 − λ)/λ]^δ. (13)

Thus, the choice between P and $ depends on whether λ is greater than or less than 0.5 in conjunction with whether q^{−α} is greater than or less than δ. Because α and δ are person-specific parameters, consider first an individual whose perceptions are such that q^{−α} > δ > 1. In cases where λ > 0.5, and therefore (1 − λ)/λ < 1, such an individual will judge φ(bP, b$) < ξ(y$, yP) and will pick the $-bet, so that no cycle occurs. But where λ < 0.5, that same individual will judge φ(bP, b$) > ξ(y$, yP) and will pick the P-bet, thereby exhibiting the cycle $ ≻ X, X ≻ P, P ≻ $. Because PRAM supposes that valuations are generated within the same framework and on the basis of the same person-specific parameters as choices, $ ≻ X entails CE$ > X, and X ≻ P entails X > CEP, so that such an individual will also exhibit the classic form of preference reversal, CE$ > CEP in conjunction with P ≻ $.

Next consider an individual whose perceptions are such that δ > q^{−α} > 1. For such an individual, λ < 0.5 entails φ(bP, b$) < ξ(y$, yP) so that he or she will pick the $-bet and no cycle will be observed. But in cases where λ > 0.5, he or she will judge φ(bP, b$) > ξ(y$, yP) and will pick the P-bet, thereby exhibiting the cycle $ ≻ X, X ≻ P, P ≻ $. So although this individual will exhibit a cycle under different values of λ than the first individual, the implication is that any cycle he or she does exhibit will be in the same direction: namely, the direction consistent with the classic form of preference reversal.
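The cycle for the first type of individual can be checked numerically. The exponent q^{−α} follows the reconstruction of Expression 12 above, and the parameter values (α = 1, δ = 1.2, λ = 0.25, q = 0.8) are hypothetical choices satisfying q^{−α} > δ > 1 with λ < 0.5; the sketch is illustrative only:

    alpha, delta = 1.0, 1.2    # hypothetical person-specific perception parameters
    lam, q = 0.25, 0.8         # the $-bet wins with probability lam*q = 0.2

    def safer_chosen(phi_safer, xi_riskier):
        # The safer option is chosen iff its perceived probability argument
        # outweighs the riskier option's perceived payoff argument.
        return phi_safer > xi_riskier

    # {$, X}: X (the sure sum) is safer; both arguments build on (1 - lam*q)/(lam*q).
    x_over_dollar = safer_chosen((1 - lam*q) / (lam*q), ((1 - lam*q) / (lam*q)) ** delta)
    # {X, P}: X is again the safer option; both arguments build on (1 - q)/q.
    x_over_p = safer_chosen((1 - q) / q, ((1 - q) / q) ** delta)
    # {P, $}: P is safer; Expressions 12 and 13 give the two perceived arguments.
    p_over_dollar = safer_chosen(((1 - lam) / lam) ** (q ** -alpha), ((1 - lam) / lam) ** delta)

    print("$ > X:", not x_over_dollar)   # True: the $-bet beats its expected value
    print("X > P:", x_over_p)            # True: the sure sum beats the P-bet
    print("P > $:", p_over_dollar)       # True because q**(-alpha) > delta: the cycle closes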

Thus, under the conditions in the domain of gains exemplified in Figure 4, PRAM entails cycles in the expected direction but not in the opposite direction. On the other hand, if we reflect the lotteries

13 Kahneman and Tversky (1979) were very clear in stating that prospect theory is strictly a theory of pairwise choice, and they did not apply it to valuation (or other matching) tasks. In their 1992 exposition of CPT, they repeated this statement about the domain of the theory, and to the extent that they used certainty equivalent data to estimate the parameters of their value and weighting functions, they did so by inferring these data from an iterative choice procedure. Strictly speaking, therefore, CPT can be said only to have an implication for choices—and in this case, choice cycles (which it does not allow). Other rank-dependent models make no such clear distinction between choice and valuation and therefore also entail that valuations should be ordered in the same way as choices.

        λq       (1−λ)q    1−q
$       X/λq     0         0
P       X/q      X/q       0

Figure 4. A {$, P} pair with expected value = X.


into the domain of losses by reversing the sign on each nonzero payoff, the effect is to reverse all of the above implications: Now the model entails cycles in the opposite direction.

Besides the large body of preference reversal data (again, see Seidl, 2002), there is also empirical evidence of this asymmetric pattern of cycles (see, e.g., Tversky, Slovic, & Kahneman, 1990, and Loomes, Starmer, & Sugden, 1991). In addition, the opposite asymmetry in the domain of losses was reported in Loomes and Taylor (1992).

Those last two articles were motivated by a desire to test regret theory (Bell, 1982; Fishburn, 1982; Loomes & Sugden, 1982, 1987), which has the same implications as PRAM for these parameters. But the implications of regret theory and PRAM diverge under different parameters. To see this, scale all the probabilities of positive payoffs (including X, previously offered with probability 1) down by a factor p (and in the case of X, add a 1 − p probability of zero) to produce the three lotteries shown in Figure 5.

Because the payoffs have not changed, the values of ξ(., .) for each pairwise choice are the same as for the scaled-up lotteries. However, scaling the probabilities down changes the φ(., .) values. In the choice between $′ and X′, φ(bX′, b$′) is [(1 − λq)/λq]^{p^{−α}}, which, when p = 1, reduces to (1 − λq)/λq; and with λq < 0.5, this is smaller than ξ(y$, yX) = [(1 − λq)/λq]^δ when δ > 1. However, as p is reduced, p^{−α} increases, and at the point where it becomes larger than δ, φ(bX′, b$′) becomes greater than ξ(y$′, yX′), so that the individual now chooses X′ over $′. Likewise, when p = 1, the scaled-up X was chosen over the scaled-up P; but as p is reduced, φ(bX′, bP′) falls and becomes smaller than ξ(yP′, yX′) at the point where p^{−α} becomes larger than δ. So once this point is reached, instead of $ ≻ X and X ≻ P, we have P′ ≻ X′ and X′ ≻ $′.
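The switch point can be made concrete with one line of arithmetic, again assuming the reconstructed exponent p^{−α} and hypothetical parameter values:

    alpha, delta = 1.0, 1.2
    p_star = delta ** (-1.0 / alpha)   # the p at which p**(-alpha) equals delta
    print(p_star)                      # roughly 0.833: for p below this threshold,
                                       # P' > X' and X' > $' replace $ > X and X > P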

Whether a reverse cycle is observed then depends on the choice between $′ and P′. Modifying and combining Expressions 12 and 13, we have

P′ ≻/∼/≺ $′ ⇔ φ(bP′, b$′) = [(1 − λ)/λ]^{(pq)^{−α}} >/=/< [(1 − λ)/λ]^δ = ξ(y$′, yP′), (14)

so that $′ will be chosen in cases where (1 − λ)/λ > 1 and (pq)^{−α} < δ. In such cases—and (1 − λ)/λ > 1 is typical of many preference reversal experiments—the result will be the cycle P′ ≻ X′, X′ ≻ $′, $′ ≻ P′. The opposite cycle will not occur once p has fallen sufficiently to produce p^{−α} > δ (although, of course, the value of p at which this occurs may vary greatly from one individual to another).

Such “similarity cycles” were reported by Tversky (1969) and were replicated by Lindman and Lyons (1978) and Budescu and Weiss (1987). More recently, Bateman et al. (2006) reported such cycles in two separate experiments with rather different payoff parameters than those used by Tversky. Those experiments had been designed primarily to explore the CRE, and the data concerning cycles were an unintended byproduct. Even so, there were four triples that fitted the Figure 5 format, and in all four of these, similarity cycles outnumbered cycles in the opposite direction to a highly significant extent.

Such an asymmetry is contrary to the implications of regret theory.14 However, as shown earlier, PRAM not only entails similarity cycles in scaled-down choices but also entails the opposite asymmetry in scaled-up choices: a predominance of what might be called “regret cycles.” Moreover, this novel and rather striking implication of the model turns out to have some empirical support. Following the first two experiments reported in Bateman et al. (2006), a third experiment was conducted in which every pairwise combination of four scaled-up lotteries, together with every pairwise combination of the corresponding four scaled-down lotteries, were presented in conjunction with two different sets of payoffs. All these choices were put to the same individuals in the same sessions under the same experimental conditions. The results are reported in Day and Loomes (2009): There was a clear tendency for regret cycles to predominate when the lotteries were scaled up, whereas there was a strong asymmetry favoring similarity cycles among the scaled-down lotteries.

There is a variant upon this last result for which some relatively recent evidence has been reported. Look again at X′ in Figure 5: It is, in effect, a P-bet. Likewise, P′ from Figure 5 could be regarded as a $-bet. Finally, let us relabel $′ in Figure 5 as Y, a yardstick lottery offering a higher payoff—call it x*—than either of the other two. Instead of asking respondents to state certainty equivalents for P and $, we could ask them to state probability equivalents for each lottery—respectively, PEP and PE$—by setting the probabilities of x* that would make them indifferent between that lottery and the yardstick.15 If, for some predetermined probability (such as λpq in Figure 5), the individual exhibits a similarity cycle Y ≻ $, $ ≻ P, P ≻ Y, then the probability equivalence task requires setting the probability of x* at something less than λpq to establish PE$ ∼ $, whereas it involves setting the probability of x* at something greater than λpq to generate PEP ∼ P. Thus, for valuations elicited in the form of probability equivalents, PRAM allows the possibility of PEP > PE$ in conjunction with $ ≻ P. A recent study by Butler and Loomes (2007) reported exactly this pattern: a substantial asymmetry in the direction of classic preference reversals when a sample of respondents gave certainty equivalents for a particular {$, P} pair and the opposite asymmetry when that same sample was asked to provide probability equivalents for the very same {$, P} pair.

14 More is said about this in the discussion in Section 4.
15 Such tasks are widely used in health care settings where index numbers (under EUT, these are the equivalent of utilities) for health states lying somewhere between full health and death are elicited by probability equivalent tasks, often referred to as “standard gambles.”

        λpq      (1−λ)pq    p(1−q)    1−p
$′      X/λq     0          0         0
P′      X/q      X/q        0         0
X′      X        X          X         0

Figure 5. A {$′, P′, X′} triple, all with expected value = p·X.


However, the Butler and Loomes (2007) data involved only a single {$, P} pair, leaving open the possibility that theirs could have been a one-off result peculiar to the parameters and the particular experimental procedure used. A subsequent experiment reported in Loomes, Pinto-Prades, Abellan-Perpinan, and Rodríguez-Miguez (2009) used six pairings of six different lotteries and a different elicitation procedure linked directly to incentive mechanisms. The same patterns—the classic asymmetry when certainty equivalences were elicited and the opposite asymmetry when probability equivalences were elicited—emerged very clearly, providing further strong evidence of this striking implication of PRAM.

There are other implications of PRAM omitted for lack of space,16 but the discussion thus far is sufficient to show that PRAM is not only fundamentally different from CPT and other non-EU models that entail transitivity but also that it diverges from one of the best known nontransitive models in the form of regret theory. This may therefore be the moment to focus attention on the essential respects in which PRAM differs from those and other models and to consider in more detail the possible lessons not only for those models but also for the broader enterprise of developing decision theories and using experiments to try to test them.

4. Relationship With, and Implications for, Other Models

The discussion so far has focused principally on the way that PRAM compares with and diverges from EUT and from CPT (taken to be the flagship of non-EU models), with some more limited reference to other variants in the broad tradition of rational theories of choice. In the paragraphs immediately below, more is said about the relationship between PRAM and these models. However, as noted in the introduction, PRAM is more in the tradition of psychological/behavioral models, and in the latter part of this section, there is a discussion of the ways in which PRAM may be seen as building upon, but differentiated from, those models.

First, the most widely used decision model, EUT, is a special case of PRAM where α = 0 and δ = 1. This means that individuals are assumed to act as if all differences and ratios on both the probability and utility dimensions are perceived and processed exactly as they are, save only for random errors. PRAM shows that once we allow interactions which affect the judgments and perceptions of these ratios, many implications of EUT are liable to fail descriptively.

However, the ability of alternative models to accommodate such failures may also be limited by the extent to which they rule out such interactions. So CPT is liable to fail for two main reasons. First, although it replaces u(.) by v(.), it makes essentially the same assumption in terms of a consequence carrying its assigned value into every scenario, with differences and ratios between those values being processed independently and exactly as they are—that is, as if δ = 1. So the kinds of choice cycles described above as regret and similarity cycles cannot be accounted for. Second, although CPT and other rank-dependent models allow probabilities to be transformed nonlinearly and can even assign the same probability a different weight depending on its rank within a lottery and the magnitudes of the other probabilities in that same lottery, CPT disallows any between-lottery influences on this transformation.17

Other models can achieve some CPT-like results by a different within-lottery route: For example, disappointment theory (Bell, 1985; Loomes & Sugden, 1986) keeps probabilities as they are but allows within-lottery interactions between payoffs in ways which can accommodate certain violations of independence. However, what rank-dependent models and disappointment theory have in common is that they effectively assign scores to each lottery as a whole which that lottery carries with it into every choice and valuation task. In short, by restricting such interactions to within-lottery comparisons and ruling out any between-lottery effects, these models cannot account for violations of transitivity.

By contrast, regret theory allows between-lottery comparisons, but only on the payoff dimension. Essentially, it modifies the utility of any one payoff on the basis of the other payoff(s) offered by other lotteries under the same state of the world. In the 1987 formulation of regret theory, the net advantage of one payoff over another is represented by the ψ(., .) function, which is assumed to be strictly convex, so that for all x3 > x2 > x1, ψ(x3, x1) > ψ(x3, x2) + ψ(x2, x1).18 This enables the model to accommodate regret cycles, classic preference reversals, and some violations of independence (although these latter require the additional assumption of statistical independence between lotteries). However, regret theory does not allow for any between-lottery interactions on the probability dimension—in fact, it takes probabilities exactly as they are—and therefore cannot account for violations of the sure-thing principle, nor similarity cycles, nor betweenness cycles under assumptions of statistical independence.19

16 In the earlier formulation of the model (Loomes, 2006), some indication was given of the way in which the model could accommodate other phenomena, such as Fishburn’s (1988) “strong” preference reversals. Reference was also made to possible explanations of violations of the reduction of compound lotteries axiom and of varying attitudes to ambiguity. Details are available from the author on request.

17 It is interesting to consider why CPT, a model that was initially inspired by insights about psychology and psychophysics, should permit effects from comparisons within a lottery but—even though it is explicitly a theory of pairwise choices—should disallow such effects between lotteries. The answer may be found in the evolution of the model. The original (Kahneman & Tversky, 1979) form of prospect theory made no such within-lottery comparisons: Probabilities were simply converted via a nonlinear transformation function. But this had the result that the weights generally did not add up to 1, which allowed effects that were regarded as normatively undesirable or behaviorally implausible and that had to be controlled by other means, such as an editing phase, to spot them and eliminate them. The rank-dependent procedure was a later development, proposed as a way of closing such gaps and guaranteeing respect for dominance and transitivity. But the latter goal is driven more by normative precepts than by psychological insight; and this “arranged marriage” between the various insights and goals may be seen as the reason why CPT ends up in a “halfway house” position when viewed from the PRAM perspective.

18 Notice that the PRAM formulation is consistent with this. Taking the differences between the pairs of payoffs and putting them over any common denominator Z > 0 to get a measure of the relative force of each difference, PRAM would imply the same inequality, that is, [(x3 − x1)/Z]^δ > [(x3 − x2)/Z]^δ + [(x2 − x1)/Z]^δ for all δ > 1.

19 Regret theory can produce cycles over triples involving a set of just three payoffs by manipulating the juxtaposition of those payoffs. Such juxtaposition effects (see, e.g., Loomes, 1988) can also be shown to be implied by PRAM. Details can be obtained from the author on request.


Many non-EU models of the kind referred to above—and especially those designed to appeal to an audience of economists—have been influenced by the desire to meet criteria of rationality and/or generality and have therefore tried to minimize departures from the baseline of EUT and to invoke alternative axioms or principles driven by normative considerations. However, if there are between-lottery interactions operating on perceptions in the way modeled by PRAM, those axioms are bound to be transgressed. Thus, any such model will fail in one way or another to accommodate the evidence and/or will need to invoke certain supplementary assumptions or forms of special pleading to try to cope with those data.

Models from a more psychological/behavioral perspective may be less encumbered by such rigidities. Nevertheless, as discussed below, when such models are viewed from a PRAM perspective, it turns out that they too have imposed certain assumptions which limit their capacity to account for the evidence, except by invoking special additional assumptions of their own.

For example, Shafir, Osherson, and Smith (1993) proposed an advantage model (AM) which accommodates some departures from EUT and has some insights in common with PRAM. However, that model was concerned exclusively with choices between binary lotteries and money or probability equivalences for such lotteries. Thus, it does not address tasks where one or both lotteries have more than two payoffs, which necessarily limits its scope relative to PRAM: By its nature, it does not deal with any lotteries in the interior of the M-M triangle and therefore cannot deal with violations of betweenness or betweenness cycles. In addition, AM invokes different parameters for gains and losses and calls on an additional principle, denoted by (*) (see their p. 336), to allow each of those parameters to vary further according to the nature of the task. This is in contrast with PRAM, which applies the same person-specific parameters across the board to all choice and equivalence tasks.

To see why AM needs to invoke different parameters and principles for different situations, consider how that model handles the most basic choice problem. Adapting AM to the notation used in the current article, the simplest choice involves a pair S = (x2, p2) and R = (x3, q3), where x3 > x2 and p2 > q3 and where the expected money values are, respectively, EMVS = x2 × p2 and EMVR = x3 × q3. The AM choice rule is then

S ≻/∼/≺ R ⇔ EMVS(p2 − q3) >/=/< EMVR[(x3 − x2)/x3]kG, (15)

where kG is a weight representing the relative importance placed upon the payoff and probability advantages. In simple choices, the expectation is that most people will place more weight on probabilities than payoffs, so including kG in the payoff part of the expression suggests kG < 1. When simple choices involve losses rather than gains, a different weight kL is used instead. Supplementary principle (*) invokes compatibility in equivalence tasks, so that the same person’s ks may be different for money equivalences than for straight choices and different again for probability equivalences. Whereas, as stated earlier, the present article does not deny that such additional considerations may come into play, PRAM does not require them to accommodate the evidence, whereas the explanatory power of AM is greatly reduced without them.

The reason why AM is relatively limited and why it therefore needs supplementary assumptions may be found by examining the restrictions on PRAM implicit in Expression 15. EMVS is weighted by the simple difference between probabilities. However, the interaction between difference and ratio, which is crucial to the PRAM modeling of the perceived relative argument favoring S, is absent from Expression 15. So although AM can accommodate the usual common ratio effect when q3/p2 > 0.5, applying it to cases where q3/p2 is considerably less than 0.5 would entail an even stronger fanning out pattern, whereas PRAM suggests that the usual effect is moderated or even reversed in such cases. And whereas AM can find a way of accommodating Tversky’s (1969) similarity cycles, it can only do so by invoking a value of kG “somewhat outside the common range, which is compatible with the fact that it [Tversky’s evidence] characterizes a preselected and therefore somewhat atypical group of subjects” (Shafir et al., 1993, p. 351). However, the examples of similarity cycles reported in Bateman et al. (2006) and Day and Loomes (2009) cannot be accounted for.20 Meanwhile, explaining the typical form of preference reversal requires (*) to be invoked to allow a rather different kG to be used for valuation than for choice, because AM is not generally compatible with the kinds of choice cycles that mimic the preference reversal phenomenon. This limitation relative to PRAM appears to stem from the fact that the [(x3 − x2)/x3] term on the right hand side of Expression 15 does not actually use the ratio of relative advantages (which would require the denominator to be x2) and does not allow for the perceptual effects represented in PRAM by raising the ratio to the power δ. In the absence of modeling that effect, AM aims to compensate with a combination of (*) and kG.
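The impossibility noted in footnote 20 follows directly from Expression 15 and can be checked with a short sketch; the helper function below is purely illustrative, and the lottery values are those cited in that footnote:

    # AM choice rule (Expression 15) applied to P = (25, 0.15), R = (15, 0.20), S = (12, 0.25).
    # In each pair, the lower-payoff/higher-probability option plays the "S" role in the rule.
    def kG_threshold(x_hi, p_hi, x_lo, p_lo):
        # Largest kG at which the lower-payoff option is still chosen:
        # EMV_lo*(p_lo - p_hi) > EMV_hi*((x_hi - x_lo)/x_hi)*kG.
        emv_lo, emv_hi = x_lo * p_lo, x_hi * p_hi
        return emv_lo * (p_lo - p_hi) / (emv_hi * (x_hi - x_lo) / x_hi)

    print(kG_threshold(15, 0.20, 12, 0.25))   # 0.25  : R > S requires kG above this value
    print(kG_threshold(25, 0.15, 12, 0.25))   # ~0.154: S > P requires kG below this value
    # No single kG satisfies both, so the observed cycle P > R, R > S, S > P
    # cannot be rationalized by any one value of kG.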

The contrast-weighting model proposed by Mellers and Biagini (1994), with its emphasis on the role of similarity, is closer in both spirit and structure to PRAM. The key idea is that similarity between alternatives along one dimension/attribute tends to magnify the weight given to differences along the other dimension(s). The model is framed in terms of strength of preference for one option over another. Applied to a pair of lotteries where S = (x2, p2; 0, 1 − p2) and R = (x3, q3; 0, 1 − q3) and where x3 > x2 and p2 > q3, the judged strength of preference for S over R is given by u(x2)^{β(p)}·π(p2)^{β(x)} − u(x3)^{β(p)}·π(q3)^{β(x)}, where u(.) gives the utilities of the payoffs and π(.) represents the subjective probabilities of receiving those payoffs, whereas β(p) and β(x) are, respectively, the contrast weights applied as exponents to those indices.

20 For example, in Experiment 2 described in Bateman et al. (2006), participants chose between pairs of lotteries {P, R}, {R, S}, and {P, S}, where the three lotteries were P = (£25, 0.15), R = (£15, 0.20), and S = (£12, 0.25). Out of 21 participants (from a total of 149 in the sample) who exhibited choice cycles, 20 were in the “Tversky” direction (i.e., P ≻ R, R ≻ S, but S ≻ P). However, there is no value of kG compatible with this cycle. In particular, R ≻ S requires kG > 0.25, whereas S ≻ P requires kG < 0.154. So although there may be some cycles compatible with AM, there are also some very strong asymmetric patterns which the model does not readily accommodate.


To make the comparison with PRAM easier to see, let us suppose that choice between S and R maps to strength of preference in an intuitive way, so that S is chosen when strength of preference for S over R is positive and R is chosen when that strength of preference is negative. On that basis, and with some straightforward rearrangement, we have

S ≻/∼/≺ R ⇔ [π(p2)/π(q3)]^{β(x)} >/=/< [u(x3)/u(x2)]^{β(p)}, (16)

which might be read as saying that the choice between S and R depends on whether the strength of preference favoring S on the probability dimension is greater than, equal to, or less than the strength of preference favoring R on the payoff dimension. Put into this form, it is easier to identify the difference between PRAM and this contrast weighting (CW) model. PRAM expresses the basic ratio of arguments within each dimension in a form which can take values less than or greater than 1 (depending on the relative magnitudes of advantage within a dimension) and then expresses the perception of each ratio as a continuous nonlinear function which reflects interactions between ratios and differences. The CW model proposed in Mellers and Biagini (1994) takes its exponents β(p) and β(x) as depending just on the absolute differences between p2 and q3 and between x3 and x2 and as taking one of just two values: one when differences are small and the two indices in question are judged similar and another when differences are large and the two indices are judged dissimilar. In this respect, the CW model has much in common with the similarity analysis suggested by Rubinstein (1988) and Leland (1994, 1998), using a dichotomous similar/dissimilar judgment. However, it was precisely to overcome the limitations of such a formulation and to allow many more diverse applications that PRAM was developed. Mellers and Biagini noted on p. 507 that “a more general representation would allow weights that are a continuous function of the absolute difference along a dimension,” but they do not themselves provide such a representation. PRAM might be seen as developing the CW/similarity insights broadly in the direction which Mellers and Biagini considered would be useful.

A somewhat different line of development was pursued by Gonzalez-Vallejo (2002). The primary focus of that article was to embed a deterministic similarity core in a stochastic framework. Using the terminology from that article, the deterministic difference between two alternatives is denoted by d, and the decision maker chooses the option with the deterministic advantage if and only if d > δ + ε, where δ is a personal decision threshold and ε is a value representing noise/random disturbance, drawn from a distribution with zero mean and variance σ².

For the pair of basic lotteries S = (x2, p2; 0, 1 − p2) and R = (x3, q3; 0, 1 − q3), where x3 > x2 and p2 > q3, Gonzalez-Vallejo’s (2002) Equation 3 gives the deterministic term as d = [(p2 − q3)/p2] − [(x3 − x2)/x3]. In this formulation, S is preferred to/indifferent to/less preferred than R according to whether the proportional advantage of S over R on the probability dimension is greater than, equal to, or less than the proportional advantage of R over S on the payoff dimension, with, in both cases, this proportion being the difference expressed as a fraction of the higher value. Because of the centrality of the difference between these proportions, Gonzalez-Vallejo calls this the proportional difference (PD) model.

Notice that, as expressed here, PD effectively says that the deterministic component amounts to a preference for the alternative with the higher expected money value. If the money values of the payoffs were replaced by their von Neumann-Morgenstern utilities, the deterministic component would amount to a core preference for the alternative with the higher EU; and if payoffs were mapped via v(.) to a value function and probabilities were converted to decision weights in the manner proposed by rank-dependent models, the core preference would correspond with CPT or some other rank-dependent variant. So departures from expected value/expected utility/subjective expected value maximization models are accounted for by PD in terms of the way that an individual’s decision threshold δ departs from 0.

In this respect, δ plays a role not unlike that played by kG in Shafir et al. (1993). And as with AM, the only way the PD model can accommodate a wide variety of different regularities is by allowing δ to vary from one regularity to another. A particular problem caused by the proportionality at the core of this model is that scaling down p2 and q3 by the same factor leaves d unchanged, so that the usual CRE would require δ to change systematically according to the scaling of the probabilities. That would also be required to allow both similarity cycles and regret cycles to be accommodated.21 Likewise, the “fourfold attitude to risk” patterns would require not only the size but also the sign of δ to change from one choice to the next: Choosing a small-probability high-payoff lottery over a sure sum with the same EMV (i.e., where d = 0) requires a δ that favors the payoff proportion, whereas choosing the same sure sum over a large-probability moderate-payoff lottery with the same EMV (so that d is still 0) requires a δ of the opposite sign. In short, to accommodate a wide variety of different departures from EV/EU maximization, we need PD to specify how δ varies from one set of tasks and parameters to another. Gonzalez-Vallejo (2002) did not provide such a theory. Arguably, PRAM makes such a theory unnecessary, because it accounts for the diverse effects within the same core specification.
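The scale-invariance point can be seen in a two-line sketch of the deterministic core; the numerical payoffs and probabilities below are arbitrary illustrations:

    # Deterministic core of the PD model for S = (x2, p2) vs. R = (x3, q3):
    # d = (p2 - q3)/p2 - (x3 - x2)/x3.
    def d(p2, q3, x2, x3):
        return (p2 - q3) / p2 - (x3 - x2) / x3

    print(d(1.0, 0.8, 30, 40))    # scaled-up pair: roughly -0.05
    print(d(0.25, 0.2, 30, 40))   # same pair with probabilities scaled by 0.25: roughly -0.05 again,
                                  # so any common ratio effect has to come from shifts in the threshold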

The other issue addressed by Gonzalez-Vallejo (2002) and in a different way by Mellers and Biagini (1994) is the stochastic nature of actual choice behavior. Although Gonzalez-Vallejo’s approach to this was to use a standard Fechnerian error term, that is not the only way of incorporating a stochastic element into choice behavior: As discussed by Loomes and Sugden (1995), a “random preference” specification, in the spirit of Becker, DeGroot, and Marschak (1963), may be an alternative route to take. However, a comprehensive discussion of the strengths and weaknesses of different variants of error specification, as well as the issues raised for fitting models and testing hypotheses, could constitute a whole new article, and is beyond the scope of the

21 In addition, explaining the choice cycles analogous to classic preference reversals (if it could be done at all) would typically require a positive sign on d (because the riskier lotteries generally have higher EVs), whereas the d needed to explain Tversky’s similarity cycle (Gonzalez-Vallejo, 2002, p. 143) was negative (with EVs falling as the lotteries became riskier).


present enterprise.22 Suffice it to say that PRAM could be adapted to either approach, but the incorporation of a stochastic element by allowing any individual’s behavior, as well as any sample’s behavior, to be modeled in terms of some distribution over both α and δ would appear to be a route that could be profitably investigated in future research. Meanwhile, taking a deterministic form of PRAM as reflecting some central tendency values of α and δ is sufficient for the purposes of the present article.

More recently still, Brandstätter et al. (2006) proposed a priority heuristic (PH) model to explain a number of regularities. Whereas most of the models discussed above say little or nothing about the order in which people process a choice or valuation task, PH suggests a sequence of comparisons of features of a problem with stopping and decision rules at each stage. On this basis, the PH model can accommodate a number of the well-known regularities in choice. But this model turns out to be poor at dealing with some patterns that seem easy to predict just by looking at them, and PH offers no guidance about equivalence judgments.

The essence of the problem here is encapsulated in the second part of the title of the article: “making choices without trade-offs.” A rule is either satisfied or it is not, and this dichotomous structure of the model causes it to neglect more holistic considerations, which can then only be dealt with by invoking another heuristic. As the authors acknowledged (Brandstätter et al., 2006, pp. 425–426), PH’s predictive power is poor in cases involving large discrepancies between expected values working in the opposite direction to the PH sequence of rules.23 This reflects the model’s lack of a trade-off mechanism that would allow such expected value differentials to play a suitably weighted role. In the absence of such a mechanism, PH also offers no obvious way of handling equivalence tasks, despite the fact that participants seem perfectly able to make such judgments. Although this issue is not addressed by Brandstätter et al., one supposes that equivalences would require a further set of rules. It would be interesting to see what such a set would entail; how it would relate to the choice rules; and how well it would be able to accommodate the conjunctions between the certainty equivalents, probability equivalents, and choices discussed above. PRAM requires no such additional set(s) of rules/principles: The appropriate trade-offs are intrinsic to the model, and the same two free parameters can be applied equally well to the various equivalences as to pairwise choices.

5. New Experimental Evidence

It will have become apparent, in Sections 2 and 3 in particular, that although there is some evidence consistent with various of the more striking and distinctive implications of PRAM, that evidence is somewhat scattered and happenstantial. Moreover, because such evidence as there is was mostly in existence before PRAM was formulated24 and could arguably have been influential in shaping PRAM, it does not constitute a proper test of the model. So in February 2009, an experiment was conducted to investigate certain implications of PRAM more directly.

5.1 Design

That experiment revolved around the 14 pairs of lotteries listed in Table 1 and illustrated in Figure 3 above, for which the bS/bR ratio is 0.25, together with the 14 pairs obtained by rotating Figure 3 around the 45° line passing through F, as illustrated on the right side of Figure 6. All of the latter 14 pairs therefore involve a bS/bR ratio of 4.0. So a comparison of responses to these two sets of 14 pairs will shed light on the adequacy of the implication derived in Section 2 that inverting the bS/bR ratio will, in effect, turn Table 1 upside down.

Thus, if the α-only form of PRAM is sufficient, we should expect to see the safer lotteries chosen with decreasing frequency (and therefore the riskier alternatives being chosen with increasing frequencies) as the distances between alternatives are scaled down in the left-hand triangle (LHT) in Figure 6; whereas for those pairs in the right-hand triangle (RHT), we should expect the safer lotteries to be chosen with increasing frequency as the distances between alternatives are scaled down.

Besides this upside-down implication of inverting the bS/bR ratio, we can also check the implication of PRAM which distinguishes it from the class of all models entailing transitivity: For each subsample, we should expect any betweenness cycles to be more frequent in the R ≻ T, T ≻ S, S ≻ R direction in the LHT but more frequent in the S ≻ T, T ≻ R, R ≻ S direction in the RHT.

Further pairs were constructed to test the implication set out in Section 3 that, when we vary payoffs as well as probabilities, we may expect cycles among scaled-up pairs to be more frequently in the direction of regret cycles, whereas the corresponding scaled-down pairs should be more likely to exhibit an asymmetry in the opposite direction involving a preponderance of similarity cycles.25

A 2 × 2 × 2 design was used, creating 8 series of choices involving (a) two sets of payoffs (but in both cases keeping the probability distributions of the lotteries exactly as in Figure 6), (b) two opposite orders of presentation of the choices, and (c) reversing which lottery was displayed in the upper position and which was displayed in the lower position. A total of 134 respondents took part, allocated at random between the various subsamples. The incentive for them to give honest and careful answers was that they knew that after they had made all of their decisions, one question would be picked at random (independently and

22 Loomes (2005) discussed the differences between various kinds of error models and showed how the appropriate null and alternative hypotheses may be quite different, depending on the error model used.

23 An example given on p. 425 of Brandstätter et al. (2006) involves a choice between A = (88, 0.74) and B = (19, 0.86), where PH predicts choosing B but where the majority of the sample actually picked A, whose expected value is four times that offered by B. Similar failures were apparent in a number of other pairs.

24 It was mostly in existence but not entirely: The experiment described in Loomes et al. (2009) had not been conducted at the time the earlier version of this article was submitted for consideration, and in that sense, the data from that experiment can be regarded as an independent test of the particular implications set out toward the end of Section 3.

25 This is not such a novel feature of the design, because, as noted in Section 3, some evidence of this asymmetry has already been reported in Day and Loomes (2009). Still, it is a distinctive implication of the PRAM model and it seemed worthwhile to take the opportunity to check on the robustness of the Day and Loomes results for different sets of payoffs than those used by them.


transparently for each respondent), and they would be paid on the spot and in cash according to how their choice in that question played out.

5.2 Results

The aggregate data are summarized in Table 2.26 For each triangle, responses are reported in the form of differences compared with the baseline choice between F and J, because for every payoff set, the α-only form normalizes perceptions relative to this fully scaled-up pair. To illustrate how to read Table 2, the row reporting the fully scaled-up choice between F and J in the LHT shows that 109 of the 134 respondents chose the safer option (in this case, a sure sum). To normalize, 109/134 is therefore set equal to 0.

When the distance between lotteries falls to 0.75 of the fully scaled-up distance (i.e., in the rows reporting F vs. H and G vs. J), the −10 and −8 figures indicate that in the LHT, there were 10 fewer choices of the safer option in the F versus H pair (i.e., the Safer:Riskier split was 99:35) and eight fewer choices of the safer option in the G versus J case.

On the basis of the α-only model, scaling the probabilities down further should increase the movement from safer to riskier in the LHT, and Table 2 gives some support for this prediction: For the three choices where the probabilities are scaled down by 0.5 (namely, C vs. E, G vs. H, and K vs. M), there were, respectively, 11, 35, and 19 fewer safe choices, an average movement of just short of −22, compared with an average of −9 for the two pairs involving a 0.75 scaling. On the other hand, the α-only model would not entail any substantial differences in the proportions of safer and riskier choices within a given level of scaling, whereas there appears to be considerable variability at the 0.5 level of scaling.

Similar remarks may be made about the further scaling down to the 0.25 level: Across all eight of those pairs, the average movement is −33.625, compared with an average of −22 at the 0.5 level; but again, there is considerable variability within the 0.25 level, with the movement relative to F versus J ranging from −10 to −63. This raises the suspicion that although the α-only model may be consistent with a clear average trend in the data, there may be at least one other factor at work creating additional and somewhat orthogonal variability.

Now consider the evidence from the RHT. First, recall that the α-only model predicts that inverting the bS/bR ratio turns the Table 1 ordering upside-down, which in this case means that the movements relative to F versus J should be in the opposite direction (i.e., the signs should be reversed), and that the magnitude of those movements should increase as the probabilities are progressively scaled down.

On the first of these counts—the reversal of direction of movement—Table 2 shows that whereas all 13 signs were negative for the LHT pairs, 10 of the 13 in the RHT are positive, and one is zero. Second, at the 0.75 level of scaling, the average is +14.5, whereas for the 0.5 level, it increases to +20. However, contrary to the α-only specification, the average for the eight pairs at the 0.25 level falls back somewhat to +16.375.

Once again, there is considerable variability at both the 0.5 and 0.25 level, including, at the 0.25 level, three pairs exhibiting either no movement or some movement in the opposite direction to that exhibited by the great majority of RHT pairs. Notice that the three pairs in question (F vs. G, K vs. L, and N vs. P) are also the three displaying the largest negative values in the LHT.27

These are the three pairs that are liable to be affected by the so-called “bottom-edge effect.” This effect reflects the regularity observed in many past studies and identified in a number of econometric analyses where it has appeared that preferences in the vicinity of the bottom edge of the M-M triangle exhibit significantly less risk aversion/greater risk seeking than in any other region of the triangle. (For examples of this kind of analysis and discussions of the related literature, see Buschena & Zilberman, 2000, and Loomes, Moffatt, & Sugden, 2002.)

26 The Appendix gives an expanded version of Table 2, showing the patterns displayed by the two subsamples who were presented with different sets of payoffs: that is, for the 68 respondents for whom x3 = £18, x2 = £12, and x1 = 0 in the LHT, together with x3 = £50, x2 = £5, and x1 = 0 in the RHT, and the 66 respondents for whom x3 = £15, x2 = £10, and x1 = 0 in the LHT and x3 = £70, x2 = £7, and x1 = 0 in the RHT.

27 The expanded version of Table 2 given in the Appendix shows that this pattern is displayed by each of the subsamples presented with different sets of payoffs.

[Two Marschak-Machina triangles, each with probability axes running from 0 to 1, showing the locations of the lotteries A, B, C, D, E, F, G, H, J, K, L, M, N, and P used in the experimental design.]

Figure 6. The two Marschak-Machina triangles underpinning the experimental design.


Without a specification for any such effect and a formal model of any interaction with other variables, it is not clear exactly how to try to separate it from the α component. However, one (no doubt overly simplistic) strategy might be to try to net out part or all of the bottom-edge effect by subtracting the LHT movements from the RHT movements, thereby focusing attention on the role of the α component, which entails the difference between RHT movements and LHT movements getting larger as the probabilities are scaled down.

The right column of Table 2 shows the results of such subtraction. The pair involving F versus G appears somewhat aberrant; but apart from that pair, the pattern is very much as the α component would suggest: For the three levels of scaling, the average RHT − LHT difference changes from 23.5 to 41.67 to 55 (50 if F vs. G is included).
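The averages just quoted can be recomputed directly from the right-hand column of Table 2 (a simple arithmetic check, with the differences keyed in by hand):

    diffs_075 = [18, 29]
    diffs_050 = [47, 53, 25]
    diffs_025 = [46, 56, 56, 15, 59, 54, 55, 59]   # includes F vs. G (15)

    for label, xs in [("0.75", diffs_075), ("0.5", diffs_050), ("0.25", diffs_025)]:
        print(label, sum(xs) / len(xs))
    print("0.25 excluding F vs. G", sum(diffs_025[:3] + diffs_025[4:]) / 7)
    # Output: 23.5, 41.67, 50.0, and 55.0, matching the figures reported in the text.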

Thus, although it is clear that an α-only model is not sufficient to capture all important patterns in the data, its rather striking upside-down implications appear to have good support in every area of both triangles away from the bottom edge. For F versus G, K versus L, and N versus P in the case where bS/bR = 0.25, the α effect and the bottom-edge effect work in the same direction to produce very strong movements, one of which, involving the comparison of F versus J with N versus P, produces the classic form of Allais paradox/CRE, which has been so widely and reliably replicated and which constitutes the single strongest violation of the independence axiom of EU theory. However, when the value of bS/bR is inverted, the influences of α and the bottom-edge effect work in opposite directions, with the result that the standard CRE is greatly attenuated or, in this case, completely eliminated. At the same time, the opposite reversals are now observed in other areas of the triangle away from the bottom edge. Thus although the α component cannot claim to be the only factor at work, the data thus far would seem to support its claim to have an important and distinctive influence.

Let us now consider the other particularly distinctive implication of the PRAM model: namely, a tendency for systematic violations of transitivity both within a given triangle and across different payoff sets. We start with betweenness cycles.

As shown in Section 2 above, the α-only model entails that for any individual for whom α > 0, there will be some range of ξ(yR, yS) such that there will exist some mixture T of the S(afer) and R(iskier) lotteries for which R ≻ T, T ≻ S, but S ≻ R when bS/bR < 1, or S ≻ T, T ≻ R, but R ≻ S when bS/bR > 1.
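A small worked sketch may help to fix ideas. Suppose S = (30, 1), R = (40, 0.8; 0, 0.2), and T is the 50-50 probability mixture of S and R, with ci = xi; the parameter values α = 1 and δ = 1.5 are hypothetical, and the exponent (bS + bR)^{−α} on the probability ratio is the same reconstruction used for Expressions 12-14 above:

    alpha, delta = 1.0, 1.5                  # hypothetical perception parameters
    xi = ((40 - 30) / (30 - 0)) ** delta     # payoff argument for the riskier side, about 0.19

    def phi(b_safe, b_risky):
        # Perceived probability argument for the safer side of a pair,
        # using the reconstructed exponent (b_safe + b_risky) ** -alpha.
        return (b_safe / b_risky) ** ((b_safe + b_risky) ** -alpha)

    print("S > R:", phi(0.2, 0.8) > xi)   # True: 0.25 beats 0.19 in the fully scaled-up pair
    print("T > S:", phi(0.1, 0.4) < xi)   # True: 0.0625 loses to 0.19 in the scaled-down pair
    print("R > T:", phi(0.1, 0.4) < xi)   # True: same scaled-down comparison for the other pair
    # Together these give the betweenness cycle R > T, T > S, S > R predicted when bS/bR < 1.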

Of course, a theoretical proof of existence does not mean it is necessarily easy to observe significant amounts of supporting evidence in an experiment. The practical difficulty is that different individuals may have different values of ξ(yR, yS) for the same set of three payoffs, and for many sets of payoffs, these values may not be very close to the value which would give S ∼ R. Even for two individuals with the same ξ(yR, yS) giving S ∼ R, the mixtures T that lie in the critical range will vary for different values of α. In short, an experiment which presents everyone with the same predetermined sets of pairwise choices may hit the critical range for only a minority of respondents. An experiment based on predetermined pairwise choices may therefore be a rather crude instrument for this particular purpose: Testing this aspect of PRAM more thoroughly may require a more sensitive instrument that can be more closely tailored to different individuals’ ξ(yR, yS) and α values. With that caveat in mind, the data from the experiment may nevertheless give some indication of whether there are patterns which are consistent with PRAM’s implications.

As Figure 6 shows, there is a betweenness triple involving C, D, and E and another involving K, L, and M. There are also four lotteries on the same line in the form of F, G, H, and J. There are four possible combinations of three from these four, and (bearing in mind that these are not independent of each other) Table 3 reports the numbers of cycles from all four as well as from CDE and KLM for each triangle.

Table 2
Choices for Different Pairs of Lotteries From Figure 6

                                             Frequency of choice of S
Value of φ(bS, bR)        Pair        LHT, bS/bR = 0.25    RHT, bS/bR = 4.0    Difference: RHT − LHT
(bS/bR)^{1^{−α}}          F vs. J     0 (= 109/134)        0 (= 87/134)         0
(bS/bR)^{0.75^{−α}}       F vs. H     −10                  +8                  18
                          G vs. J     −8                   +21                 29
(bS/bR)^{0.5^{−α}}        C vs. E     −11                  +36                 47
                          G vs. H     −35                  +18                 53
                          K vs. M     −19                  +6                  25
(bS/bR)^{0.25^{−α}}       A vs. B     −10                  +36                 46
                          C vs. D     −22                  +34                 56
                          D vs. E     −29                  +27                 56
                          F vs. G     −39                  −24                 15
                          H vs. J     −27                  +32                 59
                          K vs. L     −63                  −9                  54
                          L vs. M     −20                  +35                 55
                          N vs. P     −59                  0                   59

Note. LHT = left-hand triangle; RHT = right-hand triangle.


In the LHT, PRAM entails a tendency for the riskier options to be chosen more frequently at the 0.25 level of scaling than at the 0.5 level, which in turn entails the cycle listed second in each pair being observed more frequently than the cycle listed first (i.e., C ≻ E, E ≻ D, D ≻ C occurring more frequently than C ≻ D, D ≻ E, E ≻ C, and so on). The asymmetry is in line with this prediction in every case. For the RHT, PRAM entails the opposite asymmetry. Although the numbers are small, this prediction is also borne out in five of the six comparisons.28

Given the modest numbers in any one of these instances, another perspective can be obtained by taking the individual respondent as the unit of analysis. For each triangle, the 134 respondents can be divided into four categories: those who exhibit no cycles in any of the six instances, those who exhibit at least one cycle only in the direction consistent with PRAM, those who exhibit at least one cycle only in the direction opposite to that entailed by PRAM, and those who exhibit at least one cycle in each direction.

For the LHT, the breakdown for the four categories is 71, 39, 12, and 12, respectively, whereas for the RHT, the breakdown is 88, 31, 9, and 6. So in both cases, the numbers exhibiting cycles consistent with PRAM are considerably greater than those exhibiting the unpredicted cycles. This is a pattern that no transitive theory can readily accommodate.

The other predicted violation of transitivity entailed by PRAM—the predominance of regret cycles in scaled-up lotteries, together with the predominance of the opposite similarity cycles in scaled-down lotteries—was also investigated in the experiment. This involved four scaled-up lotteries and their four scaled-down counterparts, as shown in Table 4.

To provide variety and complement the different sets of payoffs used in the triangles, for 68 of the respondents the payoffs here were w = £9, x = £15, y = £25, and z = £45, whereas for the other 66 respondents, w = £8, x = £14, y = £21, and z = £35. There were no systematic differences in the distributions of choices produced by the different payoff sets, so the data from both have been pooled for the purposes of Table 5, which reports the numbers of cycles of each kind for each possible triple.29

Because each choice pair appears in two triples, the above patterns cannot be considered completely independent of one another, but the picture is the same whichever triple we take: For the scaled-up lotteries, regret cycles are more frequent, whereas for the scaled-down counterparts, similarity cycles predominate. In each case, a chi-squared test rejects the null that there is no significant difference between the patterns of asymmetry when we compare a scaled-up triple with its scaled-down counterpart (in one case, p < .05, in one case p < .01, and in two cases p < .001).

Table 5
"Regret" and "Similarity" Cycles

Cycle                      Scaled up   Scaled down
W ≻ X, X ≻ Y, Y ≻ W            11            8
W ≻ Y, Y ≻ X, X ≻ W             4           15
W ≻ X, X ≻ Z, Z ≻ W            13            4
W ≻ Z, Z ≻ X, X ≻ W             3           16
W ≻ Y, Y ≻ Z, Z ≻ W            12            4
W ≻ Z, Z ≻ Y, Y ≻ W             3           21
X ≻ Y, Y ≻ Z, Z ≻ X            10            9
X ≻ Z, Z ≻ Y, Y ≻ X             6           29

At the level of the individual, we can divide the 134 respondents into four categories: those who exhibit no cycles in any of the four triples, those who exhibit at least one cycle only in the regret direction, those who exhibit at least one cycle only in the similarity direction, and those who exhibit at least one cycle in each direction. For the scaled-up lotteries, the respective frequencies are 94, 26, 7, and 7, whereas for the scaled-down lotteries, the corresponding frequencies are 69, 15, 48, and 2: a clear switch from the predominance of regret cycles over similarity cycles in the scaled-up cases to the opposite asymmetry when the lotteries are scaled down. This is a very similar picture to the one reported in Day and Loomes (2009) and appears to confirm the robustness of a pattern which is consistent with PRAM but not with regret theory or with any model entailing transitivity.

Overall, then, the results from this experiment appear to give some considerable (although not unqualified) support to the PRAM model. There are two main qualifications.

First, although the α-only specification is consistent with a good deal of the evidence in many regions of the two triangles considered above, there appears to be one other significant effect—the so-called "bottom-edge" effect—which it does not capture. Some further investigation of this effect is required before we can judge how best to explain/incorporate it: for example, we might explore whether it applies only when the safer lottery lies strictly on the bottom edge or whether it continues to hold for safer lotteries located close to but just above that edge, and/or we might see whether it is much stronger in cases when x1 = 0 than when x1 takes some positive value.

28 More detailed tables showing, for each triple, the frequencies of the six patterns of choice that conform with different orderings as well as the two types of cycle, broken down by payoff sets, can be found in the Appendix. For the LHT, the asymmetry is in the predicted direction in nine of the 12 instances, with equal numbers of both cycles in the other three cases. For the RHT, the asymmetry is in the predicted (opposite) direction in nine of the 12 instances, with equal numbers of both cycles in one case and with the asymmetry in the unpredicted direction in two cases (the triple FGH for each subsample).

29 The more detailed tables showing, for each triple, the frequencies of the six patterns of choice that conform with different orderings, broken down by payoff sets, are given in the Appendix. When broken down in this way, the asymmetries are in the predicted direction in seven of the eight scaled-up cases and in the predicted (opposite) direction in every one of the eight scaled-down cases.

Table 3
Betweenness Cycles

Cycle                      LHT (bS/bR = 0.25)   RHT (bS/bR = 4.0)
C ≻ D, D ≻ E, E ≻ C               5                    7
C ≻ E, E ≻ D, D ≻ C              10                    6
F ≻ G, G ≻ H, H ≻ F               2                    4
F ≻ H, H ≻ G, G ≻ F              20                    8
F ≻ G, G ≻ J, J ≻ F               6                    8
F ≻ J, J ≻ G, G ≻ F              15                    5
F ≻ H, H ≻ J, J ≻ F               2                   16
F ≻ J, J ≻ H, H ≻ F               6                    0
G ≻ H, H ≻ J, J ≻ G               7                    6
G ≻ J, J ≻ H, H ≻ G              17                    1
K ≻ L, L ≻ M, M ≻ K               6                    8
K ≻ M, M ≻ L, L ≻ K              14                    1

Note. LHT = left-hand triangle; RHT = right-hand triangle.

Table 4
Parameters of the Scaled-Up and Scaled-Down Lotteries

Scaled-up                                Scaled-down
W: certainty of payoff w                 W': 0.25 chance of w, 0.75 chance of 0
X: 0.8 chance of x, 0.2 chance of 0      X': 0.2 chance of x, 0.8 chance of 0
Y: 0.6 chance of y, 0.4 chance of 0      Y': 0.15 chance of y, 0.85 chance of 0
Z: 0.4 chance of z, 0.6 chance of 0      Z': 0.1 chance of z, 0.9 chance of 0

Second, although the use of pairwise choices between predetermined sets of lotteries has the advantage of simplicity, it is a somewhat blunt instrument which could miss the critical ranges of values for many respondents in any sample. So the numbers exhibiting cycles in any particular case reported above are often quite small. Even so, when Tables 3 and 5 are taken in conjunction with the analysis at the level of the individual, the overall pattern constitutes a prima facie case in support of PRAM (or, at the very least, in support of further investigation with other more sensitive instruments).

6. Concluding Remarks

The past 30 years have seen the development of an array of "alternative" theories which try in different ways to account for the many well-established regularities observed in individual decision experiments (see Starmer, 2000, for a review of "the hunt for a descriptive theory of choice under risk" and Rieskamp, Busemeyer, & Mellers, 2006, for a review from a more psychological perspective).

However, no single theory has so far been able to organize more than a (fairly limited) subset of the evidence. This has been something of a puzzle, because all of the regularities in question are generated by the same kinds of people. In fact, in some experiments, the very same group of individuals exhibit many of them one after the other in the same session. So it would seem that there really ought to be some reasonably simple model of individual decision making under risk that is able to account for a substantial proportion of the most robust regularities.

It has been argued above that PRAM (or something very much like it) offers a contribution to solving that puzzle by representing the way that many participants make pairwise choices and judge equivalences in cases where there are no more than three payoffs—this being the nature of the great majority of experimental designs. Using some fairly simple propositions about perception and judgment, PRAM shows how a typical sample of participants may, between them, be liable to exhibit all of the following regularities: the common ratio effect, violations of betweenness, betweenness cycles, the reflection effect and fourfold attitudes to risk, similarity cycles, regret cycles, and preference reversals involving both certainty and probability equivalences. Moreover, all of these results were generated without requiring any special assumptions about framing effects, reference points, failures of procedural invariance, and so on.

The development of alternative decision theories during the past 30 years has often been influenced by the desire to incorporate/defend particular assumptions or axioms for normative reasons. But if the experimental data are actually generated by PRAM-like perceptions influenced by between-lottery comparisons of probabilities and/or payoffs, any model which disallows such between-lottery influences on normative grounds is liable to fail descriptively. The data simply will not fit such theories, and the price to be paid for trying to force them into the wrong mold is that various supplementary assumptions or forms of special pleading have to be invoked and/or that the estimates arising from fitting such misspecified models could be seriously misleading.

On the other hand, it has to be acknowledged that although pairwise comparisons involving no more than three payoffs have been the staple diet of individual decision experiments, they are only a small subset of the kinds of risky decisions which are of interest to psychologists, economists, and decision theorists. What if the kinds of between-lottery effects modeled by PRAM are specific to—or at least, particularly pronounced in—these two-alternative, three-payoff cases? If this is the case, how far can we extrapolate from these data to other scenarios?

For example, suppose we want a model which organizes behavior when decision makers are choosing between a larger number of more complex risky prospects. Perhaps the types of pairwise comparisons modeled by PRAM are less important in such cases: Indeed, perhaps they are superseded altogether by other judgmental considerations. It might be that a model which fails on almost every front in the special class of experimental pairwise choices could do much better in other scenarios which bring additional and/or different judgmental processes into play.30 This raises the possibility that the usefulness of any particular theory as a descriptive model of decision behavior may depend on the characteristics of the class of problems to which it is being applied, and different models may be more or less successful in different kinds of scenarios. At the very least, this points to a need for experimental research to pay more attention not only to other areas of the M-M triangle and to choices connected by lines with different gradients within that triangle but also to choices involving more complex lotteries and/or larger choice sets.

30 There is some tentative support for this suggestion in Bateman et al. (2006), which shows that when participants were asked to rank larger sets of prospects, the usual CRE pattern, which has been so widely and strongly found in pairwise choice designs, was greatly attenuated.

References

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'Ecole Americaine [The behavior of rational man in the face of risk: A critique of the postulates and axioms of the American school]. Econometrica, 21, 503–546.
Bateman, I., Day, B., Loomes, G., & Sugden, R. (2006). Ranking versus choice in the elicitation of preferences [Working paper]. University of East Anglia.
Battalio, R., Kagel, J., & Jiranyakul, K. (1990). Testing between alternative models of choice under uncertainty: Some initial results. Journal of Risk and Uncertainty, 3, 25–50.


Becker, G., DeGroot, M., & Marschak, J. (1963). Stochastic models of choice behavior. Behavioral Science, 8, 41–55.
Bell, D. (1982). Regret in decision making under uncertainty. Operations Research, 30, 961–981.
Bell, D. (1985). Disappointment in decision making under uncertainty. Operations Research, 33, 1–27.
Bernasconi, M. (1994). Nonlinear preferences and two-stage lotteries: Theory and evidence. Economic Journal, 104, 54–70.
Brandstatter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113, 409–432.
Budescu, D., & Weiss, W. (1987). Reflection of transitive and intransitive preferences: A test of prospect theory. Organizational Behavior and Human Decision Processes, 39, 184–202.
Buschena, D., & Zilberman, D. (1999). Testing the effects of similarity on risky choice: Implications for violations of expected utility. Theory and Decision, 46, 251–276.
Buschena, D., & Zilberman, D. (2000). Generalized expected utility, heteroscedastic error, and path dependence in risky choice. Journal of Risk and Uncertainty, 20, 67–88.
Butler, D., & Loomes, G. (2007). Imprecision as an account of the preference reversal phenomenon. American Economic Review, 97, 277–297.
Camerer, C. (1995). Individual decision making. In J. Kagel & A. Roth (Eds.), Handbook of experimental economics (pp. 587–703). Princeton, NJ: Princeton University Press.
Day, B., & Loomes, G. (2009). Conflicting violations of transitivity and where they may lead us. Theory and Decision, 68, 233–242.
Fishburn, P. (1982). Nontransitive measurable utility. Journal of Mathematical Psychology, 26, 31–67.
Fishburn, P. (1988). Nonlinear preference and utility theory. Brighton, England: Wheatsheaf.
Gonzalez-Vallejo, C. (2002). Making trade-offs: A probabilistic and context-sensitive model of choice behavior. Psychological Review, 109, 137–155.
Gul, F. (1991). A theory of disappointment aversion. Econometrica, 59, 667–686.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Leland, J. (1994). Generalized similarity judgments: An alternative explanation for choice anomalies. Journal of Risk and Uncertainty, 9, 151–172.
Leland, J. (1998). Similarity judgments in choice under uncertainty: A reinterpretation of the predictions of regret theory. Management Science, 44, 659–672.
Lichtenstein, S., & Slovic, P. (1971). Reversals of preference between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46–55.
Lindman, H., & Lyons, J. (1978). Stimulus complexity and choice inconsistency among gambles. Organizational Behavior and Human Performance, 21, 146–159.
Loomes, G. (1988). When actions speak louder than prospects. American Economic Review, 78, 463–470.
Loomes, G. (2005). Modelling the stochastic component of behavior in experiments: Some issues for the interpretation of data. Experimental Economics, 8, 301–323.
Loomes, G. (2006). The improbability of a general, rational and descriptively adequate theory of decision under risk [Working paper]. University of Warwick. Available at http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/loomes/workingpapers
Loomes, G., Moffatt, P., & Sugden, R. (2002). A microeconometric test of alternative stochastic theories of risky choice. Journal of Risk and Uncertainty, 24, 103–130.
Loomes, G., Pinto-Prades, J.-L., Abellan-Perpinan, J., & Rodríguez-Miguez, E. (2009). Modelling noise and imprecision in individual decisions [Working paper]. University of Warwick. Available at http://www2.warwick.ac.uk/fac/soc/economics/staff/academic/loomes/workingpapers
Loomes, G., Starmer, C., & Sugden, R. (1991). Observing violations of transitivity by experimental methods. Econometrica, 59, 425–439.
Loomes, G., & Sugden, R. (1982). Regret theory: An alternative theory of rational choice under uncertainty. Economic Journal, 92, 805–824.
Loomes, G., & Sugden, R. (1986). Disappointment and dynamic consistency in choice under uncertainty. Review of Economic Studies, 53, 271–282.
Loomes, G., & Sugden, R. (1987). Some implications of a more general form of regret theory. Journal of Economic Theory, 41, 270–287.
Loomes, G., & Sugden, R. (1995). Incorporating a stochastic element into decision theories. European Economic Review, 39, 641–648.
Loomes, G., & Sugden, R. (1998). Testing different stochastic specifications of risky choice. Economica, 65, 581–598.
Loomes, G., & Taylor, C. (1992). Nontransitive preferences over gains and losses. Economic Journal, 102, 357–365.
Machina, M. (1982). 'Expected utility' theory without the independence axiom. Econometrica, 50, 277–323.
Mellers, B., & Biagini, K. (1994). Similarity and choice. Psychological Review, 101, 505–518.
Moffatt, P. (2005). Stochastic choice and the allocation of cognitive effort. Experimental Economics, 8, 369–388.
Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior & Organization, 3, 323–343.
Rabin, M. (2000). Risk aversion and expected utility theory: A calibration theorem. Econometrica, 68, 1281–1292.
Rieskamp, J., Busemeyer, J., & Mellers, B. (2006). Extending the bounds of rationality: Evidence and theories of preferential choice. Journal of Economic Literature, 44, 631–661.
Rubinstein, A. (1988). Similarity and decision making under risk: Is there a utility theory resolution to the Allais paradox? Journal of Economic Theory, 46, 145–153.
Schmidt, U., Starmer, C., & Sugden, R. (2008). Third-generation prospect theory. Journal of Risk and Uncertainty, 36, 203–223.
Seidl, C. (2002). Preference reversal. Journal of Economic Surveys, 16, 621–655.
Shafir, E., Osherson, D., & Smith, E. (1993). The advantage model: A comparative theory of evaluation and choice under risk. Organizational Behavior and Human Decision Processes, 55, 325–378.
Starmer, C. (2000). Developments in non-expected utility theory: The hunt for a descriptive theory of choice under risk. Journal of Economic Literature, 38, 332–382.
Stewart, N., Chater, N., Stott, H., & Reimers, S. (2003). Prospect relativity: How choice options influence decisions under risk. Journal of Experimental Psychology: General, 132, 23–46.
Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76, 31–48.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of Business, 59, S251–S278.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.
Tversky, A., Slovic, P., & Kahneman, D. (1990). The causes of preference reversal. American Economic Review, 80, 204–217.



Appendix

Breakdown of Results by Subsample

For each triangle, 68 respondents were presented with choices where x3 = £18, x2 = £12, and x1 = 0 in the left-hand triangle (LHT), together with x3 = £50, x2 = £5, and x1 = 0 in the right-hand triangle (RHT); the other 66 faced choices where x3 = £15, x2 = £10, and x1 = 0 in the LHT and where x3 = £70, x2 = £7, and x1 = 0 in the RHT. This expanded version of what is Table 2 in the main text shows the data broken down accordingly (see Table A1).

Betweenness Cycles and Orderings

For each triangle, there are six transitive orderings consistent with six different combinations of pairwise choice, plus two choice cycles.

In each case below (see Table A2), the letters in the left column should be translated into their column counterparts according to alphabetical order. So, for example, in the case of the triple involving lotteries K, L, and M in the LHT presented to 68 respondents, the ordering B ≻ A ≻ C translates into L ≻ K ≻ M. So in LHT68, reading along the row labeled B ≻ A ≻ C and down the column headed KLM shows that 19 respondents made choices consistent with this ordering (i.e., L ≻ K, L ≻ M, and K ≻ M).

Regret and Similarity Cycles and Orderings

Table A3 should be read in the same way as the betweenness tables above. There were 68 respondents for whom w = £9, x = £15, y = £25, and z = £45 and 66 respondents for whom w = £8, x = £14, y = £21, and z = £35.

Table A1
Choice Data From Figure 6, by Subsample

                            Frequency of choice of S                          Difference: RHT - LHT
Scale   Pair       LHT68          LHT66          RHT68          RHT66          n = 68   n = 66
1       F vs. J    0 (= 55/68)    0 (= 54/66)    0 (= 42/68)    0 (= 45/66)     0        0
0.75    F vs. H    -10            0              +5             +3             15        3
        G vs. J    -10            +2             +10            +11            20        9
0.5     C vs. E    -2             -9             +19            +17            21       26
        G vs. H    -22            -13            +9             +9             31       22
        K vs. M    -13            -6             +1             +5             14       11
0.25    A vs. B    -5             -5             +20            +16            25       21
        C vs. D    -12            -10            +18            +16            30       26
        D vs. E    -12            -17            +13            +14            25       31
        F vs. G    -25            -14            -10            -14            15        0
        H vs. J    -13            -14            +19            +13            32       27
        K vs. L    -34            -29            -2             -7             32       22
        L vs. M    -12            -8             +18            +17            30       25
        N vs. P    -28            -31            -3             +3             25       34

Note. LHT = left-hand triangle; RHT = right-hand triangle. As in Table 2, entries for pairs other than F vs. J are expressed relative to the F vs. J frequencies.


Table A2
Betweenness Cycles and Orderings, by Subsample

Ordering                    CDE   FGH   FGJ   FHJ   GHJ   KLM

LHT68
A ≻ B, B ≻ C, C ≻ A           3     2     3     1     6     3
A ≻ B ≻ C                    26    18    22    26    21    15
A ≻ C ≻ B                    11     8     4    15     5     1
B ≻ A ≻ C                     8     4    17     9     8    19
B ≻ C ≻ A                     6     9     3     6     7     6
C ≻ A ≻ B                     3     2     1     3     1     2
C ≻ B ≻ A                     3    10     6     3     9    15
C ≻ B, B ≻ A, A ≻ C           8    15    12     5    11     7

LHT66
A ≻ B, B ≻ C, C ≻ A           2     0     3     1     1     3
A ≻ B ≻ C                    26    27    34    33    25    20
A ≻ C ≻ B                    11    12     3    16    15     2
B ≻ A ≻ C                     6    10    14     4    10    19
B ≻ C ≻ A                     3     4     5     2     4     4
C ≻ A ≻ B                     5     1     0     4     0     0
C ≻ B ≻ A                    11     7     4     5     5    11
C ≻ B, B ≻ A, A ≻ C           2     5     3     1     6     7

RHT68
A ≻ B, B ≻ C, C ≻ A           4     2     6     9     5     7
A ≻ B ≻ C                    47    27    24    34    44    28
A ≻ C ≻ B                     6     2     0     3     2     3
B ≻ A ≻ C                     4    13    14     5     5    11
B ≻ C ≻ A                     0     9     8    13     7    14
C ≻ A ≻ B                     3     1     2     1     0     2
C ≻ B ≻ A                     0     9    10     3     4     2
C ≻ B, B ≻ A, A ≻ C           4     5     4     0     1     1

RHT66
A ≻ B, B ≻ C, C ≻ A           3     2     2     7     1     1
A ≻ B ≻ C                    54    26    27    40    49    36
A ≻ C ≻ B                     4     0     0     1     2     0
B ≻ A ≻ C                     2    19    17     4     5    14
B ≻ C ≻ A                     0     7    10     7     3    11
C ≻ A ≻ B                     0     3     2     0     2     1
C ≻ B ≻ A                     1     6     7     7     4     3
C ≻ B, B ≻ A, A ≻ C           2     3     1     0     0     0

Note. LHT = left-hand triangle; RHT = right-hand triangle. The two cycle rows in each panel (the first and last rows) are the items of data that are selected and aggregated in the corresponding tables in the main text.



Table A3
Regret and Similarity Cycles and Orderings, by Subsample

Ordering                    WXY68  WXY66  WXZ68  WXZ66  WYZ68  WYZ66  XYZ68  XYZ66

Scaled up
A ≻ B, B ≻ C, C ≻ A            8      3      8      5      6      6      9      1
A ≻ B ≻ C                     25     31     21     34     23     38     30     44
A ≻ C ≻ B                      3      3      8      2      4      1      6      1
B ≻ A ≻ C                      5      7      4      3      4      3      5      4
B ≻ C ≻ A                     12      8      7      9     12      6      4      2
C ≻ A ≻ B                      5      8      8     12      7      7      5     10
C ≻ B ≻ A                      8      4      9      1     10      4      5      2
C ≻ B, B ≻ A, A ≻ C            2      2      3      0      2      1      4      2

Scaled down
A ≻ B, B ≻ C, C ≻ A            6      2      0      4      0      4      4      5
A ≻ B ≻ C                     10     14     14     11     11     19     16     22
A ≻ C ≻ B                      1      2      2      1      6      2      5      2
B ≻ A ≻ C                      1      1      2      3      7      8      1      4
B ≻ C ≻ A                      6      5      6      5      3      3      3      2
C ≻ A ≻ B                      4     12     12     22     10      4      1      1
C ≻ B ≻ A                     31     24     23     13     22     14     24     15
C ≻ B, B ≻ A, A ≻ C            9      6      9      7      9     12     14     15

Note. The two cycle rows in each panel (the first and last rows) are the items of data that are selected and aggregated in the corresponding tables in the main text.

Received June 15, 2006
Revision received February 3, 2010
Accepted February 5, 2010


