PSYCHOLOGICAL BULLETIN, Vol. 51, No. 4, 1954

THE THEORY OF DECISION MAKING 1

WARD EDWARDS
The Johns Hopkins University

Many social scientists other than psychologists try to account for the behavior of individuals. Economists and a few psychologists have produced a large body of theory and a few experiments that deal with individual decision making. The kind of decision making with which this body of theory deals is as follows: given two states, A and B, into either one of which an individual may put himself, the individual chooses A in preference to B (or vice versa). For instance, a child standing in front of a candy counter may be considering two states. In state A the child has $0.25 and no candy. In state B the child has $0.15 and a ten-cent candy bar. The economic theory of decision making is a theory about how to predict such decisions.

Economic theorists have been concerned with this problem since the days of Jeremy Bentham (1748-1832). In recent years the development of the economic theory of consumer's decision making (or, as the economists call it, the theory of consumer's choice) has become exceedingly elaborate, mathematical, and voluminous. This literature is almost unknown to psychologists, in spite of sporadic pleas in both psychological (40, 84, 103, 104) and economic (101, 102, 123, 128, 199, 202) literature for greater communication between the disciplines.

1 This work was supported by Contract N5ori-166, Task Order I, between the Office of Naval Research and The Johns Hopkins University. This is Report No. 166-1-182, Project Designation No. NR 145-089, under that contract. I am grateful to the Department of Political Economy, The Johns Hopkins University, for providing me with an office adjacent to the Economics Library while I was writing this paper. M. Allais, M. M. Flood, N. Georgescu-Roegen, K. O. May, A. Papandreou, L. J. Savage, and especially C. H. Coombs have kindly made much unpublished material available to me. A number of psychologists, economists, and mathematicians have given me excellent, but sometimes unheeded, criticism. Especially helpful were C. Christ, C. H. Coombs, F. Mosteller, and L. J. Savage.

The purpose of this paper is to review this theoretical literature, and also the rapidly increasing number of psychological experiments (performed by both psychologists and economists) that are relevant to it. The review will be divided into five sections: the theory of riskless choices, the application of the theory of riskless choices to welfare economics, the theory of risky choices, transitivity in decision making, and the theory of games and of statistical decision functions. Since this literature is unfamiliar and relatively inaccessible to most psychologists, and since I could not find any thorough bibliography on the theory of choice in the economic literature, this paper includes a rather extensive bibliography of the literature since 1930.

THE THEORY OF RISKLESS CHOICES 2

2 No complete review of this literature is available. Kauder (105, 106) has reviewed the very early history of utility theory. Stigler (180) and Viner (194) have reviewed the literature up to approximately 1930. Samuelson's book (164) contains an illuminating mathematical exposition of some of the content of this theory. Allen (6) explains the concept of indifference curves. Schultz (172) reviews the developments up to but not including the Hicks-Allen revolution from the point of view of demand theory. Hicks's book (87) is a complete and detailed exposition of most of the mathematical and economic content of the theory up to 1939. Samuelson (167) has reviewed the integrability problem and the revealed preference approach. And Wold (204, 205, 206) has summed up the mathematical content of the whole field for anyone who is comfortably at home with axiom systems and differential equations.

Economic man. The method of those theorists who have been concerned with the theory of decision making is essentially an armchair method. They make assumptions, and from these assumptions they deduce theorems which presumably can be tested, though it sometimes seems unlikely that the testing will ever occur. The most important set of assumptions made in the theory of riskless choices may be summarized by saying that it is assumed that the person who makes any decision to which the theory is applied is an economic man.

What is an economic man like? He has three properties: (a) He is completely informed. (b) He is infinitely sensitive. (c) He is rational.

Complete information. Economic man is assumed to know not only what all the courses of action open to him are, but also what the outcome of any action will be. Later on, in the sections on the theory of risky choices and on the theory of games, this assumption will be relaxed somewhat. (For the results of attempts to introduce the possibility of learning into this picture, see 51, 77.)

Infinite sensitivity. In most of the older work on choice, it is assumed that the alternatives available to an individual are continuous, infinitely divisible functions, that prices are infinitely divisible, and that economic man is infinitely sensitive. The only purpose of these assumptions is to make the functions that they lead to continuous and differentiable. Stone (182) has recently shown that they can be abandoned with no serious changes in the theory of choice.

Rationality. The crucial fact about economic man is that he is rational. This means two things: He can weakly order the states into which he can get, and he makes his choices so as to maximize something.

Two things are required in order for economic man to be able to put all available states into a weak ordering. First, given any two states into which he can get, A and B, he must always be able to tell either that he prefers A to B, or that he prefers B to A, or that he is indifferent between them. If preference is operationally defined as choice, then it seems unthinkable that this requirement can ever be empirically violated. The second requirement for weak ordering, a more severe one, is that all preferences must be transitive. If economic man prefers A to B and B to C, then he prefers A to C. Similarly, if he is indifferent between A and B and between B and C, then he is indifferent between A and C. It is not obvious that transitivity will always hold for human choices, and experiments designed to find out whether or not it does will be described in the section on testing transitivity.
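
As a minimal sketch (not part of Edwards's text) of what these two weak-ordering requirements amount to computationally, the hypothetical Python check below tests a finite set of pairwise preference judgments for completeness and for transitivity of strict preference; the data, function name, and representation are illustrative assumptions.

```python
from itertools import permutations

def is_weak_ordering(states, prefers):
    """Check the two weak-ordering requirements on a finite set of states.

    `prefers[(a, b)]` is True if a is strictly preferred to b, False otherwise;
    indifference is represented by both (a, b) and (b, a) being False.
    (Transitivity of indifference is omitted here for brevity.)
    """
    # Completeness: every pair must be compared (preference or indifference).
    for a, b in permutations(states, 2):
        if (a, b) not in prefers or (b, a) not in prefers:
            return False
    # Transitivity of strict preference: A over B and B over C imply A over C.
    for a, b, c in permutations(states, 3):
        if prefers[(a, b)] and prefers[(b, c)] and not prefers[(a, c)]:
            return False
    return True

# A toy preference pattern over three states; preferring C to A creates a cycle.
states = ["A", "B", "C"]
prefers = {("A", "B"): True, ("B", "A"): False,
           ("B", "C"): True, ("C", "B"): False,
           ("A", "C"): False, ("C", "A"): True}
print(is_weak_ordering(states, prefers))  # False: an intransitive cycle
```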

The second requirement of rationality, and in some ways the more important one, is that economic man must make his choices in such a way as to maximize something. This is the central principle of the theory of choice. In the theory of riskless choices, economic man has usually been assumed to maximize utility. In the theory of risky choices, he is assumed to maximize expected utility. In the literature on statistical decision making and the theory of games, various other fundamental principles of decision making are considered, but they are all maximization principles of one sort or another.

The fundamental content of the notion of maximization is that economic man always chooses the best alternative from among those open to him, as he sees it. In more technical language, the fact that economic man prefers A to B implies and is implied by the fact that A is higher than B in the weakly ordered set mentioned above. (Some theories introduce probabilities into the above statement, so that if A is higher than B in the weak ordering, then economic man is more likely to choose A than B, but not certain to choose A.)

This notion of maximization is mathematically useful, since it makes it possible for a theory to specify a unique point or a unique subset of points among those available to the decider. It seems to me psychologically unobjectionable. So many different kinds of functions can be maximized that almost any point actually available in an experimental situation can be regarded as a maximum of some sort. Assumptions about maximization only become specific, and therefore possibly wrong, when they specify what is being maximized.

There has, incidentally, been almost no discussion of the possibility that the two parts of the concept of rationality might conflict. It is conceivable, for example, that it might be costly in effort (and therefore in negative utility) to maintain a weakly ordered preference field. Under such circumstances, would it be "rational" to have such a field?

It is easy for a psychologist to point out that an economic man who has the properties discussed above is very unlike a real man. In fact, it is so easy to point this out that psychologists have tended to reject out of hand the theories that result from these assumptions. This isn't fair. Surely the assumptions contained in Hullian behavior theory (91) or in the Estes (60) or Bush-Mosteller (36, 37) learning theories are no more realistic than these. The most useful thing to do with a theory is not to criticize its assumptions but rather to test its theorems. If the theorems fit the data, then the theory has at least heuristic merit. Of course, one trivial theorem deducible from the assumptions embodied in the concept of economic man is that in any specific case of choice these assumptions will be satisfied. For instance, if economic man is a model for real men, then real men should always exhibit transitivity of real choices. Transitivity is an assumption, but it is directly testable. So are the other properties of economic man as a model for real men.

Economists themselves are somewhat distrustful of economic man (119, 156), and we will see in subsequent sections the results of a number of attempts to relax these assumptions.

Early utility maximization theory. The school of philosopher-economists started by Jeremy Bentham and popularized by James Mill and others held that the goal of human action is to seek pleasure and avoid pain. Every object or action may be considered from the point of view of pleasure- or pain-giving properties. These properties are called the utility of the object, and pleasure is given by positive utility and pain by negative utility. The goal of action, then, is to seek the maximum utility. This simple hedonism of the future is easily translated into a theory of choice. People choose the alternative, from among those open to them, that leads to the greatest excess of positive over negative utility. This notion of utility maximization is the essence of the utility theory of choice. It will reappear in various forms throughout this paper. (Bohnert [30] discusses the logical structure of the utility concept.)

This theory of choice was embodied in the formal economic analyses of all the early great names in economics. In the hands of Jevons, Walras, and Menger it reached increasingly sophisticated mathematical expression, and it was embodied in the thinking of Marshall, who published the first edition of his great Principles of Economics in 1890, and revised it at intervals for more than 30 years thereafter (137).

The use to which utility theory was put by these theorists was to establish the nature of the demand for various goods. On the assumption that the utility of any good is a monotonically increasing negatively accelerated function of the amount of that good, it is easy to show that the amounts of most goods which a consumer will buy are decreasing functions of price, functions which are precisely specified once the shapes of the utility curves are known. This is the result the economists needed and is, of course, a testable theorem. (For more on this, see 87, 159.)
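
The derivation itself is in the cited sources; the sketch below is only a numerical illustration under assumptions of my own (a logarithmic utility curve and a constant utility cost of money), not the classical theorists' actual argument. It shows a negatively accelerated utility curve producing a quantity bought that falls as price rises.

```python
import math

def quantity_demanded(price, utility=math.log, step=0.01, max_q=1000.0):
    """Numerically find the quantity q that maximizes utility(q) - price * q.

    Treating money spent as forgone utility at a constant rate is a
    simplifying assumption made only for this illustration.
    """
    best_q, best_net = 0.0, float("-inf")
    q = step
    while q <= max_q:
        net = utility(q) - price * q
        if net > best_net:
            best_q, best_net = q, net
        q += step
    return best_q

# With a concave (negatively accelerated) utility curve, the quantity
# bought is a decreasing function of price, as the theorem requires.
for p in (0.5, 1.0, 2.0, 4.0):
    print(p, round(quantity_demanded(p), 2))
```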

Complexities arise in this theory when the relations between the utilities of different goods are considered. Jevons, Walras, Menger, and even Marshall had assumed that the utilities of different commodities can be combined into a total utility by simple addition; this amounts to assuming that the utilities of different goods are independent (in spite of the fact that Marshall elsewhere discussed the notions of competing goods, like soap and detergents, and completing goods, like right and left shoes, which obviously do not have independent utilities). Edgeworth (53), who was concerned with such nonindependent utilities, pointed out that total utility was not necessarily an additive function of the utilities attributable to separate commodities. In the process he introduced the notion of indifference curves, and thus began the gradual destruction of the classical utility theory. We shall return to this point shortly.

Although the forces of parsimony have gradually resulted in the elimination of the classical concept of utility from the economic theory of riskless choices, there have been a few attempts to use essentially the classical theory in an empirical way. Fisher (63) and Frisch (75) have developed methods of measuring marginal utility (the change in utility [u] with an infinitesimal change in amount possessed [Q], i.e., du/dQ) from market data, by making assumptions about the interpersonal similarity of consumer tastes. Recently Morgan (141) has used several variants of these techniques, has discussed mathematical and logical flaws in them, and has concluded on the basis of his empirical results that the techniques require too unrealistic assumptions to be workable. The crux of the problem is that, for these techniques to be useful, the commodities used must be independent (rather than competing or completing), and the broad commodity classifications necessary for adequate market data are not independent. Samuelson (164) has shown that the assumption of independent utilities, while it does guarantee interval scale utility measures, puts unwarrantably severe restrictions on the nature of the resulting demand function. Elsewhere Samuelson (158) presented, primarily as a logical and mathematical exercise, a method of measuring marginal utility by assuming some time-discount function. Since no reasonable grounds can be found for assuming one such function rather than another, this procedure holds no promise of empirical success. Marshall suggested (in his notion of "consumer's surplus") a method of utility measurement that turns out to be dependent on the assumption of constant marginal utility of money, and which is therefore quite unworkable. Marshall's prestige led to extensive discussion and debunking of this notion (e.g., 28), but little positive comes out of this literature. Thurstone (186) is currently attempting to determine utility functions for commodities experimentally, but has reported no results as yet.

Indifference curves. Edgeworth's introduction of the notion of indifference curves to deal with the utilities of nonindependent goods was mentioned above. An indifference curve is, in Edgeworth's formulation, a constant-utility curve. Suppose that we consider apples and bananas, and suppose that you get the same amount of utility from 10-apples-and-1-banana as you do from 6-apples-and-4-bananas. Then these are two points on an indifference curve, and of course there are an infinite number of other points on the same curve. Naturally, this is not the only indifference curve you may have between apples and bananas. It may also be true that you are indifferent between 13-apples-and-5-bananas and 5-apples-and-15-bananas. These are two points on another, higher indifference curve. A whole family of such curves is called an indifference map. Figure 1 presents such a map. One particularly useful kind of indifference map has amounts of a commodity on one axis and amounts of money on the other. Money is a commodity, too.

[Fig. 1. A hypothetical indifference map; abscissa: number of bananas.]
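
To make the constant-utility idea concrete, the sketch below assumes an additive logarithmic utility for the two commodities (an assumption of mine; Edwards specifies no functional form) and enumerates bundles that lie on the same hypothetical indifference curve as 10 apples and 1 banana. The function names are illustrative.

```python
import math

def utility(apples, bananas):
    # An assumed additive utility function, used only to illustrate the idea
    # of a constant-utility (indifference) curve.
    return math.log(apples) + math.log(bananas)

def indifference_partners(reference, banana_counts):
    """For each number of bananas, find the number of apples that keeps
    total utility equal to that of the reference bundle."""
    target = utility(*reference)
    curve = []
    for b in banana_counts:
        apples = math.exp(target - math.log(b))  # solve log a + log b = target
        curve.append((round(apples, 1), b))
    return curve

# All of these bundles lie on the same hypothetical indifference curve as
# 10 apples and 1 banana; a higher reference bundle would give a higher curve.
print(indifference_partners((10, 1), [1, 2, 4, 5, 10]))
```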

The notion of an indifference map can be derived, as Edgeworth derived it, from the notion of measurable utility. But it does not have to be. Pareto (146, see also 151) was seriously concerned about the assumption that utility was measurable up to a linear transformation. He felt that people could tell whether they preferred to be in state A or state B, but could not tell how much they preferred one state over the other. In other words, he hypothesized a utility function measurable only on an ordinal scale. Let us follow the usual economic language, and call utility measured on an ordinal scale ordinal utility, and utility measured on an interval scale, cardinal utility. It is meaningless to speak of the slope, or marginal utility, of an ordinal utility function; such a function cannot be differentiated. However, Pareto saw that the same conclusions which had been drawn from marginal utilities could be drawn from indifference curves. An indifference map can be drawn simply by finding all the combinations of the goods involved among which the person is indifferent. Pareto's formulation assumes that higher indifference curves have greater utility, but does not need to specify how much greater that utility is.

It turns out to be possible to deduce from indifference curves all of the theorems that were originally deduced from cardinal utility measures. This banishing of cardinal utility was furthered considerably by splendid mathematical papers by Johnson (97) and Slutsky (177). (In modern economic theory, it is customary to think of an n-dimensional commodity space, and of indifference hyperplanes in that space, each such hyperplane having, of course, n - 1 dimensions. In order to avoid unsatisfactory preference structures, it is necessary to assume that consumers always have a complete weak ordering for all commodity bundles, or points in commodity space. Georgescu-Roegen [76], Wold [204, 205, 206, 208], Houthakker [90], and Samuelson [167] have discussed this problem.)

Pareto was not entirely consistent in his discussion of ordinal utility. Although he abandoned the assumption that its exact value could be known, he continued to talk about the sign of the marginal utility coefficient, which assumed that some knowledge about the utility function other than purely ordinal knowledge was available. He also committed other inconsistencies. So Hicks and Allen (88), in 1934, were led to their classic paper in which they attempted to purge the theory of choice of its last introspective elements. They adopted the conventional economic view about indifference curves as determined from a sort of imaginary questionnaire, and proceeded to derive all of the usual conclusions about consumer demand with no reference to the notion of even ordinal utility (though of course the notion of an ordinal scale of preferences was still embodied in their derivation of indifference curves). This paper was for economics something like the behaviorist revolution in psychology.

Lange (116), stimulated by Hicks and Allen, pointed out another inconsistency in Pareto. Pareto had assumed that if a person considered four states, A, B, C, and D, he could judge whether the difference between the utilities of A and B was greater than, equal to, or less than the difference between the utilities of C and D. Lange pointed out that if such a comparison was possible for any A, B, C, and D, then utility was cardinally measurable. Since it seems introspectively obvious that such comparisons can be made, this paper provoked a flood of protest and comment (7, 22, 117, 147, 209). Nevertheless, in spite of all the comment, and even in spite of skepticism by a distinguished economist as late as 1953 (153), Lange is surely right. Psychologists should know this at once; such comparisons are the basis of the psychophysical Method of Equal Sense Distances, from which an interval scale is derived. (Samuelson [162] has pointed out a very interesting qualification. Not only must such judgments of difference be possible, but they must also be transitive in order to define an interval scale.) But since such judgments of differences did not seem to be necessary for the development of consumer demand theory, Lange's paper did not force the reinstatement of cardinal utility.

Indeed, the pendulum swung further in the behavioristic direction. Samuelson developed a new analytic foundation for the theory of consumer behavior, the essence of which is that indifference curves and hence the entire structure of the theory of consumer choice can be derived simply from observation of choices among alternative groups of purchases available to a consumer (160, 161). This approach has been extensively developed by Samuelson (164, 165, 167, 169) and others (50, 90, 125, 126). The essence of the idea is that each choice defines a point and a slope in commodity space. Mathematical approximation methods make it possible to combine a whole family of such slopes into an indifference hyperplane. A family of such hyperplanes forms an indifference "map."

In a distinguished but inaccessible series of articles, Wold (204, 205, 206; see also 208 for a summary presentation) has presented the mathematical content of the Pareto, Hicks and Allen, and revealed preference (Samuelson) approaches, as well as Cassel's demand function approach, and has shown that if the assumption about complete weak ordering of bundles of commodities which was discussed above is made, then all these approaches are mathematically equivalent.

Nostalgia for cardinal utility. The crucial reason for abandoning cardinal utility was the argument of the ordinalists that indifference curve analysis in its various forms could do everything that cardinal utility could do, with fewer assumptions. So far as the theory of riskless choice is concerned, this is so. But this is only an argument for parsimony, and parsimony is not always welcome. There was a series of people who, for one reason or another, wanted to reinstate cardinal utility, or at least marginal utility. There were several mathematically invalid attempts to show that marginal utility could be defined even in an ordinal-utility universe (23, 24, 163; 25, 114). Knight (110), in 1944, argued extensively for cardinal utility; he based his arguments in part on introspective considerations and in part on an examination of psychophysical scaling procedures. He stimulated a number of replies (29, 42; 111). Recently Robertson (154) pleaded for the reinstatement of cardinal utility in the interests of welfare economics (this point will be discussed again below). But in general the indifference curve approach, in its various forms, has firmly established itself as the structure of the theory of riskless choice.

Experiments on indifference curves. Attempts to measure marginal utility from market data were discussed above. There have been three experimental attempts to measure indifference curves. Schultz, who pioneered in deriving statistical demand curves, interested his colleague at the University of Chicago, the psychologist Thurstone, in the problem of indifference curves. Thurstone (185) performed a very simple experiment. He gave one subject a series of combinations of hats and overcoats, and required the subject to judge whether he preferred each combination to a standard. For instance, the subject judged whether he preferred eight hats and eight overcoats to fifteen hats and three overcoats. The same procedure was repeated for hats and shoes, and for shoes and overcoats. The data were fitted with indifference curves derived from the assumptions that utility curves fitted Fechner's Law and that the utilities of the various objects were independent. Thurstone says that Fechner's Law fitted the data better than the other possible functions he considered, but presents no evidence for this assertion. The crux of the experiment was the attempt to predict the indifference curves between shoes and overcoats from the other indifference curves. This was done by using the other two indifference curves to infer utility functions for shoes and for overcoats separately, and then using these two utility functions to predict the total utility of various amounts of shoes and overcoats jointly. The prediction worked rather well. The judgments of the one subject used are extraordinarily orderly; there is very little of the inconsistency and variability that others working in this area have found. Thurstone says, "The subject . . . was entirely naive as regards the psychophysical problem involved and had no knowledge whatever of the nature of the curves that we expected to find" (185, p. 154). He adds, "I selected as subject a research assistant in my laboratory who knew nothing about psychophysics. Her work was largely clerical in nature. She had a very even disposition, and I instructed her to take an even motivational attitude on the successive occasions . . . I was surprised at the consistency of the judgments that I obtained, but I am pretty sure that they were the result of careful instruction to assume a uniform motivational attitude." 3 From the economist's point of view, the main criticism of this experiment is that it involved imaginary rather than real transactions (200).

3 Thurstone, L. L. Personal communication, December 7, 1953.

The second experimental measurement of indifference curves is reported by the economists Rousseas and Hart (157). They required large numbers of students to rank sets of three combinations of different amounts of bacon and eggs. By assuming that all students had the same indifference curves, they were able to derive a composite indifference map for bacon and eggs. No mathematical assumptions were necessary, and the indifference map is not given mathematical form. Some judgments were partly or completely inconsistent with the final map, but not too many. The only conclusion which this experiment justifies is that it is possible to derive such a composite indifference map.

The final attempt to measure an indifference curve is a very recent one by the psychologists Coombs and Milholland (49). The indifference curve involved is one between risk and value of an object, and so will be discussed below in the section on the theory of risky decisions. It is mentioned here because the same methods (which show only that the indifference curve is convex to the origin, and so perhaps should not be called measurement) could equally well be applied to the determination of indifference curves in riskless situations.

Mention should be made of the extensive economic work on statistical demand curves. For some reason the most distinguished statistical demand curve derivers feel it necessary to give an account of consumer's choice theory as a preliminary to the derivation of their empirical demand curves. The result is that the two best books in the area (172, 182) are each divided into two parts; the first is a general discussion of the theory of consumer's choice and the second a quite unrelated report of statistical economic work. Stigler (179) has given good reasons why the statistical demand curves are so little related to the demand curves of economic theory, and Wallis and Friedman (200) argue plausibly that this state of affairs is inevitable. At any rate, there seems to be little prospect of using large-scale economic data to fill in the empirical content of the theory of individual decision making.

Psychological comments. There are several commonplace observations that are likely to occur to psychologists as soon as they try to apply the theory of riskless choices to actual experimental work. The first is that human beings are neither perfectly consistent nor perfectly sensitive. This means that indifference curves are likely to be observable as indifference regions, or as probability distributions of choice around a central locus. It would be easy to assume that each indifference curve represents the modal value of a normal sensitivity curve, and that choices should have statistical properties predictable from that hypothesis as the amounts of the commodities (locations in product space) are changed. This implies that the definition of indifference between two collections of commodities should be that each collection is preferred over the other 50 per cent of the time. Such a definition has been proposed by an economist (108), and used in experimental work by psychologists (142). Of course, 50 per cent choice has been a standard psychological definition of indifference since the days of Fechner.

Incidentally, failure on the part of an economist to understand that a just noticeable difference (j.n.d.) is a statistical concept has led him to argue that the indifference relation is intransitive, that is, that if A is indifferent to B and B is indifferent to C, then A need not be indifferent to C (8, 9, 10). He argues that if A and B are less than one j.n.d. apart, then A will be indifferent to B; the same of course is true of B and C; but A and C may be more than one j.n.d. apart, and so one may be preferred to the other. This argument is, of course, wrong. If A has slightly more utility than B, then the individual will choose A in preference to B slightly more than 50 per cent of the time, even though A and B are less than one j.n.d. apart in utility. The 50 per cent point is in theory a precisely defined point, not a region. It may in fact be difficult to determine because of inconsistencies in judgments and because of changes in taste with time.
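
A minimal simulation of this point, under an assumed Thurstonian-style noise model that the passage only implies (the normal noise, utility values, and function name are illustrative assumptions, not Edwards's), shows that a utility advantage much smaller than one j.n.d. still yields a choice proportion slightly above 50 per cent.

```python
import random

def choice_probability(u_a, u_b, noise_sd=1.0, trials=100_000, seed=0):
    """Estimate how often A is chosen over B when each momentary utility is
    perturbed by independent normal noise (an assumed discriminal process)."""
    rng = random.Random(seed)
    wins = sum(
        (u_a + rng.gauss(0.0, noise_sd)) > (u_b + rng.gauss(0.0, noise_sd))
        for _ in range(trials)
    )
    return wins / trials

# A exceeds B by far less than one j.n.d. (roughly one noise unit), yet A is
# still chosen slightly more than 50 per cent of the time.
print(choice_probability(u_a=0.1, u_b=0.0))  # about 0.53
```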

The second psychological observation is that it seems impossible even to dream of getting experimentally an indifference map in n-dimensional space where n is greater than 3. Even the case of n = 3 presents formidable experimental problems. This is less important to the psychologist who wants to use the theory of choice to rationalize experimental data than to the economist who wants to derive a theory of general static equilibrium.

Experiments like Thurstone's (185) involve so many assumptions that it is difficult to know what their empirical meaning might be if these assumptions were not made. Presumably, the best thing to do with such experiments is to consider them as tests of the assumption with the least face validity. Thurstone was willing to assume utility maximization and independence of the commodities involved (incidentally, his choice of commodities seems singularly unfortunate for justifying an assumption of independent utilities), and so used his data to construct a utility function. Of course, if only ordinal utility is assumed, then experimental indifference curves cannot be used this way. In fact, in an ordinal-utility universe neither of the principal assumptions made by Thurstone can be tested by means of experimental indifference curves. So the assumption of cardinal utility, though not necessary, seems to lead to considerably more specific uses for experimental data.

At any rate, from the experimental point of view the most interesting question is: What is the observed shape of indifference curves between independent commodities? This question awaits an experimental answer.

The notion of utility is very similar to the Lewinian notion of valence (120, 121). Lewin conceives of valence as the attractiveness of an object or activity to a person (121). Thus, psychologists might consider the experimental study of utilities to be the experimental study of valences, and therefore an attempt at quantifying parts of the Lewinian theoretical schema.

APPLICATION OF THE THEORY OF RISKLESS CHOICES TO WELFARE ECONOMICS 4

4 The discussion of welfare economics given in this paper is exceedingly sketchy. For a picture of what the complexities of modern welfare economics are really like, see 11, 13, 14, 86, 118, 124, 127, 139, 140, 148, 154, 155, 166, 174.

The classical utility theorists assumed the existence of interpersonally comparable cardinal utility. They were thus able to find a simple answer to the question of how to determine the best economic policy: That economic policy is best which results in the maximum total utility, summed over all members of the economy.

The abandonment of interpersonal comparability makes this answer useless. A sum is meaningless if the units being summed are of varying sizes and there is no way of reducing them to some common size. This point has not been universally recognized, and certain economists (e.g., 82, 154) still defend cardinal (but not interpersonally comparable) utility on grounds of its necessity for welfare economics.

Pareto's principle. The abandonment of interpersonal comparability and then of cardinal utility produced a search for some other principle to justify economic policy. Pareto (146), who first abandoned cardinal utility, provided a partial solution. He suggested that a change should be considered desirable if it left everyone at least as well off as he was before, and made at least one person better off.

Compensation principle. Pareto's principle is fine as far as it goes, but it obviously does not go very far. The economic decisions which can be made on so simple a principle are few and insignificant. So welfare economics languished until Kaldor (98) proposed the compensation principle. This principle is that if it is possible for those who gain from an economic change to compensate the losers for their losses and still have something left over from their gains, then the change is desirable. Of course, if the compensation is actually paid, then this is simply a case of Pareto's principle.

But Kaldor asserted that the compensation need not actually be made; all that was necessary was that it could be made. The fact that it could be made, according to Kaldor, is evidence that the change produces an excess of good over harm, and so is desirable. Scitovsky (173) observed an inconsistency in Kaldor's position: Some cases could arise in which, when a change from A to B has been made because of Kaldor's criterion, then a change back from B to A would also satisfy Kaldor's criterion. It is customary, therefore, to assume that changes which meet the original Kaldor criterion are only desirable if the reverse change does not also meet the Kaldor criterion.

It has gradually become obvious that the Kaldor-Scitovsky criterion does not solve the problem of welfare economics (see e.g., 18, 99). It assumes that the unpaid compensation does as much good to the person who gains it as it would if it were paid to the people who lost by the change. For instance, suppose that an industrialist can earn $10,000 a year more from his plant by using a new machine, but that the introduction of the machine throws two people irretrievably out of work. If the salary of each worker prior to the change was $4,000 a year, then the industrialist could compensate the workers and still make a profit. But if he does not compensate the workers, then the added satisfaction he gets from his extra $10,000 may be much less than the misery he produces in his two workers. This example only illustrates the principle; it does not make much sense in these days of progressive income taxes, unemployment compensation, high employment, and strong unions.

Social welfare functions. From here on the subject of welfare economics gets too complicated and too remote from psychology to merit extensive exploration in this paper. The line that it has taken is the assumption of a social welfare function (21), a function which combines individual utilities in a way which satisfies Pareto's principle but is otherwise undefined. In spite of its lack of definition, it is possible to draw certain conclusions from such a function (see e.g., 164). However, Arrow (14) has recently shown that a social welfare function that meets certain very reasonable requirements about being sensitive in some way to the wishes of all the people affected, etc., cannot in general be found in the absence of interpersonally comparable utilities (see also 89).

Psychological comment. Some economists are willing to accept the fact that they are inexorably committed to making moral judgments when they recommend economic policies (e.g., 152, 153). Others still long for the impersonal amorality of a utility measure (e.g., 154). However desirable interpersonally comparable cardinal utility may be, it seems Utopian to hope that any experimental procedure will ever give information about individual utilities that could be of any practical use in guiding large-scale economic policy.

THE THEORY OF RISKY CHOICES 5

5 Strotz (183) and Alchian (1) present nontechnical and sparkling expositions of the von Neumann and Morgenstern utility measurement proposals. Georgescu-Roegen (78) critically discusses various axiom systems so as to bring some of the assumptions underlying this kind of cardinal utility into clear focus. Allais (3) reviews some of these ideas in the course of criticizing them. Arrow (12, 14) reviews parts of the field.

There is a large psychological literature on one kind of risky decision making, the kind which results when psychologists use partial reinforcement. This literature has been reviewed by Jenkins and Stanley (96). Recently a number of experimenters, including Jarrett (95), Flood (69, 70), Bilodeau (27), and myself (56) have been performing experiments on human subjects who are required to choose repetitively between two or more alternatives, each of which has a probability of reward greater than zero and less than one. The problems raised by these experiments are too complicated and too far removed from conventional utility theory to be dealt with in this paper. This line of experimentation may eventually provide the link which ties together utility theory and reinforcement theory.

Risk and uncertainty. Economists and statisticians distinguish between risk and uncertainty. There does not seem to be any general agreement about which concept should be associated with which word, but the following definitions make the most important distinctions.

Almost everyone would agree that when I toss a coin the probability that I will get a head is .5. A proposition about the future to which a number can be attached, a number that represents the likelihood that the proposition is true, may be called a first-order risk. What the rules are for attaching such numbers is a much debated question, which will be avoided in this paper.

Some propositions may depend on more than one probability distribution. For instance, I may decide that if I get a tail, I will put the coin back in my pocket, whereas if I get a head, I will toss it again. Now, the probability of the proposition "I will get a head on my second toss" is a function of two probability distributions, the distribution corresponding to the first toss and that corresponding to the second toss. This might be called a second-order risk. Similarly, risks of any order may be constructed. It is a mathematical characteristic of all higher-order risks that they may be compounded into first-order risks by means of the usual theorems for compounding probabilities. (Some economists have argued against this procedure [83], essentially on the grounds that you may have more information by the time the second risk comes around. Such problems can best be dealt with by means of von Neumann and Morgenstern's [197] concept of strategy, which is discussed below. They become in general problems of uncertainty, rather than risk.)
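
As a worked instance of such compounding (my illustration, not Edwards's), the coin example reduces to a first-order risk by the multiplication theorem:

```latex
\[
P(\text{head on second toss})
  = P(\text{head on first toss}) \times P(\text{head on second toss} \mid \text{head on first toss})
  = \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}.
\]
```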

Some propositions about the future exist to which no generally accepted probabilities can be attached. What is the probability that the following proposition is true: Immediately after finishing this paper, you will drink a glass of beer? Surely it is neither impossible nor certain, so it ought to have a probability between zero and one, but it is impossible for you or me to find out what that probability might be, or even to set up generally acceptable rules about how to find out. Such propositions are considered cases of uncertainty, rather than of risk. This section deals only with the subject of first-order risks. The subject of uncertainty will arise again in connection with the theory of games.

Expected utility maximization. The traditional mathematical notion for dealing with games of chance (and so with risky decisions) is the notion that choices should be made so as to maximize expected value. The expected value of a bet is found by multiplying the value of each possible outcome by its probability of occurrence and summing these products across all possible outcomes. In symbols:

EV = p1$1 + p2$2 + . . . + pn$n,

where p stands for probability, $ stands for the value of an outcome, and p1 + p2 + . . . + pn = 1.
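
A minimal sketch of this computation follows; the lottery-ticket numbers and the function name are hypothetical, chosen only to show a bet whose expected money value is negative.

```python
def expected_value(outcomes):
    """Expected value of a bet given (probability, dollar value) pairs.

    The probabilities are assumed to sum to 1, as in the formula above.
    """
    return sum(p * value for p, value in outcomes)

# A hypothetical lottery ticket: a 1-in-1000 chance at $500 for a $1 stake.
ticket = [(0.001, 499.0), (0.999, -1.0)]
print(expected_value(ticket))  # -0.50: negative expected money value
```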

The assumption that people actually behave the way this mathematical notion says they should is contradicted by observable behavior in many risky situations. People are willing to buy insurance, even though the person who sells the insurance makes a profit. People are willing to buy lottery tickets, even though the lottery makes a profit. Consideration of the problem of insurance and of the St. Petersburg paradox led Daniel Bernoulli, an eighteenth century mathematician, to propose that they could be resolved by assuming that people act so as to maximize expected utility, rather than expected value (26). (He also assumed that utility followed a function that more than a century later was proposed by Fechner for subjective magnitudes in general and is now called Fechner's Law.) This was the first use of the notion of expected utility.

The literature on risky decision making prior to 1944 consists primarily of the St. Petersburg paradox and other gambling and probability literature in mathematics, some literary discussion in economics (e.g., 109, 187), one economic paper on lotteries (189), and the early literature of the theory of games (31, 32, 33, 34, 195), which did not use the notion of utility. The modern period in the study of risky decision making began with the publication in 1944 of von Neumann and Morgenstern's monumental book Theory of Games and Economic Behavior (196, see also 197), which we will discuss more fully later. Von Neumann and Morgenstern pointed out that the usual assumption that economic man can always say whether he prefers one state to another or is indifferent between them needs only to be slightly modified in order to imply cardinal utility. The modification consists of adding that economic man can also completely order probability combinations of states. Thus, suppose that an economic man is indifferent between the certainty of $7.00 and a 50-50 chance of gaining $10.00 or nothing. We can assume that his indifference between these two prospects means that they have the same utility for him. We may define the utility of $0.00 as zero utiles (the usual name for the unit of utility, just as sone is the name for the unit of auditory loudness), and the utility of $10.00 as 10 utiles. These two arbitrary definitions correspond to defining the two undefined constants which are permissible since cardinal utility is measured only up to a linear transformation. Then we may calculate the utility of $7.00 by using the concept of expected utility as follows:

U($7.00) = .5 U($10.00) + .5 U($0.00) = .5(10) + .5(0) = 5.

Thus we have determined the cardinal utility of $7.00 and found that it is 5 utiles. By varying the probabilities and by using the already found utilities it is possible to discover the utility of any other amount of money, using only the two permissible arbitrary definitions. It is even more convenient if instead of +$10.00, -$10.00 or some other loss is used as one of the arbitrary utilities.
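
A minimal sketch of this probability-equivalence procedure follows. The $7.00 judgment is from the text; the other indifference judgments, and the function name, are hypothetical examples of how further points on the scale would be located.

```python
def utility_from_indifference(p, u_high, u_low):
    """Utility of a sure amount judged indifferent to a gamble giving the
    high outcome with probability p and the low outcome otherwise."""
    return p * u_high + (1 - p) * u_low

# The two permissible arbitrary definitions fix the scale's zero and unit.
utiles = {0.00: 0.0, 10.00: 10.0}

# Indifference judgments: (sure amount, probability of winning $10.00).
# The first is Edwards's example; the other two are hypothetical.
judgments = [(7.00, 0.5), (3.00, 0.25), (9.00, 0.75)]
for sure_amount, p in judgments:
    utiles[sure_amount] = utility_from_indifference(p, utiles[10.00], utiles[0.00])

print(utiles)  # {0.0: 0.0, 10.0: 10.0, 7.0: 5.0, 3.0: 2.5, 9.0: 7.5}
```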

A variety of implications is embodied in this apparently simple notion. In the attempt to examine and exhibit clearly what these implications are, a number of axiom systems, differing from von Neumann and Morgenstern's but leading to the same result, have been developed (73, 74, 85, 135, 136, 171). This paper will not attempt to go into the complex discussions (e.g., 130, 131, 168, 207) of these various alternative axiom systems. One recent discussion of them (78) has concluded, on reasonable grounds, that the original von Neumann and Morgenstern set of axioms is still the best.

It is profitable, however, to examine what the meaning of this notion is from the empirical point of view if it is right. First, it means that risky propositions can be ordered in desirability, just as riskless ones can. Second, it means that the concept of expected utility is behaviorally meaningful. Finally, it means that choices among risky alternatives are made in such a way that they maximize expected utility.

If this model is to be used to predict actual choices, what could go wrong with it? It might be that the probabilities by which the utilities are multiplied should not be the objective probabilities; in other words, a decider's estimate of the subjective importance of a probability may not be the same as the numerical value of that probability. It might be that the method of combination of probabilities and values should not be simple multiplication. It might be that the method of combination of the probability-value products should not be simple addition. It might be that the process of gambling has some positive or negative utility of its own. It might be that the whole approach is wrong, that people just do not behave as if they were trying to maximize expected utility. We shall examine some of these possibilities in greater detail below.

Economic implications of maximizing expected utility. The utility-measurement notions of von Neumann and Morgenstern were enthusiastically welcomed by many economists (e.g., 73, 193), though a few (e.g., 19) were at least temporarily (20) unconvinced. The most interesting economic use of them was proposed by Friedman and Savage (73), who were concerned with the question of why the same person who buys insurance (with a negative expected money value), and therefore is willing to pay in order not to take risks, will also buy lottery tickets (also with a negative expected money value) in which he pays in order to take risks. They suggested that these facts could be reconciled by a doubly inflected utility curve for money, like that in Fig. 2. If I represents the person's current income, then he is clearly willing to accept "fair" insurance (i.e., insurance with zero expected money value) because the serious loss against which he is insuring would have a lower expected utility than the certain loss of the insurance premium. (Negatively accelerated total utility curves, like that from the origin to I, are what you get when marginal utility decreases; thus, decreasing marginal utility is consistent with the avoidance of risks.) The person would also be willing to buy lottery tickets, since the expected utility of the lottery ticket is greater than the certain loss of the cost of the ticket, because of the rapid increase in the height of the utility function. Other considerations make it necessary that the utility curve turn down again. Note that this discussion assumes that gambling has no inherent utility.

[Fig. 2. Hypothetical utility curve for money, proposed by Friedman and Savage; abscissa: dollars.]

Markowitz (132) suggested an important modification in this hypothesis. He suggested that the origin of a person's utility curve for money be taken as his customary financial status, and that on both sides of the origin the curve be assumed first concave and then convex. If the person's customary state of wealth changes, then the shape of his utility curve will thus remain generally the same with respect to where he now is, and so his risk-taking behavior will remain pretty much the same instead of changing with every change of wealth as in the Friedman-Savage formulation.

Criticism of the expected-utility maximization theory. It is fairly easy to construct examples of behavior that violate the von Neumann-Morgenstern axioms (for a particularly ingenious example, see 183). It is especially easy to do so when the amounts of money involved are very large, or when the probabilities or probability differences involved are extremely small. Allais (5) has constructed a questionnaire full of items of this type. For an economist interested in using these axioms as a basis for a completely general theory of risky choice, these examples may be significant. But psychological interest in this model is more modest. The psychologically important question is: Can such a model be used to account for simple experimental examples of risky decisions?

Of course a utility function derived by von Neumann-Morgenstern means is not necessarily the same as a classical utility function (74, 203; see also 82).

Experiment on the von Neumann-Morgenstern model. A number of experiments on risky decision making have been performed. Only the first of them, by Mosteller and Nogee (142), has been in the simple framework of the model described above. All the rest have in some way or another centered on the concept of probabilities effective for behavior which differ in some way from the objective probabilities, as well as on utilities different from the objective values of the objects involved.

Mosteller and Nogee (142) carried out the first experiment to apply the von Neumann-Morgenstern model. They presented Harvard undergraduates and National Guardsmen with bets stated in terms of rolls at poker dice, which each subject could accept or refuse. Each bet gave a "hand" at poker dice. If the subject could beat the hand, he won an amount stated in the bet. If not, he lost a nickel. Subjects played with $1.00, which they were given at the beginning of each experimental session. They were run together in groups of five; but each decided and rolled the poker dice for himself. Subjects were provided with a table in which the mathematically fair bets were shown, so that a subject could immediately tell by referring to the table whether a given bet was fair, or better or worse than fair.

In the data analysis, the first step was the determination of "indifference offers." For each probability used and for each player, the amount of money was found for which that player would accept the bet 50 per cent of the time. Thus equality was defined as 50 per cent choice, as it is likely to be in all psychological experiments of this sort. Then the utility of $0.00 was defined as 0 utiles, and the utility of losing a nickel was defined as -1 utile. With these definitions and the probabilities involved, it was easy to calculate the utility corresponding to the amount of money involved in the indifference offer. It turned out that, in general, the Harvard undergraduates had diminishing marginal utilities, while the National Guardsmen had increasing marginal utilities.
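
The calculation works the same way as the $7.00 example earlier: at the indifference offer the expected utility of the bet equals the utility of not betting. A minimal sketch follows; the probability, the $0.16 prize, and the function name are hypothetical, not data from Mosteller and Nogee.

```python
def utility_of_offer(p_win, u_loss=-1.0):
    """Utility assigned to the money amount at a player's indifference offer.

    At indifference, p * U(win) + (1 - p) * U(lose a nickel) = U($0.00) = 0,
    so solving for U(win) gives the utility of the amount offered.
    """
    return -(1 - p_win) * u_loss / p_win

# Hypothetical indifference datum: with a 1/3 chance of beating the hand,
# suppose a player accepts the bet half the time when the prize is $0.16.
p = 1 / 3
print(utility_of_offer(p))  # 2.0 utiles assigned to the $0.16 prize
```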


The utilities thus calculated were used in predicting the results of more complex bets. It is hard to evaluate the success of these predictions. At any rate, an auxiliary paired-comparisons experiment showed that the hypothesis that subjects maximized expected utility predicted choices better than the hypothesis that subjects maximized expected money value.

The utility curve that Mosteller and Nogee derive is different from the one Friedman and Savage (73) were talking about. Suppose that a subject's utility curve were of the Friedman-Savage type, as in Fig. 2, and that he had enough money to put him at point P. If he now wins or loses a bet, then he is moved to a different location on the indifference curve, say Q. (Note that the amounts of money involved are much smaller than in the original Friedman-Savage use of this curve.) However, the construction of a Mosteller-Nogee utility curve assumes that the individual is always at the same point on his utility curve, namely the origin. This means that the curve is really of the Markowitz (132) type discussed above, instead of the Friedman-Savage type. The curve is not really a curve of utility of money in general, but rather it is a curve of the utility-for-n-more dollars. Even so, it must be assumed further that as the total amount of money possessed by the subject changes during the experiment, the utility-for-n-more dollars curve does not change. Mosteller and Nogee argue, on the basis of detailed examination of some of their data, that the amount of money possessed by the subjects did not seriously influence their choices. The utility curves they reported showed changing marginal utility within the amounts of money used in their experiment. Consequently, their conclusion that the amount of money possessed by the subjects was not seriously important can only be true if their utility curves are utility-for-n-more dollars curves and if the shapes of such curves are not affected by changes in the number of dollars on hand. This discussion exhibits a type of problem which must always arise in utility measurement and which is new in psychological scaling. The effects of previous judgments on present judgments are a familiar story in psychophysics, but they are usually assumed to be contaminating influences that can be minimized or eliminated by proper experimental design. In utility scaling, the fundamental idea of a utility scale is such that the whole structure of a subject's choices should be altered as a result of each previous choice (if the choices are real ones involving money gains or losses). The Markowitz solution to this problem is the most practical one available at present, and that solution is not entirely satisfactory since all it does is to assume that people's utilities for money operate in such a way that the problem does not really exist. This assumption is plausible for money, but it gets rapidly less plausible when other commodities with a less continuous character are considered instead.

    Probability preferences. In a series of recent experiments (55, 57, 58, 59), the writer has shown that subjects, when they bet, prefer some probabilities to others (57), and that these preferences cannot be accounted for by utility considerations (59). All the experiments were basically of the same design. Subjects were required to choose between pairs of bets according to the method of paired comparisons. The bets were of three kinds: positive expected value, negative expected value, and zero expected value. The two members of each pair of bets had the same expected value, so that there was never (in the main experiment [57, 59]) any objective reason to expect that choosing one bet would be more desirable than choosing the other.
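
    One way to build such a pair is to fix the probability and the winning amount and then solve for the losing stake that keeps the expected value constant. The sketch below does this with invented stakes; it illustrates the constraint only and is not Edwards' actual bet construction.

```python
# Pair bets so that they share one expected value despite different
# probabilities of winning.  All dollar amounts here are invented.
def loss_for_fixed_ev(p_win, win, ev):
    """Choose the losing stake so that p_win*win - (1 - p_win)*loss == ev."""
    return (p_win * win - ev) / (1 - p_win)

for p_win in (2/8, 4/8, 6/8):
    loss = loss_for_fixed_ev(p_win, win=2.00, ev=0.25)
    print(f"{p_win:.3f} to win $2.00, {1 - p_win:.3f} to lose ${loss:.2f} (EV $0.25)")
```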

    Subjects made their choices under three conditions: just imagining they were betting; betting for worthless chips; and betting for real money. They paid any losses from their own funds, but they were run in extra sessions after the main experiment to bring their winnings up to $1.00 per hour.

    The results showed that two factors were most important in determining choices: general preferences or dislikes for risk-taking, and specific preferences among probabilities. An example of the first kind of factor is that subjects strongly preferred low probabilities of losing large amounts of money to high probabilities of losing small amounts of money; they just didn't like to lose. It also turned out that on positive expected value bets, they were more willing to accept long shots when playing for real money than when just imagining or playing for worthless chips. An example of the second kind of factor is that they consistently preferred bets involving a 4/8 probability of winning to all others, and consistently avoided bets involving a 6/8 probability of winning. These preferences were reversed for negative expected value bets.

    These results were independent of the amounts of money involved in the bets, so long as the condition of constant expected value was maintained (59). When pairs of bets which differed from one another in expected value were used, the choices were a compromise between maximizing expected amount of money and betting at the preferred probabilities (58).

    An attempt was made to construct individual utility curves adequate to account for the results of several subjects. For this purpose, the utility of $0.30 was defined as 30 utiles, and it was assumed that subjects cannot discriminate utility differences smaller than half a utile. Under these assumptions, no individual utility curves consistent with the data could be drawn. Various minor experiments showed that these results were reliable and not due to various possible artifacts (59). No attempt was made to generate a mathematical model of probability preferences.
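
    The following toy sketch shows the kind of consistency check involved. The choices, the candidate curve, and the exact decision rule are all invented simplifications; the point is only to show what it means for a candidate utility assignment to be compatible, within half a utile, with observed choices between equal-expected-value bets.

```python
# Toy consistency check with invented data.  A candidate utility assignment
# is compatible with a choice only if the chosen bet's expected utility is
# not more than half a utile below the rejected bet's.
def rationalizes(choices, u, threshold=0.5):
    """choices: list of (chosen_bet, rejected_bet); a bet is a list of (p, $)."""
    eu = lambda bet: sum(p * u[x] for p, x in bet)
    return all(eu(chosen) >= eu(rejected) - threshold
               for chosen, rejected in choices)

u = {0.00: 0.0, 0.30: 30.0, 0.60: 55.0, 1.20: 95.0}   # candidate curve, in utiles
observed = [
    ([(0.50, 0.60), (0.50, 0.00)],    # chosen: 4/8 chance to win $0.60
     [(0.25, 1.20), (0.75, 0.00)]),   # rejected: 2/8 chance to win $1.20 (same EV)
]
print(rationalizes(observed, u))      # True for this toy case; Edwards reports
                                      # that no curve worked for his real data
```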

    The existence of probability preferences means that the simple von Neumann-Morgenstern method of utility measurement cannot succeed. Choices between bets will be determined not only by the amounts of money involved, but also by the preferences the subjects have among the probabilities involved. Only an experimental procedure which holds one of these variables constant, or otherwise allows for it, can hope to measure the other. Thus my experiments cannot be regarded as a way of measuring probability preferences; they show only that such preferences exist.
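
    To see why the two variables confound each other, consider a sketch in which a subject values money linearly but weights probabilities by some function other than the objective probability. The weighting function below is invented; nothing in this illustration is taken from the experiments under discussion.

```python
# If a subject's indifference offers satisfy w(p)*x - w(1-p)*1 = 0 (money
# measured in nickels, true utility linear), then x = w(1-p)/w(p).  An
# analyst who assumes the weights are the objective probabilities solves
# p*u_hat(x) - (1-p)*1 = 0 instead and reports u_hat(x) = (1-p)/p.
def w(p):
    return p ** 0.7          # assumed weighting: low probabilities over-weighted

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    offer = w(1 - p) / w(p)      # amount (in nickels) at the indifference offer
    u_hat = (1 - p) / p          # utiles the analyst would assign to that amount
    print(f"p={p:.1f}  offer={offer:5.2f} nickels  inferred utility={u_hat:5.2f}")
# The inferred utiles rise faster than the offers, so a subject with linear
# utility would look as though he had increasing marginal utility.
```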

    It may nevertheless be possible to get an interval scale of the utility of money from gambling experiments by designing an experiment which measures utility and probability preferences simultaneously. Such experiments are likely to be complicated and difficult to run, but they can be designed.

    Subjective probability. First, a clarification of terms is necessary. The phrase subjective probability has been used in two ways: as a name for a school of thought about the logical basis of mathematical probability (51, 52, 80) and as a name for a transformation on the scale of mathematical probabilities which is somehow related to behavior. Only the latter usage is intended here. The clearest distinction between these two notions arises from consideration of what happens when an objective probability can be defined (e.g., in a game of craps). If the subjective probability is assumed to be different from the objective probability, then the concept is being used in its second, or psychological, sense. Other terms with the same meaning have also been used: personal probability, psychological probability, expectation (a poor term because of the danger of confusion with expected value). (For a more elaborate treatment of concepts in this area, see 192.)

    In 1948, prior to the Mosteller and Nogee experiment, Preston and Baratta (149) used essentially similar logic and a somewhat similar experiment to measure subjective probabilities instead of subjective values. They required subjects to bid competitively for the privilege of taking a bet. All bids were in play money, and the data consisted of the winning bids. If each winning bid can be considered to represent a value of play money such that the winning bidder is indifferent between it and the bet he is bidding for, and if it is further assumed that utilities are identical with the money value of the play money and that all players have the same subjective probabilities, then these data can be used to construct a subjective probability scale. Preston and Baratta constructed such a scale. The subjects, according to the scale, overestimate low probabilities and underestimate high ones, with an indifference point (where subjective equals objective probability) at about 0.2. Griffith (81) found somewhat similar results in an analysis of parimutuel betting at race tracks, as did Attneave (17) in a guessing game, and Sprowls (178) in an analysis of various lotteries. The Mosteller and Nogee data (142) can, of course, be analyzed for subjective probabilities instead of subjective values. Mosteller and Nogee performed such an analysis and said that their results were in general agreement with Preston and Baratta's. However, Mosteller and Nogee found no indifference point for their Harvard students, whereas the National Guardsmen had an indifference point at about 0.5. They are not able to reconcile these differences in results.
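
    Under the assumptions just listed, the arithmetic of the inference is simple, as in the sketch below; the bid and prize amounts are invented for illustration.

```python
# Preston-Baratta style inference: if the winning bidder is indifferent
# between the bid and the bet, and play-money utility is linear, then
# bid = psi(p) * prize, where psi(p) is the subjective probability.
def subjective_probability(winning_bid, prize):
    return winning_bid / prize

print(subjective_probability(winning_bid=25, prize=100))   # psi = 0.25
# If the objective probability of winning this prize were 0.05, a psi of 0.25
# would count as overestimating a low probability; if it were 0.75, as
# underestimating a high one.
```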

    The notion of subjective probability has some serious logical difficulties. The scale of objective probability is bounded by 0 and 1. Should a subjective probability scale be similarly bounded, or not? If not, then many different subjective probabilities will correspond to the objective probabilities 0 and 1 (unless some transformation is used so that 0 and 1 objective probabilities correspond to infinite subjective probabilities, which seems unlikely). Considerations of the addition theorem to be discussed in a moment have occasionally led people to think of a subjective probability scale bounded at 0 but not at 1. This is surely arbitrary. The concept of absolute certainty is neither more nor less indeterminate than is the concept of absolute impossibility.

    Even more drastic logical problems arise in connection with the addition theorem. If the objective probability of event A is P, and that of A not occurring is Q, then P+Q=1. Should this rule hold for subjective probabilities? Intuitively it seems necessary that if we know the subjective probability of A, we ought to be able to figure out the subjective probability of not-A, and the only reasonable rule for figuring it out is subtraction of the subjective probability of A from that of complete certainty. But the acceptance of this addition theorem for subjective probabilities plus the idea of bounded subjective probabilities means that the subjective probability scale must be identical with the objective probability scale. Only for a subjective probability scale identical with the objective probability scale will the subjective probabilities of a collection of events, one of which must happen, add up to 1. In the special case where only two events, A and not-A, are considered, a subjective probability scale like S1 or S2 in Fig. 3 would meet the requirements of additivity, and this fact has led to some speculation about such scales, particularly about S1. But such scales do not meet the additivity requirements when more than two events are considered.
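
    The step from additivity to identity can be made explicit with a short sketch. The assumptions spelled out here (a monotone subjective scale s with s(0) = 0 and s(1) = 1, and the requirement that the subjective probabilities of any exhaustive collection of mutually exclusive events sum to 1) are added here to show why the general requirement is so much stronger than the two-event one.

```latex
% n equally likely events exhaust the possibilities:
n\, s\!\left(\tfrac{1}{n}\right) = 1
  \;\Longrightarrow\; s\!\left(\tfrac{1}{n}\right) = \tfrac{1}{n}.
% One event of probability k/n together with n-k events of probability 1/n:
s\!\left(\tfrac{k}{n}\right) + (n-k)\, s\!\left(\tfrac{1}{n}\right) = 1
  \;\Longrightarrow\; s\!\left(\tfrac{k}{n}\right) = \tfrac{k}{n}.
```

    So s must agree with the objective scale at every rational probability, and monotonicity extends the identity to the whole interval. With only two events the weaker condition s(p) + s(1-p) = 1 admits many curves, which is why scales like S1 and S2 pass the two-event test but fail the general one.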

    One way of avoiding these diffi-
