
International Journal of Forecasting 17 (2001) 623–633
www.elsevier.com/locate/ijforecast

The asymmetry of judgemental confidence intervals in time series forecasting

Marcus O'Connor (a,*), William Remus (b), Kenneth Griggs (c)

a School of Information Systems, University of New South Wales, Sydney 2052, Australia
b University of Hawaii, Hawaii, HI, USA
c California Polytechnic, San Luis Obispo, CA, USA

* Corresponding author. Tel.: +61-2-9385-4061; fax: +61-2-9662-4640. E-mail address: [email protected] (M. O'Connor).

Abstract

In forecasting a time series, one may be asked to communicate the likely distribution of the future actual value, often expressed as a confidence interval. Whilst the accuracy (calibration) of these intervals has dominated most studies to date, this paper is concerned with other possible characteristics of the intervals. It reports a study in which the prevalence and determinants of the symmetry of judgemental confidence intervals in time series forecasting were examined. Most prior work has assumed that this interval is symmetrically placed around the forecast. However, this study shows that people generally estimate asymmetric confidence intervals where the forecast is not the midpoint of the estimated interval. Many of these intervals are grossly asymmetric. Results indicate that the placement of the forecast in relation to the last actual value of a time series is a major determinant of the direction and size of the asymmetry. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Judgement; Confidence intervals; Time series; Calibration; Asymmetry

1. Introduction

The confidence that one has in one's judgement or decision has been a popular topic of judgemental research. Early research concluded that "confidence in judgement is not to be trusted" (Sjoberg, 1982), mainly as a result of consistent findings that the calibration or accuracy of confidence statements was rather low. Later research tended to qualify those early conclusions and provided evidence that the task environment and the nature of the task dramatically affected calibration of confidence estimates (see Ayton & McClelland, 1997). In time series forecasting, most research has focused on the calibration of confidence intervals (CIs) produced by statistical methods (Makridakis, Hibon, Lusk & Belhadjali, 1987) or judgemental estimates (O'Connor & Lawrence, 1989).

This paper is motivated by the finding that accuracy (calibration) is not always the sole consideration used by people when making forecasting judgements (Bretschneider, Gorr, Grizzle & Klay, 1989).

In the same way as forecasters deliberately bias their estimates to incorporate exogenous considerations (Lawrence, O'Connor & Edmundson, 2000; Goodwin, 1996), it is possible that people may be concerned with aspects other than accuracy when setting their CIs. Unlike the statistical methods, which most commonly assume that the forecast is equidistant between the upper and lower CI limits (cf. Taylor & Bunn, 1999), judgemental CIs (JCIs) may be deliberately set in such a way as to convey information about the likely behaviour of the time series. For example, suppose you are forecasting the likely sales of a product that may be coming to the end of its product life cycle. To date sales have been increasing, but recent movements have indicated that the upward momentum may be waning. A judgemental forecaster may, in this circumstance, decide to use the forecast to merely extrapolate the historical upward trend, but use the CI estimation process to signal to the user of the forecast that there is a greater likelihood that the slope of the time series trend line will turn down than go up. In this circumstance the judgemental forecaster may set a CI such that the distance between the lower limit and the forecast is greater than the distance between the upper limit and the forecast. The judgemental CI can then be said to be asymmetrically placed around the forecast.

This paper examines, in a laboratory setting, the asymmetry of the confidence estimates and the reasons for its existence that relate to the time series under consideration. It shows that there is a very strong tendency for people to estimate asymmetrical confidence intervals and suggests some of the determinants that relate to the time series under consideration.

2. Background

Early research into the adequacy of judgemental approaches to confidence intervals focussed on their accuracy and calibration. This research found that people are typically overconfident in their judgements (Oskamp, 1965; Lichtenstein, Fischhoff & Phillips, 1982). They were most commonly overconfident when the forecast task was considered to be difficult, but relatively underconfident when the task was easy. More recent research suggested that people are actually well adapted to their natural environments and only appear to be overconfident (Gigerenzer, Hoffrage & Kleinbolting, 1991; Juslin, 1994; Klayman, Soll, Gonzalez-Vallejo & Barlos, 1999). Ayton and McClelland (1997) conclude in their review of this literature that the nature of the task environment and the actual tasks required of the people will dramatically affect the interpretation of the results. Erev, Wallsten and Budescu (1994) also identify sources of bias in the interpretation or understanding of overconfidence. Furthermore, Yaniv and Foster (1997) demonstrate how JCIs may become badly calibrated in a desire to increase the informativeness (if not the accuracy) of the estimates.

For the studies that have used a time series task, the results have been confirmatory of those from other task domains (O'Connor & Lawrence, 1989). Typically, the judgemental confidence intervals (JCIs) are too narrow (indicating overconfidence), but they are too wide for low noise series (i.e. series that are easy to forecast). In addition, the JCIs have been found to be unduly affected by extraneous and irrelevant factors such as the level of seasonality in the time series (O'Connor & Lawrence, 1992) and the scale at which the series is presented (Lawrence & O'Connor, 1993). The influence of extraneous factors has also been observed in the estimation of the forecast. In time series forecasting (working with a graph of the data) they have been shown to be unduly affected by the location of the last actual value in relation to the rest of the series (the last segment slope), i.e. they tend to over-react to new time series information (O'Connor, Remus & Griggs, 1993). Taken together, these studies suggest that idiosyncrasies within the time series may unduly influence the symmetry of the JCIs.

In the light of the evidence presented above that the calibration (accuracy) of the JCIs is questionable, why should any possible asymmetry be considered? As Yaniv and Foster (1997) argue, people may sacrifice accuracy for informativeness in setting their JCIs. In this case, it is possible that people use the JCI to convey information about a possible future change in trend, possibly at some cost to its accuracy. For example, if a person believed that there was a greater probability that a time series would turn down than continue on its current trend or go up, they may create a JCI that is downwardly biased to reflect that greater downside probability. Thus, an estimate of a JCI that exhibits asymmetry may be an illustration of its informativeness.

Direct evidence of any asymmetry of JCIs is sparse. One study that contains some evidence is Spence (1996). In this study experts and novices were required to provide a fair market value for a residential property, together with optimistic and pessimistic estimates. Both experts and novices produced asymmetric JCIs, although no analysis of the reasons for this asymmetry was reported. Interestingly, the JCIs of expert real estate valuers were more asymmetric than those of the novices and they tended to be more negatively biased.

There seem to be two major sources or circumstances giving rise to asymmetric JCIs. First, the contextual environment or non time series information may induce asymmetry. In the context rich environment of practical forecasting and planning, a forecast may not be the same as the 'most likely' value. The placement of the forecast and the JCI may itself be a function of the loss function faced by the company. Strong evidence for this source of bias in forecasts is contained in a comprehensive field study of sales forecasting in Australian business (Lawrence et al., 2000) and in the work of Mathews and Diamantopoulous (1990), Egginton (1999) and Bretschneider et al. (1989). In recognising this source of bias, Goodwin (1996, 2000) distinguishes between a forecast and a decision (the latter being a forecast that has been adjusted for the loss function). He then presents a statistical mechanism for adjusting the decision to arrive at a 'forecast'. Whilst this source of asymmetry has the potential to be a major reason for asymmetry in JCIs, we do not address this issue in this paper.

The second source of asymmetry in JCIs relates to the characteristics of the time series itself. This second source is the focus of this paper. Using standard statistical approaches, the tasks of forecasting and CI estimation are inextricably interwoven. A forecast is derived by applying a model that minimises past forecast errors and the distribution of these model fitting errors is then used to derive the CI. Typically, as with the Makridakis et al. (1987) study, a parametric assumption is made about the nature of the error distribution (e.g., a normal distribution), often with the implication that there is an equal chance of the actual value falling either above or below the forecast. Williams and Goodman (1971) and O'Connor and Lawrence (1989) are among the few who have sought to create CIs based on the empirical distribution of forecast errors. However, unlike the statistical approaches, the tasks of estimating the JCI and the judgemental forecast are not as closely intertwined. Intuitions about the likely behaviour of the time series can be incorporated into either the forecast or the JCI or both. Furthermore, these intuitions are often only loosely linked to the past behaviour of the series. For example, consider a time series that shows an unambiguous simple trend upwards. Past research has suggested a strong tendency for people to dampen such a trend (Lawrence & Makridakis, 1989; O'Connor et al., 1993; O'Connor, Remus & Griggs, 1997; Bolger & Harvey, 1993). The expectation is that the upward trend will not continue indefinitely.

In these previous studies, no opportunity was given to people to both estimate a forecast and a JCI. With the opportunity of an alternate medium to express the risk of the series departing from its upward path, one can envisage that people may choose to express this through the placement of the JCI, rather than in the forecast. Given this scenario, the forecast may not be dampened but the JCI may be downwardly biased, with a greater probability that the actual value will fall below the forecast than above it. In this way, the provision of two vehicles for the expression of judgemental estimates may allow a JCI to be asymmetric.

In summary, this paper examines JCIs in the light of their asymmetry around the forecast. It considers them under stable time series conditions and also when the time series changes. It also examines them under different quantum levels of forecast error (noise levels). Finally, it considers the effect of time series direction on the asymmetry of the CIs, since series direction has been shown to influence the extent to which people dampen their forecasts (Lawrence & Makridakis, 1989) and hence may influence the shape of the CIs.

3. Research hypotheses

As asymmetry is a function of the placement of the forecast within the JCI, our research expectations focus on the individual placement of the forecast and the JCI, as well as the joint placement of the forecast in relation to the JCI.

One of the major determinants of the placement of the forecast in a time series forecasting task will be the slope or trend in the time series. For an untrended (flat) time series, evidence (Lawrence & O'Connor, 1992; Bolger & Harvey, 1993) suggests that the forecast will be primarily determined by the long term average of the series and will therefore be unbiased. In the light of this, one would expect that the JCI would be symmetric around the forecast. In this case, there is no reason to believe that subsequent actual values will be anything other than symmetrically distributed around the forecast.

However, for trended series, there are several competing hypotheses about the interaction of the placement of the forecast and the estimation of the JCI. For the 'Forecast bias' hypothesis (H1), the tendency to dampen forecasts is presumed to lead to symmetric JCIs. In other words, the desire to indicate a tendency for the series to discontinue its current pattern is expressed primarily in a damped forecast, with a symmetric JCI placed around the dampened forecast. Alternatively, for the 'Asymmetric JCI' hypothesis (H2), the opportunity to express the full probability distribution leads to an unbiased forecast (i.e. without dampening) and an asymmetric JCI. Of course, it is also possible that the forecast will be biased (dampened) and the JCI will also be asymmetric (H3).

In the light of the lack of empirical evidence on the asymmetry of JCIs, we do not have a prior expectation as to which hypothesis will be supported.

4. Research design

To investigate the existence of any asymmetry in the JCIs and its relationship to the bias of the forecast, a 3 × 3 factorial laboratory-based experiment was designed. The two independent factors were the slope of the time series (none, up and down) and the noise in the time series (low, medium and high).

The former was designed to test the influence of slope on the forecast and the JCI asymmetry. The latter was designed to examine the influence of the forecast difficulty of the series on the asymmetry, since this has been a consistent finding in the calibration literature (see Ayton & McClelland, 1997).

4.1. The time series

Nine time series were artificially generated — three trend levels × three noise levels. Each time series where a trend was present started out flat and, at a certain point, the trend was introduced. Thus, the trended time series contained discontinuities to reflect their existence in practice (Collopy & Armstrong, 1992). Each time series was divided into three contiguous segments:

Segment 0 (periods 1 to 20): This part was 20 periods of historical data generated with a base of 100 and error added. These data were displayed to the subjects so they could assess the initial characteristics of the time series.

Segment 1 (periods 21 to 28): This part was a continuation of the series as displayed for the first 20 points (segment 0). The subjects now made a one period ahead forecast and a JCI every fourth period from 21 to 28 (i.e., periods 20, 24 and 28). In this way, they became accustomed to forecasting the series and setting the JCI.

Segment 2 (periods 29 to 48): At period 29, the underlying series either grew by 2 units per period or declined at 2 units per period. In addition, we included a control series where the trend did not change. In all cases, error was added to the underlying pattern. As with segment 1, subjects made a one period ahead forecast and estimated a JCI every fourth period (i.e., periods 32, 36, 40, 44, and 48).

It was decided to restrict the forecasts and JCIs to every fourth period for two main reasons. First, with the revelation of the four subsequent actual values after each trial, subjects could ascertain how many of those four values occurred inside or outside their last estimated JCI. Second, since each trial involved three estimation tasks — the forecast, the upper and the lower limits — it was decided to restrict the workload for each subject to prevent any possible deterioration in attention to the task.

The randomness added to the series was taken from a normal distribution of errors. Three variance levels were used to reflect an expected value of Absolute Forecast Errors of 2.5, 5, and 10%, representative of error rates in sales forecasting (Dalrymple, 1987). As mentioned, the three levels of variance (2.5, 5, and 10%) were crossed with the three discontinuity directions (Flat, Down, and Up) to create nine different series.
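To make the generation scheme concrete, the following Python sketch reproduces the design described above under stated assumptions: the function name, the seed handling and, in particular, the mapping from the reported expected absolute forecast error (2.5, 5 or 10% of the base of 100) to the standard deviation of the normal noise (sigma = MAE × sqrt(pi/2)) are illustrative choices, not details given in the paper.

```python
import numpy as np


def generate_series(direction="flat", mae_pct=5.0, base=100.0, seed=0):
    """Generate one artificial series in the spirit of the paper's design.

    Periods 1-28 are flat around `base`; from period 29 the underlying level
    grows or declines by 2 units per period (or stays flat for the control
    series). Normally distributed noise is added throughout.

    Assumption: the noise level is calibrated so that the expected absolute
    error equals mae_pct% of the base, i.e. sigma = MAE * sqrt(pi/2), since
    E|N(0, sigma)| = sigma * sqrt(2/pi). The paper does not spell out this
    calibration.
    """
    rng = np.random.default_rng(seed)
    periods = np.arange(1, 49)                              # periods 1..48
    slope = {"flat": 0.0, "up": 2.0, "down": -2.0}[direction]
    level = base + np.where(periods >= 29, slope * (periods - 28), 0.0)
    sigma = (mae_pct / 100.0) * base * np.sqrt(np.pi / 2.0)
    return level + rng.normal(0.0, sigma, size=periods.size)


# Nine series: three discontinuity directions crossed with three noise levels.
directions = ("flat", "up", "down")
noise_levels = (2.5, 5.0, 10.0)
series = {}
for i, d in enumerate(directions):
    for j, m in enumerate(noise_levels):
        series[(d, m)] = generate_series(d, m, seed=3 * i + j)
```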

4.2. The details of the data gathering procedures

The subjects were trained in the use of the software immediately before beginning the experiment. In this training session, the subjects were told that some of the nine series that they would forecast might at some point begin to grow or decline. They were also told that they would estimate both point forecasts and 95% confidence intervals every fourth period. No context was given for the nine time series to be forecast (e.g., forecasting sales, stock prices, or demand) to avoid having the subjects bring their preconceptions into the forecasting process. The subjects then forecasted a practice series to finish the training session. Having completed the practice series, a sequence of nine time series was presented to each subject in a unique random order. Following the judgemental estimates for the current period, the actual value for the current period and those for the next three periods were displayed. The forecast was estimated first, then the subjects provided the upper 95% limit and then the 95% lower limit.

This sequence continued until the end of the series. Thus, there were eight forecasts and JCIs estimated (periods 20, 24, 28, 32, 36, 40, 44 and 48) — three in segment 1 and five in segment 2.

The 52 subjects for the experiment were undergraduate students at the University of Hawaii. They were recruited from an Operations Research course which covered time series forecasting in the 2 weeks prior to the experiment. The subjects received course credit for participating. The average time taken to complete all the tasks was less than 1 h.

4.3. Analysis methodology

This study is concerned with an analysis of the bias of the forecast and how it relates to any bias or asymmetry in the JCI. The bias in the forecasts was measured by

Bias = actual - forecast.

The symmetry of the JCI was measured by computing the ratio of the distance between the lower confidence interval limit and the forecast to the full confidence interval width (termed the asymmetry ratio, ASR), viz.

ASR = 100 * (Forecast - Lower Limit) / Confidence Interval Width.

Thus, when the confidence interval is symmetric about the forecast the ASR should be around 50. When the lower limit is further away from the forecast than the upper limit, the ASR will be greater than 50 (hereafter termed 'downward biased'), and when the upper limit is further away from the forecast than the lower limit, the ratio will be less than 50 (hereafter termed 'upward biased'). Since a number of forecasts and JCIs were estimated for each time series, repeated measures ANOVA was used to investigate the effects of trend and noise in the factorial design. Given that trend was only manipulated in segment 2, analysis is made only for that segment.
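Both measures translate directly into code. The short sketch below (the helper names are hypothetical, not from the study) computes the bias and the ASR for a single forecast and its JCI, following the definitions above.

```python
def forecast_bias(actual, forecast):
    """Bias = actual - forecast; a positive value means the forecast was too low."""
    return actual - forecast


def asymmetry_ratio(forecast, lower, upper):
    """ASR = 100 * (forecast - lower) / (upper - lower).

    50 means the forecast sits midway between the limits; above 50 the lower
    limit is further from the forecast ('downward biased' interval), below 50
    the upper limit is further away ('upward biased' interval).
    """
    width = upper - lower
    if width <= 0:
        raise ValueError("upper limit must exceed lower limit")
    return 100.0 * (forecast - lower) / width


# Example: a forecast of 110 with a JCI of [95, 118] gives ASR = 100*15/23,
# roughly 65, i.e. a downwardly biased (downside-heavy) interval.
print(asymmetry_ratio(110.0, 95.0, 118.0))
```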

5. Analysis and results

Table 1 presents the mean forecast bias and asymmetry of the JCIs (ASR) for the second segment, where trend was introduced.

Table 1
Forecast bias and asymmetry ratios (ASR) for segment 2

         Bias     ASR
Flat     -2.59    53.1
Up       +2.64    52.7
Down     -4.54    56.3

Bias = actual - forecast; ASR = 100 * (forecast - lower limit) / CI width.

Table 1 shows that, overall, the forecasts were biased and significantly influenced by the trend in the series (F(2,342) = 48.0, P < 0.001). As expected, the forecasts for upward series were biased downward, and the forecasts for downward series were biased upwards. Thus, results from a number of previous studies were confirmed. Interestingly, the forecasts for the flat series were also biased upward. There does not seem to be an obvious explanation for this result, although subjects may have presumed the series represented sales, which are typically thought to rise.

The JCIs, on the other hand, were also biased (asymmetric). Single sample t-tests on individual ASRs revealed significant asymmetry (t(2174) = 9.45, P < 0.0001). Table 1 suggests (and repeated measures ANOVA confirms) that there was a significant effect of trend on ASR (F(2,342) = 4.6, P < 0.01). No effect was found for the level of noise in the series on either bias or ASR, so noise will not be considered further. Thus, our results confirm that there was a bias in the forecasts and that this was influenced by the trend in the series. But the ASR also showed significant asymmetry and was influenced by the trend in the series. Thus, of the competing hypotheses presented earlier, H3 appears most appropriate.

6. Discussion

Results presented in Table 1 reveal that the forecasts were biased in the direction anticipated for the trended series. However, the JCIs were also significantly biased. Furthermore, they were influenced by the trend in the series. However, these analyses do not indicate the extent of the bias and asymmetry. Fig. 1 presents a histogram of the ASR for the second segment, where trend was manipulated.

Fig. 1. Histogram of the distributions of the ASR for segment 2.

A close inspection of the distribution of ASR revealed that a rather large variation in ASR occurred. As Fig. 1 shows for segment 2, about 25% of the ASR observations were less than 40 and about 35% were greater than 60. When examined at the level of individual subjects, we noted that all subjects sometimes estimated asymmetric CIs with an ASR less than 30 and at other times greater than 70. Most people estimated CIs that contained a significant number of ASRs on either side of 50. The mean of the standard deviations of the ASR for each subject was 17.1. If subjects were consistently estimating JCIs of the same bias there would be little variation in ASRs (i.e. reflected in a small standard deviation), regardless of whether they were upwardly or downwardly biased.

The large variation in ASR estimated by each subject suggests there was no a priori mindset fixed on a standard 'shape' for the JCI. This considerable variation suggests that people may have been responding to the data in the series when estimating their JCIs and that this accounted for the variation in the ASR.

The results in Table 1 also suggest that the asymmetry of the JCIs may be a function of the placement of the forecast in relation to the underlying trend of the series. It suggests that there may be an association between the bias in the forecast and the bias or asymmetry in the JCI. In other words, the bias in the placement of the forecast (the first task for the subject) may also have had an impact on the bias in the JCI (the second task for the subject). Close inspection of Table 1 indicates that such an association may exist. For example, for downward series, the forecasts are biased upwards and the JCIs biased downwards; and for upward series, the forecasts are biased downwards and (comparatively) the JCIs are asymmetrically biased upwards. This suggests that the placement of the forecast and the JCI must be considered together. The determinants of the placement of the forecast may, thus, provide some insight into the placement of the JCI around the forecast.

Obviously, a major determinant of the placement of the forecast must be the trend in the series. But several studies have shown that another major determinant is the immediate last movement in the series and the last actual value (Lawrence & O'Connor, 1992; Bolger & Harvey, 1993). Thus, analysis of the asymmetry of the JCI needs to be undertaken not only in relation to the trend, but also in relation to the last segment slope (actual(t) - actual(t-1)) and the placement of the forecast in relation to the last actual (i.e. forecast(t+1) - actual(t)). For flat series, where the last segment slope was up, there was a strong tendency to place the forecast below the last actual; where the last segment slope was down, there was a roughly equal tendency to place the forecast above or below the last actual. For up trending series a similar tendency occurred. For down series, where the last segment slope was down, there was a strong tendency to place the forecast above the last actual; where the last slope was up, there was an equal tendency for up and down forecasts. In the light of these tendencies in forecast placement, what is their effect on the ASR? To determine this, the magnitude of the last segment slope in the time series (i.e. actual(t) - actual(t-1)) and the magnitude of the extent to which the forecast was above or below the last actual value of the series (i.e. forecast(t+1) - actual(t)) were regressed against ASR(t+1). Table 2 provides the standardised regression coefficients.

Table 2
Regression statistics for ASR regressed against forecast slope and segment slope, for different trend categories

Series direction   Forecast slope (a)   Segment slope (b)   R²
Flat               0.59                 -0.21               0.29
Up                 0.53                 -0.12               0.28
Down               0.47                 -0.13               0.20

(a) Forecast slope = forecast(t+1) - actual(t). (b) Segment slope = actual(t) - actual(t-1). Coefficients are standardized regression coefficients.

The table shows that the major factor influencing the size of the ASR was the difference between the forecast and the last actual value, the forecast slope. The greater the extent to which the forecast was placed above (or below) the last actual, the greater the extent to which the ASR was downwardly (or upwardly) biased. As Table 2 shows, this result was independent of the overall direction of the time series.

Thus, the last observation, the forecast and the JCI must be considered together to understand the asymmetry of the JCIs. This hints that the deliberate estimation of asymmetric JCIs may be an attempt by the subjects to convey information, albeit at a possible cost to their accuracy.

So there seem to be two strategies employed by people: to place the forecast in one direction and to bias the JCI in the opposite direction. The question remains: which of the two strategies is more correct? That is, does the actual value move more in the direction of the forecast or towards the (opposite) bias of the JCI? Table 3 classifies the number of times the actual moved above or below the previous actual in relation to the forecast movement.

Table 3
The movement of the actual value in relation to the forecast movement

                 Actual up    Actual down
Forecast up      472          404
Forecast down    381          696

Table 3 suggests that, if the forecast is placed above (below) the last actual, the CI will be biased downwards (upwards) — but the actual tends to follow the direction of the forecast. Thus, not only will the CIs be badly calibrated, they are biased in the opposite direction to what is required.
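A classification along the lines of Table 3 could be computed as in the sketch below; the data layout and names are hypothetical, not the study's actual records.

```python
from collections import Counter


def movement_table(last_actuals, forecasts, next_actuals):
    """Cross-tabulate forecast direction against actual direction, both
    measured relative to the last observed actual value (illustrative sketch).
    """
    counts = Counter()
    for last, fcst, nxt in zip(last_actuals, forecasts, next_actuals):
        fcst_dir = "forecast up" if fcst > last else "forecast down"
        actual_dir = "actual up" if nxt > last else "actual down"
        counts[(fcst_dir, actual_dir)] += 1
    return counts
```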

As closing caveats, we wish to note that the generalizability of the current study is limited by the laboratory setting used. We attempted, however, to reduce the threat to external validity by designing the experimental setting to capture relevant aspects of the forecasting task without introducing artifacts or having technology get in the way of the forecasting process. In addition, we note that Spence (1996) found that the JCIs of expert real estate valuers were grossly asymmetric. The conclusion that the JCI bias is dependent on the placement of the forecast may also depend on the sequence in which the tasks are done. In this study, people were asked to first make their forecast and then their JCI. A reversal of this process may produce different results. Further, as argued at the start of this paper, contextual effects may greatly influence any asymmetry and bias. These contextual effects may warrant further research.

7. Conclusions

This study has shown that the JCIs in time series forecasting are often very asymmetric. Up to this point, most researchers have been concerned with the accuracy (calibration) of the JCIs. Very little attention has been directed at their asymmetric properties. But, as Yaniv and Foster (1997) demonstrate, people seem to be prepared to sacrifice accuracy in an attempt to provide information to the recipients of the estimates. In this paper we have shown that there is a very strong tendency to vary the asymmetry in the light of the forecast that was made prior to the JCI. In many ways, it can be likened to a 'hedging' strategy, where people place the forecast in one direction and bias the JCI in the opposite direction. Of course, these conclusions and inferences are based on ASR statistics. Conclusive evidence of such a deliberate policy could only be gained from some protocol or debriefing analysis with the subjects as they proceeded with the task. Nevertheless, these results strongly suggest that there may be a nexus between the placement of the forecast and the asymmetry of the JCIs. Whilst the calibration of the JCIs is important, there is an additional aspect of the JCI that needs to be recognised: its information content. Perfect calibration isn't everything.

References

Ayton, P., & McClelland, A. (1997). How real is overconfidence? Journal of Behavioral Decision Making 10(3), 279–285.
Bolger, F., & Harvey, N. (1993). Context-sensitive heuristics in statistical reasoning. Quarterly Journal of Experimental Psychology 46A(2), 779–811.
Bretschneider, S., Gorr, W., Grizzle, G., & Klay, E. (1989). Political and organisational influences on forecast accuracy: forecasting state sales tax receipts. International Journal of Forecasting 5, 307–319.
Collopy, F., & Armstrong, J. S. (1992). Expert opinions about extrapolations and the mystery of the overlooked discontinuities. International Journal of Forecasting 8, 575–582.
Dalrymple, D. J. (1987). Sales forecasting practices: results of a United States survey. International Journal of Forecasting 3, 379–391.
Egginton, D. (1999). Testing the efficiency and rationality of City forecasts. International Journal of Forecasting 15, 57–66.
Erev, I., Wallsten, T., & Budescu, D. (1994). Simultaneous overconfidence and underconfidence: the role of error in judgment processes. Psychological Review 101, 519–527.
Gigerenzer, G., Hoffrage, U., & Kleinbolting, H. (1991). Probabilistic mental models: a Brunswikian theory of confidence. Psychological Review 98, 506–528.
Goodwin, P. (1996). Statistical correction of judgemental point forecasts and decisions. Omega, International Journal of Management Science 24(5), 551–559.
Goodwin, P. (2000). Correct or combine? Mechanically integrating judgemental forecasts with statistical methods. International Journal of Forecasting 16, 261–275.
Juslin, P. (1994). The overconfidence phenomenon as a consequence of informal experimenter-guided selection of almanac items. Organizational Behavior and Human Decision Processes 57, 226–246.
Klayman, J., Soll, J., Gonzalez-Vallejo, C., & Barlos, S. (1999). Overconfidence: it all depends on how, what and whom you ask. Organizational Behavior and Human Decision Processes 79, 216–247.
Lawrence, M. J., & Makridakis, S. (1989). Factors affecting judgmental forecasts and confidence intervals. Organizational Behavior and Human Decision Processes 43, 172–187.
Lawrence, M. J., & O'Connor, M. (1992). Exploring judgemental forecasting. International Journal of Forecasting 8, 15–26.
Lawrence, M. J., & O'Connor, M. (1993). Scale, variability and the calibration of judgmental prediction intervals. Organizational Behavior and Human Decision Processes 56, 441–458.
Lawrence, M., O'Connor, M., & Edmundson, R. (2000). A field study of sales forecasting accuracy and processes. European Journal of Operational Research 122, 151–160.
Lichtenstein, S., Fischhoff, B., & Phillips, L. (1982). Calibration of probabilities: state of the art to 1980. In: Kahneman, D., Slovic, P., & Tversky, A. (Eds.), Judgment under uncertainty: heuristics and biases. Cambridge University Press, Cambridge, pp. 306–334.
Makridakis, S., Hibon, M., Lusk, E., & Belhadjali, M. (1987). Confidence intervals. International Journal of Forecasting 3, 489–508.
Mathews, B., & Diamantopoulous, A. (1990). Judgemental revision of sales forecasts: effectiveness of forecast selection. Journal of Forecasting 9, 407–415.
O'Connor, M. (1989). Models of human behaviour and confidence in judgement: a review. International Journal of Forecasting 5, 159–169.
O'Connor, M. J., & Lawrence, M. J. (1989). An examination of the accuracy of judgmental confidence intervals in time series forecasting. International Journal of Forecasting 8, 114–155.
O'Connor, M., & Lawrence, M. J. (1992). Time series characteristics and the widths of judgemental confidence intervals. International Journal of Forecasting 7, 413–420.
O'Connor, M. J., Remus, W. E., & Griggs, K. (1993). Judgmental forecasting in times of change. International Journal of Forecasting 9, 163–172.
O'Connor, M. J., Remus, W. E., & Griggs, K. (1997). Going up–going down: how good are people at forecasting trends and changes in trends? Journal of Forecasting 16, 165–176.
Oskamp, S. (1965). Overconfidence in case study judgments. Journal of Consulting Psychology 29, 261–265.
Sjoberg, L. (1982). Aided and unaided decision making: improving intuitive judgment. Journal of Forecasting 1, 349–363.
Spence, M. T. (1996). Problem–problem solver characteristics affecting the calibration of judgements. Organizational Behavior and Human Decision Processes 67(3), 271–279.
Taylor, J., & Bunn, D. (1999). Investigating improvements in the accuracy of prediction intervals for combinations of forecasts: a simulation study. International Journal of Forecasting 15, 325–339.
Williams, W., & Goodman, M. (1971). A simple method for the construction of empirical confidence limits for economic forecasts. Journal of the American Statistical Association 66, 752–754.
Yaniv, I., & Foster, D. P. (1997). Precision and accuracy of judgemental estimation. Journal of Behavioral Decision Making 10(1), 21–32.

Biographies

Marcus O'CONNOR is a Professor of Information Systems at the University of New South Wales, Sydney, Australia. His research interests focus on the way information is perceived and integrated in the forecasting task and the ability of people to effectively engage in a variety of forecasting tasks.

William REMUS is a Professor of Decision Sciences at the University of Hawaii. His main areas of research interest are human judgement (particularly forecasting), artificial intelligence (particularly neural networks) and issues in research methodology. Overall he has published over 100 scientific papers in journals such as Management Science, Management Information Systems Quarterly, Organizational Behavior and Human Decision Processes and the International Journal of Forecasting.

Kenneth GRIGGS is an Associate Professor of Information Systems at California Polytechnic University in San Luis Obispo. His research interests are in the areas of electronic commerce, systems analysis, and object-oriented languages. He holds a Ph.D. from the University of Arizona, as well as degrees from McGill University, Boston University, and the University of Maryland. He has published in the International Journal of Forecasting, Organizational Behavior and Human Decision Processes, and The Journal of Organizational Computing.

