
Post on 18-Dec-2015


EXPECTED VALUE OF INFORMATION

Omkar Apahle

INFORMATION

[Slide diagram relating A, B, and C omitted.]

EXPECTED VALUE OF INFORMATION (EVI)

• EVI is required to negate the effects of:
- overconfidence
- underestimation of risk
- surprise

• EVI is often required in:
- risk analysis
- sensitivity analysis
- decision problems

DEFINITION

• Expected Value of Information (EVI) is the integral, over all possible posterior distributions, of the opportunity loss prevented by improved information, weighted by the probability of that information.

CLASSIFICATION

• Example: weather conditions and a camping activity

• EVPI (Expected Value of Perfect Information) = the highest price the decision maker is willing to pay to know the “Weather Condition” before making the camping decision.

• EVII (Expected Value of Imperfect Information) = the highest price the decision maker is willing to pay to know the “Weather Forecast” before making the camping decision.
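As a concrete illustration of EVPI, here is a small sketch with hypothetical payoffs for the camping decision (the numbers are invented, not from the slides): the decision maker compares the best expected payoff without information to the expected payoff when the weather is known in advance.

```python
# Toy camping decision (hypothetical numbers):
# payoff[action][state], with prior P(sun) = 0.6.
payoff = {"go":   {"sun": 100, "rain": -50},
          "stay": {"sun": 20,  "rain": 20}}
prior = {"sun": 0.6, "rain": 0.4}

def expected_payoff(action):
    return sum(prior[s] * payoff[action][s] for s in prior)

# Best action without information: maximise prior expected payoff.
ev_no_info = max(expected_payoff(a) for a in payoff)        # go: 40.0

# With perfect information we pick the best action for each state.
ev_perfect = sum(prior[s] * max(payoff[a][s] for a in payoff)
                 for s in prior)                            # 68.0

evpi = ev_perfect - ev_no_info
print(evpi)  # 28.0
```

Here 28.0 is the most this (risk-neutral) decision maker should pay for a perfect weather report.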

CHARACTERISTICS OF EVI

• Expected Value of Information (EVI) can never be less than zero.

• No information-gathering or information-sharing activity can be more valuable than the value of perfect information.

SOURCES

• Sample data

• Expert judgments

EVI & BAYES RULE

• Bayesian analysis relies on both sample information and prior information about uncertain prospects.

• Bayesian analysis provides a formal representation of human learning: an individual updates his or her “subjective beliefs” after receiving new information.

EVI & Bayes rule continued ….

• Investment in stock market

• Expert will provide perfect information

• Perfect information = always correct

• P ( Expert says “Market Up” | Market Really Goes Up ) = 1

• Applying Bayes’ theorem:

P ( Market Up | Exp says “Up” ) =
P ( Exp “Up” | Market Up ) P ( Market Up ) / [ P ( Exp “Up” | Market Up ) P ( Market Up ) + P ( Exp “Up” | Market Down ) P ( Market Down ) ]
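This posterior can be computed mechanically. A minimal sketch, where the expert's hit rate and false-alarm rate are assumed parameters (the slide's perfect expert is the special case hit rate 1, false alarms 0):

```python
def posterior_up(p_up, hit_rate, false_alarm):
    """P(market up | expert says 'up') via Bayes' theorem.
    hit_rate    = P(expert says 'up' | market up)
    false_alarm = P(expert says 'up' | market down)"""
    num = hit_rate * p_up
    return num / (num + false_alarm * (1.0 - p_up))

# Perfect expert (as on the slide): the posterior is certainty.
print(posterior_up(p_up=0.5, hit_rate=1.0, false_alarm=0.0))  # 1.0

# Imperfect expert, right 80% of the time either way.
print(posterior_up(p_up=0.5, hit_rate=0.8, false_alarm=0.2))  # 0.8
```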


EVI & PRIOR DISTRIBUTION

• EVI depends on the prior distribution used to represent current information.

• Subject experts and lay people often produce distributions that are far too tight.

• New or more precise measurements are often found to be outside the reported error bars of old measurements.

• The expected posterior probability of a region, computed before the results are known, is exactly the prior probability of that region.

• The expected value of the posterior mean is equal to the prior mean.

• Prior variance = Expected posterior variance + Variance of the posterior mean
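These identities can be checked by simulation. A sketch under an assumed conjugate normal model (prior N(0, 1), unit-variance measurement noise), chosen only because its posterior is known in closed form (mean y/2, variance 1/2):

```python
import random
random.seed(0)

# Assumed model: theta ~ N(0, 1); measurement y = theta + N(0, 1) noise.
# Posterior given y: mean y/2, variance 1/2.
N = 100_000
post_means = []
for _ in range(N):
    theta = random.gauss(0.0, 1.0)
    y = theta + random.gauss(0.0, 1.0)
    post_means.append(y / 2.0)

mean_of_post_means = sum(post_means) / N
var_of_post_means = sum((m - mean_of_post_means) ** 2 for m in post_means) / N

post_var = 0.5    # the same for every y in this model
prior_var = 1.0

# Prior variance = E[posterior variance] + Var(posterior mean)
print(post_var + var_of_post_means)  # ≈ 1.0, the prior variance
# Expected posterior mean = prior mean
print(mean_of_post_means)            # ≈ 0.0
```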

UNCERTAINTY

• A random variable y is more uncertain than another random variable z if:

- y = z + random noise
- Every risk averter prefers a gamble with payoffs equal to z to one with payoffs equal to y
- The density of y can be obtained from the density of z by shifting weight to the tails through a series of mean-preserving spreads.

Uncertainty continued…..

• Whether or not to include uncertainty in the analysis purely depends on the decision maker

• Expected Value of Including Uncertainty (EVIU)

• Expected Value of Ignoring (Excluding) Uncertainty (EVEU)

NOTATION

• d ∈ D is a decision chosen from space D
• x ∈ X is an uncertain variable in space X
• L (d , x) is the loss function of d and x
• f (x) is the prior subjective probability density on x
• xiu = E (x) is the value assumed for x when uncertainty is ignored
• E [ L (d , x) ] = ∫X L (d , x) f (x) dx is the prior expectation over x of the loss for d

Notation continued …

• Bayes’ decision: dy = argmin_d E [ L (d , x) ]

• Deterministic optimum decision ignoring uncertainty: diu = argmin_d L (d , xiu)

EVIU

• Expected Value of Including Uncertainty (EVIU) is the expected difference in loss between the optimal decision ignoring uncertainty and the Bayes’ decision.

EVIU = E [ L (diu , x) ] - E [ L (dy , x) ]

EVPI

• Expected Value of Perfect Information (EVPI) is the expected difference in loss between the Bayes’ decision and the decision made after the uncertainty is removed by obtaining perfect information on x.

EVPI = E [ L (dy , x) ] - E [ L (dpi (x), x) ]
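Both quantities can be computed directly for a small discrete problem. A sketch with an invented two-point demand distribution and asymmetric linear losses (all numbers are hypothetical):

```python
# Hypothetical example: demand x is 0 or 10 with equal odds; the loss is
# 3 per unit of excess and 1 per unit of shortfall.
prior = {0: 0.5, 10: 0.5}
decisions = range(11)

def loss(d, x):
    return 3 * (d - x) if d > x else (x - d)

def exp_loss(d):
    return sum(p * loss(d, x) for x, p in prior.items())

x_iu = sum(p * x for x, p in prior.items())          # 5.0, the prior mean
d_iu = min(decisions, key=lambda d: loss(d, x_iu))   # best against the point estimate
d_y  = min(decisions, key=exp_loss)                  # Bayes' decision

# EVIU = E[L(d_iu, x)] - E[L(d_y, x)]
eviu = exp_loss(d_iu) - exp_loss(d_y)
# EVPI = E[L(d_y, x)] - E[L(d_pi(x), x)]; perfect information gives zero loss here.
evpi = exp_loss(d_y) - sum(p * min(loss(d, x) for d in decisions)
                           for x, p in prior.items())
print(d_iu, d_y, eviu, evpi)  # 5 0 5.0 5.0
```

The asymmetry makes the Bayes' decision (order 0) differ from the decision based on the mean (order 5), so EVIU is strictly positive.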

SOCRATIC RATIO

• Dimensionless index of relative severity of EVIU and EVPI

Siu = EVIU / EVPI

LOSS FUNCTIONS

• When does ignoring uncertainty not matter?

• Classes of common loss functions:
1. Linear
2. Quadratic
3. Cubic

LINEAR LOSS FUNCTION

• Assume xiu = E (x)

• d ∈ { d1, d2,….dn }

• Loss function:

L (d, x) = a1 + b1x if d = d1
           a2 + b2x if d = d2
           …
           an + bnx if d = dn

Linear loss function continued….

• E [ L (d , x) ] = L ( d , E(x) ) = L ( d , xiu )

• Bayes’ decision: dy = argmin_d E [ L (d , x) ] = argmin_d L ( d , xiu ) = diu

• EVIU = E [ L (diu , x) ] − E [ L (dy , x) ] = 0

• Considering uncertainty makes no difference to decision and hence to the outcome.

QUADRATIC LOSS FUNCTION

• Let the loss function be L (d , x) = k ( d − x )²

• E [ L (d , x) ] = k ( d² − 2d E(x) + E(x²) )

• Differentiating with respect to d and setting to zero:

2d − 2 E(x) = 0, so dy = E(x) = diu

• Uncertainty makes no difference in decision
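A quick numeric check of this derivation, using an arbitrary three-point prior: the minimiser of the expected quadratic loss found by grid search coincides with the prior mean.

```python
# Hypothetical discrete prior; with quadratic loss k(d - x)^2 the Bayes'
# decision is the prior mean, so ignoring uncertainty changes nothing.
prior = {1: 0.2, 4: 0.5, 10: 0.3}
k = 2.0

def exp_loss(d):
    return sum(p * k * (d - x) ** 2 for x, p in prior.items())

mean_x = sum(p * x for x, p in prior.items())      # 5.2
# Search a fine grid of candidate decisions for the minimiser.
grid = [i / 100 for i in range(0, 1201)]
d_y = min(grid, key=exp_loss)
print(mean_x, d_y)  # 5.2 5.2
```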

CUBIC ERROR LOSS FUNCTION

• Decisions involving uncertain future demand

• L (d , x) = r ( d − x )² + s ( d − x )³, with r, s > 0

• Henrion (1989) showed that:

Siu = EVIU / EVPI < 1/3

• Obtaining perfect information about x is, in all cases, worth more than three times as much as including uncertainty.

BILINEAR LOSS FUNCTION

• Newsboy problem
• How many newspapers to order?
• d = number of newspapers ordered
  x = uncertain demand
  a = $ lost per paper if too many are ordered
  b = $ forgone per paper if too few are ordered

Newsboy problem continued ….

• Loss function, where a, b > 0:

L (d , x) = a ( d − x ) if d > x
            b ( x − d ) if d < x

Newsboy problem continued ….

• Uniform prior on x, with mean x̄ and width w ( w > 0 ):

f (x) = 1 / w if x̄ − w/2 < x < x̄ + w/2
        0 otherwise

• diu = x̄

Newsboy problem continued ….

[Figure: uniform prior probability density f (x) of width w over paper demand x, with the decision d marked.]

Newsboy problem continued ….

[Figure: bilinear loss versus error ( d − x ): loss b ( x − d ) when too few papers are ordered, a ( d − x ) when too many, and zero at d = x.]

• Results:

EVPI = w a b / [ 2 ( a + b ) ]

EVIU = w ( a − b )² / [ 8 ( a + b ) ]

Siu = EVIU / EVPI = ( a − b )² / ( 4 a b )
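These results can be verified numerically. A sketch using hypothetical values a = 3, b = 1, x̄ = 100, w = 40, for which the formulas give EVPI = 15 and EVIU = 5:

```python
# Deterministic numeric check of the newsboy results (hypothetical numbers):
# a = 3, b = 1, demand uniform on [80, 120] (mean 100, width w = 40).
a, b, x_bar, w = 3.0, 1.0, 100.0, 40.0
n = 2000
xs = [x_bar - w / 2 + (i + 0.5) * (w / n) for i in range(n)]  # midpoint grid

def loss(d, x):
    return a * (d - x) if d > x else b * (x - d)

def exp_loss(d):
    return sum(loss(d, x) for x in xs) / n

ds = [80.0 + 0.5 * i for i in range(81)]   # candidate decisions 80..120
d_iu = x_bar                               # ignoring uncertainty: order the mean
d_y = min(ds, key=exp_loss)                # Bayes' decision, the b/(a+b) fractile

eviu = exp_loss(d_iu) - exp_loss(d_y)
evpi = exp_loss(d_y)                       # perfect information removes all loss

# Analytic values: EVPI = wab/[2(a+b)] = 15, EVIU = w(a-b)^2/[8(a+b)] = 5
print(round(evpi, 2), round(eviu, 2), d_y)  # 15.0 5.0 90.0
```

The Bayes' decision sits at the b/(a+b) = 1/4 fractile of demand (order 90), not at the mean.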

Newsboy problem continued ….

• The Socratic ratio is independent of the width of the uncertainty, w

• EVIU does not shrink relative to EVPI as the uncertainty grows

• When a and b differ greatly, Siu can exceed 1, and considering uncertainty becomes more important than getting better information

CATASTROPHIC LOSS PROBLEM

• Plane-catching problem
• How long to allow for the trip to the airport?
• d = decision (minutes allowed before departure)
  x = uncertain actual travel time
  k = marginal cost per minute of leaving earlier
  M = loss due to missing the plane

Plane-catching problem continued ….

• Loss function:

L ( d , x ) = k ( d − x ) if d ≥ x ( time wasted by leaving early )
              M if d < x ( plane missed )

• k and M are positive

• M > k ( d − x ) for all d , x

Plane-catching problem continued ….

[Figure: L ( d, x = 35 ): loss (in minutes) as a function of departure time d (minutes before the plane, from −60 to 0). For d > x the loss is the wasted time k ( d − x ); for d < x it jumps to M, the loss due to missing the plane.]

Plane-catching problem continued ….

• x is uncertain and the decision is subjective
• f ( x ) = subjective probability density function
• The Bayes’ decision dy allows a time such that:

f ( dy ) = k / M

Plane-catching problem continued ….

• If we ignore uncertainty: diu = x0.5, where x0.5 is the median travel time

• diu will lead us to miss the plane half the time

• EVIU = E [ L (diu , x) ] - E [ L (dy , x) ]

• EVPI = E [ L (dy , x) ]
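The same quantities can be computed for a toy version of this problem. The travel-time distribution and costs below are invented for illustration; note that with a discrete distribution the median decision need not miss the plane exactly half the time.

```python
# Hypothetical discrete travel times (minutes) and costs:
# k = 1 loss-minute per minute of leaving early, M = 60 for a missed plane.
prior = {25: 0.25, 30: 0.50, 45: 0.25}
k, M = 1.0, 60.0

def loss(d, x):
    return k * (d - x) if d >= x else M   # early: wasted time; late: miss plane

def exp_loss(d):
    return sum(p * loss(d, x) for x, p in prior.items())

d_iu = 30                                 # the median travel time
d_y = min(range(20, 61), key=exp_loss)    # Bayes' decision by search

eviu = exp_loss(d_iu) - exp_loss(d_y)
evpi = exp_loss(d_y)                      # leaving exactly x minutes early costs 0
print(d_y, eviu, evpi)  # 45 3.75 12.5
```

Because M dwarfs the cost of waiting, the Bayes' decision covers even the slowest trip, while the median-based decision risks the catastrophic loss.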

HEURISTIC FACTORS

• Four heuristic factors contribute to understanding EVI:

1.Uncertainty (about parameter value)

2. Informativeness (the extent to which the current uncertainty may be reduced)

Heuristic factors continued …

3. Promise (the probability that improved information will result in a different decision, and the magnitude of the resulting gain)

4. Relevance (the extent to which uncertainty about the parameter contributes to overall uncertainty)

Heuristic factors continued …

• Example: whether to permit or prohibit use of a food additive

• Expected social cost = number of life years lost
• θ = risk from consuming the additive = excess cancer risk
• K = expected social cost of using a substitute in case the additive is prohibited

Heuristic factors continued …

• f0 = probability distribution representing current information about θ
• f+ = probability distribution representing a finding that θ is hazardous
• f− = probability distribution representing a finding that θ is safe
• L0 = expected social cost if the additive is permitted
• L1 = expected social cost of the permit / prohibit decision made after research

Heuristic factors continued …

• EVI = L0 – L1

• Condition: the additive is permitted if and only if L0 < K

• Substantial chance that additive is riskier than alternative and should be prohibited

[Figure: expected social loss as a function of θ, showing the prior f0, possible post-research distributions f− and f+, the substitute cost K, and the expected losses L0 and L1; the additive is permitted below the threshold and prohibited above it.]

Effect of greater prior uncertainty

[Figure: the same plot with greater prior uncertainty about θ.]

Effect of greater informativeness

[Figure: the same plot with greater informativeness of the research.]

Effect of greater promise

[Figure: the same plot with greater promise.]

Effect of relevance

[Figure: permit / prohibit decision regions in the ( θ1, θ2 ) plane.]

RISK PREMIUM

• How do we measure the monetary value of risk?
• Let:

a = uncertain monetary reward
w = initial wealth
w + a = terminal wealth
U (w + a) = utility of terminal wealth

Selling price of risk

• Rs = selling price of risk

= sure amount of money a decision maker would be willing to receive to sell the risk a

• { w + Rs } ~* { w + a }, where ~* denotes indifference

• Under expected utility function,U {w + Rs} = EU {w + a}

Bid price of risk

• Rb = bid price of risk

= sure amount of money a decision maker would be willing to pay to buy the risk a

• { w } ~* { w + a - Rb }

• Under expected utility function,U {w} = EU {w + a - Rb }

Risk premium

• R = risk premium = the reduction from E(a) that makes one indifferent between receiving the risky return a and receiving the sure amount [ E(a) − R ]

• { w + a } ~* { w + E(a) − R }

• EU {w + a } = U { w + E(a) - R }

Risk premium continued …

• v = w + E(a) − R is the certainty equivalent

• R = E [ w + a ] − v is the expected loss one accepts in order to remove the risk

• R measures the willingness to insure
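The certainty equivalent and risk premium can be computed once a utility function is assumed. A sketch using log utility and an invented 50/50 gamble (both are illustrative assumptions; the slides leave U unspecified):

```python
import math

# Assumed setup: gamble a pays +20 or -10 with equal probability,
# initial wealth w = 100, and log utility (risk averse).
w = 100.0
outcomes = {20.0: 0.5, -10.0: 0.5}

U = math.log
eu = sum(p * U(w + a) for a, p in outcomes.items())   # EU{w + a}
e_a = sum(p * a for a, p in outcomes.items())         # E(a) = 5.0

# Certainty equivalent v solves U(v) = EU{w + a}; for log utility, invert with exp.
v = math.exp(eu)
R = w + e_a - v                                       # risk premium
print(round(v, 2), round(R, 2))  # 103.92 1.08
```

So this decision maker treats the gamble as worth about 1.08 less than its expected value, which is what he or she would pay to insure it away.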

COMBINING PRIORS

• A single expert reports an idiosyncratic perception of a consensus

• It is more useful to combine the judgments of several experts

• Aggregation procedures:
– Weighted average
– Bayes’ rule
– Copula method
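The simplest of these, the weighted average (linear opinion pool), can be sketched directly; the expert densities and weights below are hypothetical:

```python
# Linear opinion pool: combined density is sum_i w_i * f_i(x), sum w_i = 1.
def pool(densities, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    def f(x):
        return sum(w * d(x) for d, w in zip(densities, weights))
    return f

# Two experts reporting toy uniform densities (invented for illustration).
f1 = lambda x: 0.5 if 1.0 <= x < 3.0 else 0.0    # expert 1: uniform on [1, 3)
f2 = lambda x: 0.25 if 2.0 <= x < 6.0 else 0.0   # expert 2: uniform on [2, 6)

combined = pool([f1, f2], [0.5, 0.5])
print(combined(2.5))  # 0.5*0.5 + 0.5*0.25 = 0.375
```

Equal weights correspond to treating all experts as equally credible, as in the climate-sensitivity example that follows.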

Example

• Climate sensitivity ( Morgan and Keith, 1995 )

• ∆ T 2x = equilibrium increase in global annual mean surface temperature as a result of doubling of atmospheric CO2 from its pre-industrial concentration

Example continued …

• Estimates were gathered from different experts.
• All experts are treated equally.
• Range given by IPCC: 1.5 to 4.5 °C
• Most likely value: 2.5 °C
• Tails extended to account for underestimation
• Sensitivity analysis: include / exclude expert 5

Example continued …

[Figure: ∆T2x estimates (roughly −5 to 20 °C) from experts 1, 2, 3, 4a, 4b, 5, 6, …, 16.]

Example continued …

[Figure: combined PDFs of ∆T2x for all experts, for all experts excluding expert 5, and for all experts with exponential tails.]

BEST ESTIMATE

• xiu = mean of the SPD ( subjective probability distribution )

• What should we choose: mean, median or mode?

• If Mean ≈ Median: make the decision based on that best estimate; ignoring uncertainty costs little

• If Mean >> Median: the distribution is skewed, so make the decision considering the possibility of extreme scenarios
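The gap between mean and median is easy to see by simulation; the lognormal below is only an illustrative choice of skewed SPD:

```python
import math, random
random.seed(1)

# Illustrative right-skewed SPD (lognormal, an assumed example):
# heavy upper tails pull the mean well above the median.
samples = sorted(math.exp(random.gauss(1.0, 1.0)) for _ in range(100_001))
mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
print(mean > median)  # True: extreme scenarios drive the mean up
```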

IN CONCLUSION

• “As for me, all I know is I know nothing.” – Socrates

• Expected Value of Information depends upon the expected benefits of Socratic wisdom (i.e. admitting one’s limits of knowledge) relative to the expected benefits of perfect wisdom (i.e. knowing the truth).

THANK YOU!