Random Variables


Random Variables

Numerical Outcomes

• Consider associating a numerical value with each sample point in a sample space.

(1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
(2,1) (2,2) (2,3) (2,4) (2,5) (2,6)
(3,1) (3,2) (3,3) (3,4) (3,5) (3,6)
(4,1) (4,2) (4,3) (4,4) (4,5) (4,6)
(5,1) (5,2) (5,3) (5,4) (5,5) (5,6)
(6,1) (6,2) (6,3) (6,4) (6,5) (6,6)

Each outcome above can be labeled with the sum of the two dice, e.g. 9, 10, 11, or 12.

• The function relating each outcome of a roll of two dice to its sum is a random variable.

• We refer to values of the random variable as events. For example, {Y = 9}, {Y = 10}, etc.

Probability Y = y

• The probability of an event, such as {Y = 9}, is denoted P(Y = 9).

• In general, for a real number y, the probability of {Y = y} is denoted P(Y = y), or simply p(y).

• P(Y = 10) or p(10) is the sum of probabilities for sample points which are assigned the value 10.

• When rolling two dice, P(Y = 10) = P({(4, 6)}) + P({(5, 5)}) + P({(6, 4)}) = 1/36 + 1/36 + 1/36 = 3/36
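As a quick check, here is a minimal Python sketch (standard library only) that enumerates the 36 equally likely outcomes and recovers P(Y = 10):

from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two dice
outcomes = list(product(range(1, 7), repeat=2))

# P(Y = 10) is the total probability of the sample points whose sum is 10
p_10 = Fraction(sum(1 for a, b in outcomes if a + b == 10), len(outcomes))
print(p_10)  # 1/12, i.e. 3/36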

Discrete Random Variable

• A discrete random variable is a random variable that assumes only a finite (or countably infinite) number of distinct values.

• For an experiment whose sample points are associated with the integers or a subset of integers, the random variable is discrete.

Probability Distribution

• A probability distribution describes the probability for each value of the random variable.

• Presented as a table, formula, or graph.

y     p(y)
2     1/36
3     2/36
4     3/36
5     4/36
6     5/36
7     6/36
8     5/36
9     4/36
10    3/36
11    2/36
12    1/36

[Bar chart of the probability distribution: p(y) plotted against y = 2, 3, ..., 12, with probabilities ranging from 1/36 up to 6/36.]

Probability Distribution

• For a probability distribution:

y     p(y)
2     1/36
3     2/36
4     3/36
5     4/36
6     5/36
7     6/36
8     5/36
9     4/36
10    3/36
11    2/36
12    1/36
      (sum = 1.0)

Σ_y p(y) = 1

Here we may take the sum just over those values of y for which p(y) is non-zero.

And, of course,

0 ≤ p(y) ≤ 1, for all y.
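A small Python sketch (standard library only) that builds the same p(y) table from the 36 outcomes and verifies both properties:

from fractions import Fraction
from itertools import product
from collections import Counter

# Probability function p(y) for the sum of two fair dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
p = {y: Fraction(n, 36) for y, n in sorted(counts.items())}

for prob in p.values():
    assert 0 <= prob <= 1      # 0 <= p(y) <= 1 for every y
print(sum(p.values()))         # 1, so the probabilities sum to one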

Expected Value

• The “long run theoretical average”

• For a discrete R.V. with probability function p(y), define the expected value of Y as:

E(Y) = Σ_y y · p(y)

• In a statistical context, E(Y) is referred to as the mean, and so E(Y) and μ are interchangeable.
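For the two-dice example, the following sketch computes E(Y) = Σ_y y · p(y) directly:

from fractions import Fraction
from itertools import product
from collections import Counter

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
p = {y: Fraction(n, 36) for y, n in counts.items()}

# E(Y) = sum over y of y * p(y)
mean = sum(y * prob for y, prob in p.items())
print(mean)  # 7, the long-run average sum of two dice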

For a constant multiple…

• Of course, a constant multiple may be factored out of the sum

E(cY) = Σ_y c · y · p(y) = c Σ_y y · p(y) = c · E(Y)

• Thus, for our circles, E(C) = E(2πR) = 2π E(R).

For a constant function…

• In particular, if g(y) = c for all y in Y, then E[g(Y)] = E(c) = c.

E(c) = Σ_y c · p(y) = c Σ_y p(y) = c(1) = c

Function of a Random Variable

• Suppose g(Y) is a real-valued function of a discrete random variable Y. It follows g(Y) is also a random variable with expected value

E[g(Y)] = Σ_y g(y) · p(y)

• In particular, for g(Y) = Y2, we have

E[Y²] = Σ_y y² · p(y)

Try this!

• For the following distribution:

y     -2     0     1     4     5     7
p(y)  0.10   0.15  0.20  0.25  0.25  0.05

• Compute the values E(Y), E(3Y), E(Y²), and E(Y³).
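One way to work this exercise is with a small helper that evaluates E[g(Y)] = Σ_y g(y) · p(y); the function name expect below is just a convenience for illustration:

# The distribution from the slide
ys = [-2, 0, 1, 4, 5, 7]
ps = [0.10, 0.15, 0.20, 0.25, 0.25, 0.05]

def expect(g):
    # E[g(Y)] = sum over y of g(y) * p(y)
    return sum(g(y) * p for y, p in zip(ys, ps))

print(expect(lambda y: y))       # E(Y)
print(expect(lambda y: 3 * y))   # E(3Y) = 3 E(Y)
print(expect(lambda y: y ** 2))  # E(Y^2)
print(expect(lambda y: y ** 3))  # E(Y^3)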

For sums of variables…

• Also, if g1(Y) and g2(Y) are both functions of the random variable Y, then

E[g1(Y) + g2(Y)] = Σ_y (g1(y) + g2(y)) · p(y)
                 = Σ_y [g1(y) · p(y) + g2(y) · p(y)]
                 = Σ_y g1(y) · p(y) + Σ_y g2(y) · p(y)
                 = E[g1(Y)] + E[g2(Y)]

All together now…

• So, when working with expected values, we have

E[g1(Y) + g2(Y)] = E[g1(Y)] + E[g2(Y)]

E(cY) = c · E(Y), and E(c) = c.

• Thus, for a linear combination Z = c g(Y) + b, where c and b are constants:

E(Z) = E[c g(Y) + b]
     = E[c g(Y)] + E(b)
     = c E[g(Y)] + b
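A quick numerical check of E[c g(Y) + b] = c E[g(Y)] + b, using the distribution from the earlier exercise with g(Y) = Y² and arbitrarily chosen constants c = 2 and b = 5:

ys = [-2, 0, 1, 4, 5, 7]
ps = [0.10, 0.15, 0.20, 0.25, 0.25, 0.05]

def expect(g):
    return sum(g(y) * p for y, p in zip(ys, ps))

c, b = 2.0, 5.0
lhs = expect(lambda y: c * y ** 2 + b)   # E[c g(Y) + b] computed directly
rhs = c * expect(lambda y: y ** 2) + b   # c E[g(Y)] + b via linearity
print(abs(lhs - rhs) < 1e-12)            # True: the two agree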

Try this!

• For the following distribution:

y     -2     0     1     4     5     7
p(y)  0.10   0.15  0.20  0.25  0.25  0.05

• Compute the values E(Y² + 2), E(2Y + 5), and E(Y² − Y).

Variance, V(Y)

• For a discrete R.V. with probability function p(y), define the variance of Y as:

V(Y) = E[(Y − μ)²]

• Here, we use V(Y) and σ² interchangeably to denote the variance. The positive square root of the variance is the standard deviation of Y.

• It can be shown that

V(cY + b) = c² V(Y)

• Note the variance of a constant is zero.

Computing V(Y)

• And applying our rules for expected value, we find the variance may be expressed as

V(Y) = E[(Y − μ)²] = E[Y² − 2μY + μ²]
     = E[Y²] − 2μ E[Y] + E[μ²]
     = E[Y²] − 2μ(μ) + μ²          (as the mean μ is a constant)
     = E[Y²] − μ², or E[Y²] − [E(Y)]²

When computing the variance, it is often easier to use the formula

V(Y) = E[Y²] − [E(Y)]²
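The sketch below checks, for the two-dice sum, that the defining formula E[(Y − μ)²] and the shortcut E[Y²] − μ² give the same variance:

from fractions import Fraction
from itertools import product
from collections import Counter

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
p = {y: Fraction(n, 36) for y, n in counts.items()}

mean = sum(y * prob for y, prob in p.items())                        # E(Y) = mu
var_def = sum((y - mean) ** 2 * prob for y, prob in p.items())       # E[(Y - mu)^2]
var_short = sum(y ** 2 * prob for y, prob in p.items()) - mean ** 2  # E[Y^2] - mu^2
print(var_def, var_short)  # both 35/6: the two formulas agree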

Try this!

• For the following distribution:

y     -2     0     1     4     5     7
p(y)  0.10   0.15  0.20  0.25  0.25  0.05

• Compute the values V(Y), V(2Y), and V(2Y + 5).

• How would you compute V(Y²)?

“Moments and Mass”

• Note the probability function p(y) for a discrete random variable is also called a “probability mass” or “probability density” function.

• The expected values E(Y) and E(Y2) are called the first and second moments, respectively.

Continuous Random Variables

Continuous Random Variables

• For discrete random variables, we required that Y was limited to a finite (or countably infinite) set of values.

• Now, for continuous random variables, we allow Y to take on any value in some interval of real numbers.

• As a result, P(Y = y) = 0 for any given value y.

CDF

• For continuous random variables, define the cumulative distribution function F(y) such that

F(y) = P(Y ≤ y), for −∞ < y < ∞

Thus, we have

F(y) → 0 as y → −∞, and F(y) → 1 as y → +∞.

PDF

• For the continuous random variable Y, define the probability density function as

f(y) = F′(y) = d[F(y)] / dy

for each y for which the derivative exists.

Integrating a PDF

• Based on the probability density function, we may write

F(y) = ∫_{−∞}^{y} f(t) dt

Remember the 2nd Fundamental Theorem of Calc.?

Properties of a PDF

• For a density function f(y):

• 1). f(y) ≥ 0 for any value of y.

• 2). ∫_{−∞}^{∞} f(t) dt = P(−∞ < Y < ∞) = 1

[Graphs: the density function f(y) and the corresponding distribution function F(y).]

Try this!

• For what value of k is the following function a density function?

f(y) = k·y·(1 − y),  for 0 ≤ y ≤ 1
       0,            otherwise

• We must satisfy the property

∫_{−∞}^{∞} f(t) dt = P(−∞ < Y < ∞) = 1
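One way to carry out the required integration is symbolically; the sketch below assumes SymPy is available:

import sympy as sp

y, k = sp.symbols('y k', positive=True)
# Require the density k*y*(1 - y) on [0, 1] to integrate to 1
total = sp.integrate(k * y * (1 - y), (y, 0, 1))  # equals k/6
print(sp.solve(sp.Eq(total, 1), k))               # [6], so k = 6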

Try this!

• For what value of k is the following function a density function?

f(y) = k·e^{−0.2y},  for 0 ≤ y
       0,            otherwise

• Again, we must satisfy the property

∫_{−∞}^{∞} f(t) dt = P(−∞ < Y < ∞) = 1
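The same approach works here; again a SymPy-based sketch, assuming the exponent is −0.2y as written above:

import sympy as sp

y, k = sp.symbols('y k', positive=True)
# Require the density k*exp(-0.2*y) on [0, infinity) to integrate to 1
total = sp.integrate(k * sp.exp(-sp.Rational(1, 5) * y), (y, 0, sp.oo))  # equals 5k
print(sp.solve(sp.Eq(total, 1), k))  # [1/5], so k = 0.2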

P(a < Y < b)

• To compute the probability of the event a < Y < b (or, equivalently, a ≤ Y ≤ b), we just integrate the PDF:

P(a ≤ Y ≤ b) = F(b) − F(a) = ∫_{a}^{b} f(t) dt

For example,

P(3 ≤ Y ≤ 5) = F(5) − F(3) = ∫_{3}^{5} f(t) dt
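As a numerical illustration (assuming SciPy is available), take the density f(y) = 6y(1 − y) on [0, 1] from the earlier exercise and two arbitrarily chosen endpoints a = 0.25 and b = 0.75:

from scipy.integrate import quad

f = lambda t: 6 * t * (1 - t)   # density on [0, 1]

a, b = 0.25, 0.75
prob, _ = quad(f, a, b)   # P(a <= Y <= b) = integral of f from a to b
print(prob)               # 0.6875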

Try this!

• For the previous density function

f(y) = k·y·(1 − y),  for 0 ≤ y ≤ 1
       0,            otherwise

• Find the probability P(0.4 ≤ Y ≤ 1)

• Find the probability P(Y ≤ 0.4 | Y ≤ 0.8)

Try this!

• Suppose Y is time to failure and

F(y) = 1 − e^{−y²},  for 0 ≤ y
       0,            otherwise

• Find the probability P(Y ≤ 2)

• Find the probability P(Y ≤ 1 | Y ≤ 2)

• Determine the density function f (y)

Expected Value, E(Y)

• For a continuous random variable Y, define the expected value of Y as

E(Y) = ∫_{−∞}^{∞} y · f(y) dy, if it exists.

• Note this parallels our earlier definition for the discrete random variable:

E(Y) = Σ_y y · p(y)
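For a concrete case, the same f(y) = 6y(1 − y) density used above gives the following (sketch assumes SciPy):

from scipy.integrate import quad

f = lambda y: 6 * y * (1 - y)   # density on [0, 1]

mean, _ = quad(lambda y: y * f(y), 0, 1)  # E(Y) = integral of y*f(y) dy
print(mean)  # 0.5, as expected since f is symmetric about y = 0.5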

Expected Value, E[g(Y)]

• For a continuous random variable Y, define the expected value of a function of Y as

E[g(Y)] = ∫_{−∞}^{∞} g(y) · f(y) dy, if it exists.

• Again, this parallels our earlier definition for the discrete case:

E[g(Y)] = Σ_y g(y) · p(y)

Properties of Expected Value

• In the continuous case, all of our earlier properties for working with expected value are still valid.

E(c) = ∫_{−∞}^{∞} c · f(y) dy = c

E(aY + b) = a · E(Y) + b

E[g1(Y) + g2(Y)] = E[g1(Y)] + E[g2(Y)]

Properties of Variance

• In the continuous case, our earlier properties for variance also remain valid.

V(Y) = E[(Y − μ)²] = E(Y²) − [E(Y)]²

and

V(aY + b) = a² · V(Y)

Problem from MAT 332

• Find the mean and variance of Y, given

f(y) = 0.2,         for −1 < y ≤ 0
       0.2 + 1.2y,  for 0 < y ≤ 1
       0,           otherwise
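A numerical sketch of this problem (assuming SciPy, and reading the density as 0.2 on (−1, 0] and 0.2 + 1.2y on (0, 1]):

from scipy.integrate import quad

def f(y):
    # Piecewise density as reconstructed above
    if -1 < y <= 0:
        return 0.2
    if 0 < y <= 1:
        return 0.2 + 1.2 * y
    return 0.0

mean, _ = quad(lambda y: y * f(y), -1, 1, points=[0.0])         # E(Y)
second, _ = quad(lambda y: y ** 2 * f(y), -1, 1, points=[0.0])  # E(Y^2)
print(mean, second - mean ** 2)  # mean 0.4, variance = E(Y^2) - mean^2, about 0.273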