Lecture 8: Probability: Examples and terminology

Reading Assignment for Lectures 7--9: PKT Chapter 5 (Problem Set 1 due today)

Short course on Gaussian integrals:

Useful Integrals:

$$I_0(a) = \int_0^\infty dx\, e^{-a x^2} = \frac{1}{2}\sqrt{\frac{\pi}{a}}, \qquad I_1(a) = \int_0^\infty dx\, x\, e^{-a x^2} = \frac{1}{2a}.$$

Note: From these two formulas, you can derive $I_n(a) = \int_0^\infty dx\, x^n e^{-a x^2}$ for any positive integer $n$, since

$$\frac{d}{da} I_n(a) = -\int_0^\infty dx\, x^{n+2} e^{-a x^2} = -I_{n+2}(a).$$

E.g.,

$$I_2(a) = -\frac{d}{da} I_0(a) = -\frac{\sqrt{\pi}}{2}\frac{d}{da} a^{-1/2} = \frac{\sqrt{\pi}}{4}\, a^{-3/2}.$$
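These closed forms are easy to spot-check numerically. A minimal sketch, assuming NumPy and SciPy are available (the test value of $a$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def gauss_moment(n, a):
    """Numerically evaluate I_n(a) = int_0^inf x^n exp(-a x^2) dx."""
    val, _ = quad(lambda x: x**n * np.exp(-a * x**2), 0, np.inf)
    return val

a = 1.7  # arbitrary positive test value
print(gauss_moment(0, a), 0.5 * np.sqrt(np.pi / a))        # I_0(a) = (1/2) sqrt(pi/a)
print(gauss_moment(1, a), 1.0 / (2.0 * a))                  # I_1(a) = 1/(2a)
print(gauss_moment(2, a), 0.25 * np.sqrt(np.pi) * a**-1.5)  # I_2(a) = (sqrt(pi)/4) a^(-3/2)
```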

Where do these come from? $I_1(a)$ is easy; $I_0(a)$ requires a trick:

$$I_1(a) = \frac{1}{2a}\int_0^\infty (2ax\,dx)\, e^{-a x^2} = \frac{1}{2a}\int_0^\infty dy\, e^{-y} = \frac{1}{2a}, \quad \text{QED},$$

where we used the substitution $y = a x^2$, $dy = 2ax\,dx$.

Now for $I_0(a)$: Consider the 2D integral $J \equiv \int_{\text{plane}} dA\, e^{-a r^2}$. Can be done in polar coordinates as

$$J = \int_0^\infty (2\pi r\, dr)\, e^{-a r^2} = 2\pi\, I_1(a) = \frac{\pi}{a}$$

or, alternatively, in Cartesian coordinates as

$$J = \int_{-\infty}^\infty dx\, dy\, e^{-a\left(x^2 + y^2\right)} = \left[\int_{-\infty}^\infty dx\, e^{-a x^2}\right]^2 = \left[2\, I_0(a)\right]^2.$$

Equating these two results gives the formula for $I_0(a)$ in the box above.

With this background, we can now check the moments of the Gaussian distribution:

$$1 = \int_{-\infty}^\infty dx\, \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-x_0)^2}{2\sigma^2}} = \frac{1}{\sigma\sqrt{2\pi}} \cdot 2\, I_0\!\left(\frac{1}{2\sigma^2}\right) = \frac{1}{\sigma\sqrt{2\pi}} \cdot \frac{2}{2}\sqrt{2\pi\sigma^2} = 1 \quad \text{(expected)},$$

$$\langle x \rangle = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^\infty dx\, \left[(x - x_0) + x_0\right] e^{-\frac{(x-x_0)^2}{2\sigma^2}} = x_0 \quad \text{(by symmetry about } x_0\text{)},$$

and

$$\langle x^2 \rangle = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^\infty dx\, \left[(x - x_0)^2 + 2 x x_0 - x_0^2\right] e^{-\frac{(x-x_0)^2}{2\sigma^2}} = \frac{1}{\sigma\sqrt{2\pi}}\, 2\, I_2(a) + x_0^2 = \sigma^2 + x_0^2,$$


where I have used the substitution $y = x - x_0$ (and, in the $I_2$ term, $a = 1/(2\sigma^2)$).
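A minimal numerical sketch of these moment checks, assuming NumPy/SciPy and arbitrary test values for $x_0$ and $\sigma$:

```python
import numpy as np
from scipy.integrate import quad

x0, sigma = 1.3, 0.7  # arbitrary test parameters

def p(x):
    """Gaussian distribution with mean x0 and width sigma."""
    return np.exp(-(x - x0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

norm = quad(lambda x: p(x), -np.inf, np.inf)[0]          # should be 1
mean = quad(lambda x: x * p(x), -np.inf, np.inf)[0]      # should be x0
xsq  = quad(lambda x: x**2 * p(x), -np.inf, np.inf)[0]   # should be sigma^2 + x0^2
print(norm, mean, xsq, xsq - mean**2)                    # last entry: variance = sigma^2
```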

We then calculate the variance:

$$\left\langle (x - x_0)^2 \right\rangle = \langle x^2 \rangle - \langle x \rangle^2 = \left(\sigma^2 + x_0^2\right) - x_0^2 = \sigma^2,$$

showing that the notation for the Gaussian distribution is consistent.

Joint Probability Distributions, Statistical Independence, and Correlations:

Suppose we now look at joint observation of two (or more) random variables, e.g., $\sigma, \tau$ (discrete) or $x, y$ (continuous).

In general, we have a (joint) probability $p_{k,l}(\sigma,\tau)$ that $\sigma$ has the value $\sigma_k$ and $\tau$ has the value $\tau_l$, or, for the continuous case, $dx\,dy\, p(x,y)$ that the observed $x$ value lies between $x$ and $x+dx$ and $y$ lies between $y$ and $y+dy$. (With similar conditions of positivity and normalization.)

If the variables $\sigma,\tau$ or $x,y$ are "statistically independent", then (e.g.) $p(x,y)$ is a product

$$p(x,y) = p_1(x)\, p_2(y)$$

of single-variable (normalized) probability distributions. When this is true, then averages like $\langle xy \rangle = \langle x \rangle \langle y \rangle$ or, more generally, $\langle f(x)\, g(y) \rangle$ factorize according to

$$\langle f(x)\, g(y) \rangle \equiv \int dx\,dy\, p(x,y)\, f(x)\, g(y) = \left[\int dx\, p_1(x)\, f(x)\right] \left[\int dy\, p_2(y)\, g(y)\right] = \langle f(x) \rangle\, \langle g(y) \rangle.$$

Comment: This notation can be misleading, since $\langle f(x)\, g(y) \rangle$ doesn't depend on $x$ and $y$ but, rather, on the probability distribution $P$. Thus, it makes more sense to write the above result as

$$\left\langle fg \right\rangle_{p = p_1 p_2} = \left\langle f \right\rangle_{p_1} \left\langle g \right\rangle_{p_2}.$$
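A minimal Monte Carlo sketch of this factorization, assuming NumPy and two arbitrarily chosen independent distributions for $x$ and $y$ (the particular $f$ and $g$ are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent variables drawn from two different (arbitrary) distributions.
x = rng.normal(loc=1.0, scale=0.5, size=n)   # samples of p1(x)
y = rng.uniform(low=-1.0, high=2.0, size=n)  # samples of p2(y)

f = np.sin(x)   # any f(x)
g = y**2        # any g(y)

print(np.mean(f * g))            # <f(x) g(y)>
print(np.mean(f) * np.mean(g))   # <f(x)> <g(y)> -- agrees up to sampling error
```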

If this factorization is NOT true, then the two variables are "correlated." Note: If the two variables x and y are governed by the same probability distribution (two identical dice), then the single-variable distributions $p_1$ and $p_2$ are the same; however, that's not necessarily so (two independent dice, each dishonest but in a different way).

Discrete Example: Two $\pm$ variables $\sigma$ and $\tau$:

General distribution is

$$p_{kl} = \begin{pmatrix} p_{++} & p_{+-} \\ p_{-+} & p_{--} \end{pmatrix} \quad \text{with} \quad p_{kl} \ge 0, \qquad \sum_{kl} p_{kl} = p_{++} + p_{+-} + p_{-+} + p_{--} = 1.$$

When $p_{kl} = p_k\, p_l$ with $p_+ = p$, $p_- = q$, then

$$p_{kl} = \begin{pmatrix} p^2 & pq \\ pq & q^2 \end{pmatrix}.$$

Suppose

$$p_{kl} = \begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}.$$

Q: Can this be represented as a distribution of two independent variables?
A: Obviously not. Think what it means: The outcomes of $\sigma$ and $\tau$ are completely "anticorrelated". When $\sigma = +$, then $\tau = -$ and vice versa, each outcome with probability 1/2 (normalization).
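A minimal sketch of that check, assuming NumPy: compare the joint matrix with the product of its marginals, and compute the averages directly.

```python
import numpy as np

# Rows index sigma = +1, -1; columns index tau = +1, -1.
p = np.array([[0.0, 0.5],
              [0.5, 0.0]])

p_sigma = p.sum(axis=1)   # marginal distribution of sigma: [1/2, 1/2]
p_tau   = p.sum(axis=0)   # marginal distribution of tau:   [1/2, 1/2]

print(np.outer(p_sigma, p_tau))   # product of marginals: all entries 1/4
print(p)                          # joint: off-diagonal 1/2 -- not a product

vals = np.array([+1, -1])
print(vals @ p_sigma)             # <sigma> = 0
print(vals @ p @ vals)            # <sigma tau> = -1 (completely anticorrelated)
```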

Note:

$$\langle \sigma \rangle = (+1)\left(p_{++} + p_{+-}\right) + (-1)\left(p_{-+} + p_{--}\right) = 0,$$

but

$$\langle \sigma\tau \rangle = (+1)\left(p_{++} + p_{--}\right) + (-1)\left(p_{-+} + p_{+-}\right) = -1.$$

Example: Particles in a box. First consider a single particle by itself. If there is some sort of potential (e.g., gravitational) acting inside the box, then the particle is more likely to be found where that potential gives it a lower energy, i.e., at or near the bottom of the box. (More later.) But assume for the moment that such is NOT the case. Then we might reasonably assume that the position of the particle is equally probable anywhere in the box. Thus, we might expect the position probability distribution to be

$$p_{\text{uniform}}(\vec{r}) = \begin{cases} \dfrac{1}{V}, & \vec{r} \text{ inside box} \\[4pt] 0, & \vec{r} \text{ outside box,} \end{cases}$$

where $V$ is the box volume, so that $\dfrac{dV}{V}$ is the probability that the particle will be found inside a particular element of box volume $dV$. Note that this is normalized in the sense that

$$\int_{\text{box}} dV\, p_{\text{uniform}}(\vec{r}) = 1.$$

Now, put a second particle in the box, and let's ask what is the joint position probability for particle 1 AND particle 2. If the positions of the two particles are statistically independent, then we might expect

$$p_{\text{joint}}(\vec{r}_1, \vec{r}_2) = \frac{1}{V^2} = p_{\text{uniform}}(\vec{r}_1)\, p_{\text{uniform}}(\vec{r}_2)$$

inside the box (and zero outside).

This makes sense (and we shall use it) ONLY if the two particles have no interaction with one another. If the particles have a hard core, so that they cannot approach closer than some core diameter $d$, then

$$p_{\text{joint}}(\vec{r}_1, \vec{r}_2) = 0 \quad \text{for} \quad \left|\vec{r}_1 - \vec{r}_2\right| < d,$$

which is certainly NOT consistent with the above. Similarly, if particles 1 and 2 attract one another, then we expect that

$$p_{\text{joint}}(\vec{r}_1, \vec{r}_2) > \frac{1}{V^2} \quad \text{for some region} \quad \left|\vec{r}_1 - \vec{r}_2\right| > d.$$
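A minimal sketch of the hard-core case, assuming a unit box and an illustrative core diameter $d = 0.2$: pairs are generated by rejection, and the separation histogram shows zero weight below $d$, so the joint distribution cannot equal $1/V^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 0.2          # assumed hard-core diameter (unit box assumed)
n = 200_000

# Propose independent uniform pairs in a unit box, reject overlapping ones.
r1 = rng.uniform(0.0, 1.0, size=(n, 3))
r2 = rng.uniform(0.0, 1.0, size=(n, 3))
sep = np.linalg.norm(r1 - r2, axis=1)
keep = sep >= d                      # hard-core constraint

# Compare the separation histograms with and without the constraint:
bins = np.linspace(0.0, 0.5, 6)
print(np.histogram(sep, bins=bins)[0])        # independent pairs: weight at all separations
print(np.histogram(sep[keep], bins=bins)[0])  # hard-core pairs: zero counts below d
```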

Both these effects, hard-core repulsion and nearby attraction, are important for dense gases.

Random walks, coin tossing, and why you can sometimes predict averages with confidence.

Let's look at an N-step random walk in 1D or, equivalently, a game in which you toss a coin N times, winning $1 every time the coin comes up heads and losing $1 every time it comes up tails. We will assume that the coin is honest, so that every time it is tossed, it has equal probability $p_+ = p_- = 1/2$ of coming down heads or tails or, equivalently, a random step of length, say, $a$ has equal probability of being to the right and landing you at $x = a$ or being to the left and landing you at $x = -a$. Both these situations correspond to a random process described by N statistically independent two-state random variables $\{\sigma_n\}_{n=1}^N$ with $\sigma_n = \pm 1$ for all $n$ independently,

$$p_{\sigma_1, \sigma_2, \ldots, \sigma_N} = p_{\sigma_1}\, p_{\sigma_2} \cdots p_{\sigma_N}, \quad \text{with} \quad p_\sigma = \frac{1}{2}\left(\delta_{\sigma,+1} + \delta_{\sigma,-1}\right). \quad \text{(note product form)}$$

Note: for homework I will ask you to do the case $p_+ = p$, $p_- = q$ ("biased walk", "loaded coin").

The net length $X$ of the walk (or, if you prefer, the net sum $M$ of your winnings/losses, in dollars) is described by a variable

$$M = \frac{X}{a} = \sum_{n=1}^{N} \sigma_n = N_+ - N_-,$$

where $N_\pm$ is the total number of $+/-$ tosses/steps over the $N$ trials.

Note for future use:

$$N = N_+ + N_-, \qquad M = N_+ - N_- \quad \Longrightarrow \quad N_\pm = \tfrac{1}{2}\left(N \pm M\right).$$

Question: What can we predict with certainty about $M$? For small $N$, the answer is "not much." After $N$ tosses, the possible outcomes can be any number between $-N$ and $+N$ in steps of two. So, we can say with certainty for, e.g., $N = 10$, that the result $M$ will be one of the even integers $-10, -8, -6, \ldots, 6, 8, 10$.

Question: If you had to bet on the result, what number would you choose? For $N = 1$, our two choices are $M = +1$ or $-1$. Each outcome has a 50% probability of being right. But, for $N > 1$, you expect that numbers near $M = 0$ are better bets than numbers near $M = \pm N$. E.g., for $N = 2$, there are four equally likely results $M(++) = 2$, $M(+-) = M(-+) = 0$, and $M(--) = -2$, each with probability $\left(\tfrac{1}{2}\right)^N = \tfrac{1}{4}$ for $N = 2$. Thus, you have a 50% chance of being right if you guess $M = 0$ but only a 25% chance if you guess $M = \pm 2$. What I want to do now is to discuss first some moments of $M$ and then the full probability distribution for $M$.
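This counting is easy to reproduce by brute force. A minimal sketch using exact enumeration of all $2^N$ equally likely toss sequences (standard library only):

```python
from collections import Counter
from itertools import product

def walk_distribution(N):
    """Exact P(M) for an N-step honest walk, by enumerating all 2^N sequences."""
    counts = Counter(sum(steps) for steps in product((+1, -1), repeat=N))
    return {M: c / 2**N for M, c in sorted(counts.items())}

print(walk_distribution(2))   # {-2: 0.25, 0: 0.5, 2: 0.25}
print(walk_distribution(10))  # peaked at M = 0, even values from -10 to +10
```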

“Mean of M”

$$\langle M \rangle = \left\langle \sum_{n=1}^{N} \sigma_n \right\rangle = \sum_{n=1}^{N} \langle \sigma_n \rangle = 0,$$

since $\{\sigma_n\}_{n=1}^N$ are independent random variables with

$$\langle \sigma_n \rangle = \frac{1}{2}(+1) + \frac{1}{2}(-1) = 0 \quad \text{for each } n.$$

Similarly,

$$\langle M^2 \rangle = \left\langle \left(\sum_{n=1}^{N} \sigma_n\right)^2 \right\rangle = \sum_{n=1}^{N} \langle \sigma_n^2 \rangle + \sum_{m \ne n} \langle \sigma_m \sigma_n \rangle = N,$$

where I have separated the diagonal and off-diagonal terms. For $m \ne n$, $\langle \sigma_m \sigma_n \rangle = \langle \sigma_m \rangle \langle \sigma_n \rangle = 0$, since $\sigma_m$ and $\sigma_n$ are statistically independent and $\langle \sigma_n \rangle = 0$. But,

$$\langle \sigma_n^2 \rangle = \frac{1}{2}(+1)^2 + \frac{1}{2}(-1)^2 = 1.$$

Result: Although the average of $M$ is zero, the variance

$$\sigma_N^2 \equiv \langle M^2 \rangle - \langle M \rangle^2 = N\,\sigma_1^2 = N,$$

i.e., the rms (root mean square) value of $M$ is

$$\sqrt{\langle M^2 \rangle} = \sqrt{N},$$

i.e., expected values of $M$ after $N$ steps/tosses are within $\sqrt{N}$ of the mean (zero). This does not allow us to predict $M$; but, it does guide our guesses. For large $N$, $\sqrt{N} \ll N$, so the range of "reasonable" guesses is significantly narrowed. We will now see how to quantify this:

[Sketch: the probability $P_{N,M}$ plotted versus $M$ over the range $-N$ to $+N$, peaked at $M = 0$ with width of order $N^{1/2}$.]
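A minimal simulation sketch of these two results, assuming NumPy: generate many N-step honest walks and compare the sample mean and variance of $M$ with 0 and $N$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 1000, 100_000

# Each row is one walk: N independent steps sigma_n = +/-1 with equal probability.
steps = rng.choice([+1, -1], size=(trials, N))
M = steps.sum(axis=1)

print(M.mean())                              # <M> ~ 0
print(M.var(), N)                            # <M^2> - <M>^2 ~ N
print(np.mean(np.abs(M) <= 2 * np.sqrt(N)))  # most walks end within ~2 sqrt(N) of zero
```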

