8/20/2019 Eece 522 Notes_24 Ch_10b
10.5 Properties of Gaussian PDF
To help us develop some general MMSE theory for the GaussianData/Gaussian Prior case, we need to have some solid results for
joint and conditional Gaussian PDFs.
We’ll consider the bivariate case but the ideas carry over to thegeneral N -dimensional case.
Bivariate Gaussian Joint PDF for 2 RV’s X and Y
$$
p(x,y) = \frac{1}{2\pi|\mathbf{C}|^{1/2}} \exp\Bigg\{ -\frac{1}{2} \underbrace{\begin{bmatrix} x-\mu_X \\ y-\mu_Y \end{bmatrix}^T \mathbf{C}^{-1} \begin{bmatrix} x-\mu_X \\ y-\mu_Y \end{bmatrix}}_{\text{quadratic form}} \Bigg\}
$$
$$
\mathbf{C} = E\left\{ \begin{bmatrix} X-\mu_X \\ Y-\mu_Y \end{bmatrix} \begin{bmatrix} X-\mu_X \\ Y-\mu_Y \end{bmatrix}^T \right\} = \begin{bmatrix} \mathrm{var}(X) & \mathrm{cov}(X,Y) \\ \mathrm{cov}(X,Y) & \mathrm{var}(Y) \end{bmatrix} = \begin{bmatrix} \sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2 \end{bmatrix}
$$
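The joint PDF above is easy to sanity-check numerically (this snippet is not part of the original notes; the function names are mine). For ρ = 0 the quadratic form decouples, so the joint PDF must factor into the product of the two 1-D marginal densities:

```python
import numpy as np

def bivariate_gauss_pdf(x, y, mu_x, mu_y, C):
    """Evaluate p(x,y) via the quadratic form with covariance matrix C."""
    d = np.array([x - mu_x, y - mu_y])
    quad = d @ np.linalg.inv(C) @ d          # the "quadratic form"
    return np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(C)))

def gauss_1d(x, mu, var):
    """1-D Gaussian density N(mu, var)."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# uncorrelated case (rho = 0): joint must equal product of marginals
C0 = np.array([[4.0, 0.0], [0.0, 9.0]])
joint_val = bivariate_gauss_pdf(1.0, -2.0, 0.0, 1.0, C0)
factored = gauss_1d(1.0, 0.0, 4.0) * gauss_1d(-2.0, 1.0, 9.0)
```

For ρ ≠ 0 the same function works; only the factorization check no longer applies.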
[Figure: contour plot and 3-D surface plot of the bivariate Gaussian PDF p(x, y) over the (x, y) plane]
8/20/2019 Eece 522 Notes_24 Ch_10b
3/18
3
Marginal PDFs of Bivariate Gaussian

What are the marginal (or individual) PDFs?

We know that we can get them by integrating:

$$
p(x) = \int_{-\infty}^{\infty} p(x,y)\,dy \qquad p(y) = \int_{-\infty}^{\infty} p(x,y)\,dx
$$

After performing these integrals you get:

$$
X \sim N(\mu_X, \mathrm{var}\{X\}) \qquad Y \sim N(\mu_Y, \mathrm{var}\{Y\})
$$
[Figure: contours of the joint PDF with the marginal PDFs p(x) and p(y) shown along the x and y axes]
Comment on “Jointly” Gaussian
We have used the term “Jointly” Gaussian…
Q: EXACTLY what does that mean?
A: That the RVs have a joint PDF that is Gaussian:
$$
p(x,y) = \frac{1}{2\pi|\mathbf{C}|^{1/2}} \exp\left\{ -\frac{1}{2} \begin{bmatrix} x-\mu_X \\ y-\mu_Y \end{bmatrix}^T \mathbf{C}^{-1} \begin{bmatrix} x-\mu_X \\ y-\mu_Y \end{bmatrix} \right\}
$$
We’ve shown that jointly Gaussian RVs also have Gaussian
marginal PDFs
Q: Does having Gaussian Marginals imply Jointly Gaussian?
In other words… if X is Gaussian and Y is Gaussian is it
always true that X and Y are jointly Gaussian???
A: No!!!!!
Example for 2 RVs: see Reading Notes on "Counter Example" posted on BB.
We’ll construct a counterexample: start with a zero-mean,
uncorrelated 2-D joint Gaussian PDF and modify it so it is no
longer 2-D Gaussian but still has Gaussian marginals.
$$
p_{XY}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y} \exp\left\{ -\frac{1}{2}\left( \frac{x^2}{\sigma_X^2} + \frac{y^2}{\sigma_Y^2} \right) \right\}
$$

[Figure: contours of this uncorrelated zero-mean Gaussian in the (x, y) plane, with alternating "shaded regions" marked]
But if we modify it by:
• Setting it to 0 in the shaded regions
• Doubling its value elsewhere

we get a 2-D PDF that is not a joint Gaussian, but the marginals are the same as the original!
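Since the slide's figure did not survive extraction, one concrete choice of "shaded regions" is assumed here: zero the PDF in quadrants II and IV and double it in quadrants I and III. This numerical sketch (mine, not from the notes) checks that a marginal of the modified PDF still matches the original 1-D Gaussian:

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal rule (kept local for NumPy-version portability)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) * 0.5)

def modified_pdf(x, y, sx=1.0, sy=1.0):
    """Zero-mean uncorrelated Gaussian, set to 0 in quadrants II & IV and
    doubled in quadrants I & III (an assumed choice of 'shaded regions')."""
    base = np.exp(-0.5 * (x**2 / sx**2 + y**2 / sy**2)) / (2 * np.pi * sx * sy)
    return np.where(x * y >= 0, 2.0 * base, 0.0)

# marginal p(x0) of the modified PDF, by numerical integration over y
y = np.linspace(-10.0, 10.0, 2001)
x0 = 0.7
marginal = trapz(modified_pdf(np.full_like(y, x0), y), y)
original = np.exp(-0.5 * x0**2) / np.sqrt(2.0 * np.pi)  # N(0,1) marginal
```

The zeroed half of the y-axis removes exactly half the mass, and the doubling restores it, so the marginal is unchanged.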
Conditional PDFs of Bivariate Gaussian
What are the conditional PDFs?
If you know that X has taken the value X = x₀, how is Y distributed?

$$
p(y|x_0) = \frac{p(x_0,y)}{p(x_0)} = \frac{\overbrace{p(x_0,y)}^{\text{slice @ } x_0}}{\underbrace{\displaystyle\int_{-\infty}^{\infty} p(x_0,y)\,dy}_{\text{normalizer}}}
$$

For the plotted example the covariance matrix is

$$
\mathbf{C} = \begin{bmatrix} 25 & 16.258 \\ 16.258 & 16 \end{bmatrix}
$$

Slope of line: $\mathrm{cov}\{X,Y\}/\mathrm{var}\{X\} = \rho\sigma_Y/\sigma_X$
[Figure: joint PDF contours with the slice at X = 5; the conditional PDF p(y|X=5) is shifted and narrower than the marginal p(y)]

Note: Conditioning on a correlated RV
• shifts the mean
• reduces the variance
Theorem 10.1: Conditional PDF of Bivariate Gaussian
Let X and Y be random variables distributed jointly Gaussian
with mean vector [ E { X } E {Y }]T and covariance matrix
$$
\mathbf{C} = \begin{bmatrix} \mathrm{var}(X) & \mathrm{cov}(X,Y) \\ \mathrm{cov}(X,Y) & \mathrm{var}(Y) \end{bmatrix} = \begin{bmatrix} \sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2 \end{bmatrix}
$$
Then p(y|x) is also Gaussian, with mean and variance given by:

$$
E\{Y|X=x_0\} = E\{Y\} + \rho\frac{\sigma_Y}{\sigma_X}\left(x_0 - E\{X\}\right)
$$

$$
\mathrm{var}\{Y|X=x_0\} = \sigma_Y^2\left(1-\rho^2\right)
$$
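The theorem can be checked deterministically: slice the joint PDF at x₀, normalize the slice, and compare its numerical mean and variance with the closed forms above (the numbers below are illustrative choices of mine, not from the notes):

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal rule (kept local for NumPy-version portability)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) * 0.5)

# example parameters (illustrative)
mu_x, mu_y, sx, sy, rho = 1.0, -2.0, 2.0, 3.0, 0.6
C = np.array([[sx**2, rho*sx*sy], [rho*sx*sy, sy**2]])
Cinv = np.linalg.inv(C)
norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(C)))

x0 = 2.5
y = np.linspace(-30.0, 30.0, 6001)
dx, dy = x0 - mu_x, y - mu_y
quad = Cinv[0, 0]*dx**2 + 2.0*Cinv[0, 1]*dx*dy + Cinv[1, 1]*dy**2
slice_vals = norm * np.exp(-0.5 * quad)      # p(x0, y): the "slice"

w = slice_vals / trapz(slice_vals, y)        # normalized => p(y | x0)
m_num = trapz(w * y, y)
v_num = trapz(w * (y - m_num)**2, y)

m_thm = mu_y + rho * (sy / sx) * (x0 - mu_x)  # Theorem 10.1 mean
v_thm = sy**2 * (1.0 - rho**2)                # Theorem 10.1 variance
```

Both routes give mean −0.65 and variance 5.76 for these numbers.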
Impact on MMSE
We know the MMSE estimate of RV Y after observing the RV X = x₀:

$$
\hat{Y} = E\{Y|X = x_0\}
$$
So… using the ideas we have just seen: if the data and the parameter are jointly Gaussian, then
$$
\hat{Y}_{MMSE} = E\{Y|X=x_0\} = E\{Y\} + \rho\frac{\sigma_Y}{\sigma_X}\left(x_0 - E\{X\}\right)
$$
Theorem 10.2: Conditional PDF of Multivariate Gaussian
Let X (k × 1) and Y (l × 1) be random vectors distributed jointly Gaussian with mean vector [E{X}^T E{Y}^T]^T and covariance matrix
$$
\mathbf{C} = \begin{bmatrix} \underset{(k\times k)}{\mathbf{C}_{XX}} & \underset{(k\times l)}{\mathbf{C}_{XY}} \\ \underset{(l\times k)}{\mathbf{C}_{YX}} & \underset{(l\times l)}{\mathbf{C}_{YY}} \end{bmatrix}
$$
Then p(y|x) is also Gaussian with mean vector and covariance
matrix given by:
$$
E\{\mathbf{Y}|\mathbf{x}_0\} = E\{\mathbf{Y}\} + \mathbf{C}_{YX}\mathbf{C}_{XX}^{-1}\left(\mathbf{x}_0 - E\{\mathbf{X}\}\right)
$$

$$
\mathbf{C}_{Y|X} = \mathbf{C}_{YY} - \mathbf{C}_{YX}\mathbf{C}_{XX}^{-1}\mathbf{C}_{XY}
$$
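The theorem translates directly into a few lines of NumPy (a sketch of mine, not from the notes); in the 1 × 1 case it must reduce to Theorem 10.1:

```python
import numpy as np

def gauss_conditional(mu_x, mu_y, Cxx, Cxy, Cyx, Cyy, x0):
    """Theorem 10.2: mean vector and covariance matrix of p(y | x0)."""
    K = Cyx @ np.linalg.inv(Cxx)        # gain matrix C_YX C_XX^{-1}
    mean = mu_y + K @ (x0 - mu_x)
    cov = Cyy - K @ Cxy
    return mean, cov

# 1-D sanity check: var X = 4, var Y = 9, cov = 3.6 (so rho = 0.6)
# Theorem 10.1 gives mean -2 + 0.9*(2.5 - 1) = -0.65, var 9*(1-0.36) = 5.76
m1, C1 = gauss_conditional(np.array([1.0]), np.array([-2.0]),
                           np.array([[4.0]]), np.array([[3.6]]),
                           np.array([[3.6]]), np.array([[9.0]]),
                           np.array([2.5]))
```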
10.6 Bayesian Linear Model
Now we have all the machinery we need to find the MMSE for
the “Bayesian Linear Model”
$$
\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}
$$
Bayesian Linear Model is Jointly Gaussian
θ and w are each Gaussian and are independent.
Thus their joint PDF is a product of Gaussians…
…which has the form of a jointly Gaussian PDF
Can now use: a linear transform of jointly Gaussian is jointly Gaussian
$$
\begin{bmatrix} \mathbf{x} \\ \boldsymbol{\theta} \end{bmatrix} = \begin{bmatrix} \mathbf{H} & \mathbf{I} \\ \mathbf{I} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \boldsymbol{\theta} \\ \mathbf{w} \end{bmatrix} \quad \Rightarrow \quad \text{Jointly Gaussian}
$$
Thus, Thm. 10.2 applies! The posterior PDF is…
• Gaussian
• Completely described by its mean and covariance
Conditional PDF for Bayesian Linear Model
To apply Theorem 10.2, notationally let X = x and Y = θ.

First we need:

$$
E\{\mathbf{X}\} = \mathbf{H}E\{\boldsymbol{\theta}\} + E\{\mathbf{w}\} = \mathbf{H}\boldsymbol{\mu}_\theta \qquad E\{\mathbf{Y}\} = E\{\boldsymbol{\theta}\} = \boldsymbol{\mu}_\theta
$$

And also $\mathbf{C}_{YY} = \mathbf{C}_\theta$ and (cross terms vanish by independence of θ and w):

$$
\begin{aligned}
\mathbf{C}_{XX} &= E\left\{(\mathbf{x}-\mathbf{H}\boldsymbol{\mu}_\theta)(\mathbf{x}-\mathbf{H}\boldsymbol{\mu}_\theta)^T\right\} \\
&= E\left\{[\mathbf{H}(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)+\mathbf{w}][\mathbf{H}(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)+\mathbf{w}]^T\right\} \\
&= \mathbf{H}E\left\{(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)^T\right\}\mathbf{H}^T + E\left\{\mathbf{w}\mathbf{w}^T\right\} = \mathbf{H}\mathbf{C}_\theta\mathbf{H}^T + \mathbf{C}_w
\end{aligned}
$$
$$
\begin{aligned}
\mathbf{C}_{YX} &= E\left\{(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)(\mathbf{x}-\boldsymbol{\mu}_x)^T\right\} \\
&= E\left\{(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)[\mathbf{H}(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)+\mathbf{w}]^T\right\} \\
&= E\left\{(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)(\boldsymbol{\theta}-\boldsymbol{\mu}_\theta)^T\right\}\mathbf{H}^T = \mathbf{C}_\theta\mathbf{H}^T
\end{aligned}
$$

Plugging into Theorem 10.2 gives the posterior mean:

$$
E\{\boldsymbol{\theta}|\mathbf{x}\} = \boldsymbol{\mu}_\theta + \mathbf{C}_\theta\mathbf{H}^T\left(\mathbf{H}\mathbf{C}_\theta\mathbf{H}^T + \mathbf{C}_w\right)^{-1}\left(\mathbf{x} - \mathbf{H}\boldsymbol{\mu}_\theta\right)
$$
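These blocks assemble into a short NumPy routine (a sketch of mine, not from the notes). Checking it on the DC-level case that follows: with H = 1, the posterior mean must collapse to the scalar shrinkage formula and the posterior variance to 1/(1/σ_A² + N/σ²):

```python
import numpy as np

def blm_posterior(H, mu_th, C_th, C_w, x):
    """Posterior mean and covariance for x = H θ + w via Theorem 10.2."""
    S = H @ C_th @ H.T + C_w               # C_XX = H C_θ H^T + C_w
    G = C_th @ H.T @ np.linalg.inv(S)      # C_YX C_XX^{-1}
    mean = mu_th + G @ (x - H @ mu_th)
    cov = C_th - G @ H @ C_th
    return mean, cov

# DC-level sanity check (illustrative numbers): θ = A, H = 1
N, mu_A, var_A, sigma2 = 4, 1.0, 2.0, 1.0
H = np.ones((N, 1))
x = np.array([1.5, 0.5, 2.0, 1.0])         # xbar = 1.25
mean, cov = blm_posterior(H, np.array([mu_A]), np.array([[var_A]]),
                          sigma2 * np.eye(N), x)

# closed-form scalar results for comparison
alpha = var_A / (var_A + sigma2 / N)
scalar_mean = mu_A + alpha * (x.mean() - mu_A)   # 1 + (8/9)(0.25) = 1 + 2/9
scalar_var = 1.0 / (1.0 / var_A + N / sigma2)    # 2/9
```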
Ex. 10.2: DC in AWGN w/ Gaussian Prior
Data Model: x[n] = A + w[n], where A and w[n] are independent,

$$
A \sim N(\mu_A, \sigma_A^2) \qquad w[n] \sim N(0, \sigma^2)
$$
Write in linear model form:
x = 1A + w   with H = 1 = [1 1 … 1]^T
Now General Result gives the MMSE estimate as:
$$
\hat{A}_{MMSE} = E\{A|\mathbf{x}\} = \mu_A + \sigma_A^2\mathbf{1}^T\left(\sigma_A^2\mathbf{1}\mathbf{1}^T + \sigma^2\mathbf{I}\right)^{-1}\left(\mathbf{x} - \mathbf{1}\mu_A\right)
$$
Aside: Matrix Inversion Lemma
$$
\left(\mathbf{A} + \mathbf{u}\mathbf{u}^T\right)^{-1} = \mathbf{A}^{-1} - \frac{\mathbf{A}^{-1}\mathbf{u}\mathbf{u}^T\mathbf{A}^{-1}}{1 + \mathbf{u}^T\mathbf{A}^{-1}\mathbf{u}}
$$
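The lemma is easy to verify numerically (snippet of mine, not from the notes); both sides of the identity are computed for a random well-conditioned A and a random vector u:

```python
import numpy as np

# Numerical check of the matrix inversion lemma (rank-one update form)
rng = np.random.default_rng(0)
A = np.diag(rng.uniform(1.0, 2.0, size=5))   # positive definite => invertible
u = rng.normal(size=5)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + np.outer(u, u))
rhs = Ainv - (Ainv @ np.outer(u, u) @ Ainv) / (1.0 + u @ Ainv @ u)
max_err = float(np.max(np.abs(lhs - rhs)))
```

The payoff in the example: with A = σ²I and u = σ_A 1, the N × N inverse reduces to scalar arithmetic.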
Continuing the Example… Apply the Matrix Inversion Lemma:
$$
\hat{A}_{MMSE} = \mu_A + \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\left(\bar{x} - \mu_A\right) = \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\,\bar{x} + \frac{\sigma^2/N}{\sigma_A^2 + \sigma^2/N}\,\mu_A
$$

If $\sigma_A^2 \gg \sigma^2/N$: the gain is large, the data gets heavy use, and $\hat{A}_{MMSE} \approx \bar{x}$

If $\sigma_A^2 \ll \sigma^2/N$: the prior dominates and $\hat{A}_{MMSE} \approx \mu_A$
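The shrinkage form and its two limits can be demonstrated in a few lines (sketch of mine; names and numbers are illustrative):

```python
import numpy as np

def dc_mmse(x, mu_A, var_A, sigma2):
    """Shrinkage form of the DC-level MMSE estimator."""
    alpha = var_A / (var_A + sigma2 / len(x))   # weight on the data, in (0,1)
    return alpha * np.mean(x) + (1.0 - alpha) * mu_A

x = np.array([3.0, 5.0])                        # xbar = 4
est_balanced = dc_mmse(x, 0.0, 0.5, 1.0)        # sigma2/N = 0.5 -> alpha = 1/2
est_weak_prior = dc_mmse(x, 0.0, 1e8, 1.0)      # alpha ~ 1 -> follows the data
est_strong_prior = dc_mmse(x, 0.0, 1e-8, 1.0)   # alpha ~ 0 -> follows the prior
```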
Using similar manipulations gives:
$$
\mathrm{var}(A|\mathbf{x}) = \frac{1}{\dfrac{1}{\sigma_A^2} + \dfrac{N}{\sigma^2}} = \frac{\sigma_A^2\,(\sigma^2/N)}{\sigma_A^2 + \sigma^2/N}
$$

Like parallel resistors… the small one wins!

⇒ var(A|x) is ≤ the smaller of:
• the data estimate variance σ²/N
• the prior variance σ_A²
10.7 Nuisance Parameters
One difficulty in classical methods is that nuisance parameters must be explicitly dealt with.
In Bayesian methods they are simply “Integrated Away”!!!!
Recall Emitter Location: parameter vector [x y z f₀]^T
In Bayesian Approach…
From p(x, y, z, f₀ | x) we can get p(x, y, z | x) by integrating out the nuisance parameter f₀:

$$
p(x,y,z|\mathbf{x}) = \int p(x,y,z,f_0|\mathbf{x})\,df_0
$$

Then… find the conditional mean for the MMSE estimate!
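On a grid, "integrating away" a nuisance parameter is just a numerical integral along one axis. This toy sketch (mine; the correlated zero-mean posterior is purely illustrative, not the emitter-location posterior) marginalizes f₀ out of a 2-D posterior and then takes the conditional mean:

```python
import numpy as np

def trapz(f, x):
    """Simple trapezoidal rule (kept local for NumPy-version portability)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) * 0.5)

# Toy 2-D posterior p(a, f0 | data): correlated zero-mean Gaussian on a grid
a = np.linspace(-6.0, 6.0, 401)
f0 = np.linspace(-6.0, 6.0, 401)
A, F = np.meshgrid(a, f0, indexing="ij")
rho = 0.4
quad = (A**2 - 2.0 * rho * A * F + F**2) / (1.0 - rho**2)
joint = np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(1.0 - rho**2))

# "integrate away" the nuisance parameter f0 ...
marg = np.array([trapz(joint[i, :], f0) for i in range(len(a))])
total = trapz(marg, a)            # should be ~1 (a valid PDF survives)
a_mmse = trapz(marg * a, a)       # ... then take the conditional mean
```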