7/29/2019 Referenca - Gish & Pierce
Analysis of 1-D Nested Lattice
Quantization and Slepian-Wolf Coding for
Wyner-Ziv Coding of i.i.d. Sources
Lori A. Dalton
ELEN 663: Data Compression
Dept. of Electrical Engineering
Texas A&M University
Due: May 9, 2003
Abstract
Entropy coded scalar quantization (ECSQ) performs 1.53 dB from optimal
lossy compression at high data rates. In this paper, we generalize this result to
distributed source coding with side information. 1-D nested scalar quantization
with Slepian-Wolf coding is compared to Wyner-Ziv coding of i.i.d. Gaussian
sources at high data rates. We show that nested scalar quantization with
Slepian-Wolf coding performs 1.53 dB from Wyner-Ziv coding at high data rates.
1 Introduction
It has been shown that lossless compression of a scalar quantized source, or entropy
coded scalar quantization, performs 1.53 dB from optimal lossy compression at high
data rates. In this report, we find analogous results for distributed source coding
of two correlated Gaussian sources. Fig. 1 illustrates nested scalar quantization with
Slepian-Wolf coding (NSQ-SW). One source is assumed to be imperfectly known to
the receiver, while the second source serves as perfectly known side information. Both
sources are processed independently of each other at the encoder, but are decoded
jointly. The first source undergoes 1-D nested lattice quantization (a generalization of
scalar quantization) and lossless Slepian-Wolf coding. We compare this performance
to Wyner-Ziv coding, which is illustrated in Fig. 2. This paper claims that NSQ-SW
coding performs 1.53 dB from Wyner-Ziv coding at high data rates.
Figure 1: Distributed source coding with nested scalar quantization and Slepian-Wolf
(NSQ-SW) coding.
Figure 2: Distributed source coding with Wyner-Ziv.
To motivate this problem, we first consider the case where both sources are uncorrelated.
Thus, the side information at the receiver does not help to decode the first
source. This degenerate case reduces to the classical problem of comparing ECSQ
with optimal lossy compression. ECSQ performs 1.53 dB worse (see Section 2.4),
indicating that, at least in the uncorrelated source case, NSQ-SW performs 1.53 dB
worse than Wyner-Ziv coding.
2 Preliminaries
In this section, we review concepts that will be used throughout this paper. The
source model is described in Section 2.1. In Sections 2.2 and 2.3, we review
Slepian-Wolf coding and Wyner-Ziv coding, respectively. Finally, in Sections 2.4 and 2.5, we
cover entropy coded scalar quantization and nested scalar quantization, respectively.
2.1 Source Model
Two i.i.d. sources, X and Y, are jointly Gaussian with correlation coefficient $\rho$. We
will consider Y to be known perfectly at the decoder as side information, while X is
compressed with loss. To model the correlation between X and Y, we use $Y = X + Z$,
where X and Z are independent Gaussian random variables with

$$X \sim \mathcal{N}(0, \sigma_X^2), \qquad (1)$$

$$Z \sim \mathcal{N}(0, \sigma_Z^2), \qquad (2)$$

$$Y \sim \mathcal{N}(0, \sigma_Y^2), \qquad (3)$$

where $\sigma_Y^2 = \sigma_X^2 + \sigma_Z^2$, $\rho^2 = 1 - \sigma_Z^2/\sigma_Y^2$, and $\sigma_{X|Y}^2 = \sigma_X^2(1 - \rho^2) = \frac{\sigma_X^2 \sigma_Z^2}{\sigma_X^2 + \sigma_Z^2}$. We denote the
PDF of X by $f_X(x)$.
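As a quick numerical sanity check (my own illustration, not part of the original analysis), the conditional variance formula above can be verified by Monte Carlo simulation. The sketch below uses only the Python standard library, with illustrative values $\sigma_X = 1$ and $\sigma_Z = 0.1$:

```python
import random

random.seed(0)
sigma_x, sigma_z = 1.0, 0.1      # illustrative values, not from the paper
n = 200_000
xs = [random.gauss(0.0, sigma_x) for _ in range(n)]
ys = [x + random.gauss(0.0, sigma_z) for x in xs]

# Theoretical conditional variance sigma_{X|Y}^2 = sigma_X^2 sigma_Z^2 / (sigma_X^2 + sigma_Z^2)
var_cond = sigma_x**2 * sigma_z**2 / (sigma_x**2 + sigma_z**2)

# Empirical residual variance of the MMSE estimate E[X|Y] = (sigma_X^2 / sigma_Y^2) Y
a = sigma_x**2 / (sigma_x**2 + sigma_z**2)
emp = sum((x - a * y) ** 2 for x, y in zip(xs, ys)) / n
print(var_cond, emp)
```

The two printed values agree to within sampling error, confirming $\sigma_{X|Y}^2 = \sigma_X^2\sigma_Z^2/(\sigma_X^2 + \sigma_Z^2)$.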
2.2 Slepian-Wolf Coding
The setup of Slepian-Wolf coding is shown in Fig. 3. The idea is to optimally
compress X independently from Y without loss, knowing that X and Y will be
decoded jointly. Surprisingly, [2] shows that the same rates are achievable when encoding
both sources separately as when encoding them jointly. That is, the achievable rates
using Slepian-Wolf coding and the setup in Fig. 4 are equivalent. The achievable rate
region is given by (4)-(6) and shown in Fig. 5.
$$R_1 \geq H(X \mid Y), \qquad (4)$$

$$R_2 \geq H(Y \mid X), \qquad (5)$$

$$R = R_1 + R_2 \geq H(X, Y). \qquad (6)$$
Figure 3: Slepian-Wolf coding.
Figure 4: Lossless joint source coding.
Figure 5: Achievable rate region with Slepian-Wolf or joint source coding.
2.3 Wyner-Ziv Coding
Wyner-Ziv coding [3] generalizes Slepian-Wolf coding, now allowing lossy compression
of X. In general, the Wyner-Ziv coding scheme in Fig. 2 loses rate compared
to the joint source encoding scheme shown in Fig. 6. However, when X and Y are
zero mean, stationary, and memoryless Gaussian sources, the performance in both
cases is identical under the MSE distortion metric. In this case, achievable rates for
the source X are lower bounded by

$$R_{WZ}(D) = \frac{1}{2} \log^{+} \frac{\sigma_X^2 (1 - \rho^2)}{D}, \qquad (7)$$

where

$$\log^{+} x = \max\{\log x, 0\}. \qquad (8)$$
Figure 6: Lossy joint source coding.
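To make (7) concrete, here is a minimal sketch (illustrative parameter values of my own choosing) of the Wyner-Ziv rate-distortion function; note how $\log^+$ clips the rate to zero once the target distortion exceeds $\sigma_X^2(1-\rho^2)$:

```python
import math

def r_wz(D, sigma_x2=1.0, rho2=0.99):
    """Wyner-Ziv rate-distortion function for jointly Gaussian sources, Eq. (7)."""
    arg = sigma_x2 * (1 - rho2) / D
    return max(0.5 * math.log2(arg), 0.0)   # log+ clips negative values to zero

print(r_wz(0.001))   # about 1.66 bits
print(r_wz(1.0))     # 0.0: distortion target already exceeds sigma_{X|Y}^2
```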
2.4 Entropy Coded Scalar Quantization

We now consider the performance of ECSQ at high rates. The setup for ECSQ is shown
in Fig. 7. We find the high rate performance of ECSQ by following the approach
in [4].

Figure 7: Entropy coded scalar quantization (ECSQ).

For high rates, it has been shown that the optimal scalar quantizer for ECSQ is
uniform for all smooth PDFs [1]. We use the quantizer shown in Fig. 8, with the
following properties:
1. Quantizer bins have width q.

2. The quantization codeword for bin i is $\hat{x}_i = iq$, where $i = 0, \pm 1, \pm 2, \ldots$.

3. The thresholds between bins are the midpoints of the quantization levels, namely $t_i = \frac{\hat{x}_{i-1} + \hat{x}_i}{2} = \left(i - \frac{1}{2}\right)q$.

4. Quantization intervals are given by $R_i = [t_i, t_{i+1})$.

Figure 8: Uniform quantization for ECSQ.
For small q, we compute the MSE distortion of this quantizer:

$$
\begin{aligned}
D &= E[d(X, \hat{X})] = \sum_{i=-\infty}^{\infty} \int_{R_i} f_X(x)\, d(x, \hat{x}_i)\, dx \\
&= \sum_{i=-\infty}^{\infty} \int_{(i-\frac{1}{2})q}^{(i+\frac{1}{2})q} f_X(x)\,(x - iq)^2\, dx \\
&\approx \sum_{i=-\infty}^{\infty} f_X(iq) \int_{(i-\frac{1}{2})q}^{(i+\frac{1}{2})q} (x - iq)^2\, dx \\
&= \sum_{i=-\infty}^{\infty} f_X(iq)\, \frac{q^3}{12} = \frac{q^2}{12} \sum_{i=-\infty}^{\infty} f_X(iq)\, q \approx \frac{q^2}{12} \int_{-\infty}^{\infty} f_X(x)\, dx \\
&= \frac{q^2}{12}. \qquad (9)
\end{aligned}
$$
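The $q^2/12$ approximation in (9) is easy to check empirically. The following sketch (my own illustration, with $\sigma_X = 1$ and an arbitrary small q) quantizes Gaussian samples uniformly and compares the empirical MSE to $q^2/12$:

```python
import random

random.seed(1)
q = 0.05              # bin width, assumed small relative to sigma_X = 1
n = 200_000
mse = 0.0
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    xhat = round(x / q) * q       # nearest codeword iq
    mse += (x - xhat) ** 2
mse /= n
print(mse, q * q / 12)
```

The two printed values match to within Monte Carlo error, as (9) predicts for small q.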
Compressing the quantized source losslessly does not change distortion. Let $p_i$
be the probability that X is in bin i. Then for high rates, $p_i \approx f_X(iq)\, q$. The
corresponding entropy is given by

$$
\begin{aligned}
H(\hat{X}) &= -\sum_{i=-\infty}^{\infty} p_i \log_2 p_i \\
&= -\sum_{i=-\infty}^{\infty} f_X(iq)\, q \log_2\left(f_X(iq)\, q\right) \\
&= -\sum_{i=-\infty}^{\infty} f_X(iq)\, q \log_2 f_X(iq) - \sum_{i=-\infty}^{\infty} f_X(iq)\, q \log_2 q \\
&\approx -\int_{-\infty}^{\infty} f_X(x) \log_2 f_X(x)\, dx - \log_2 q \int_{-\infty}^{\infty} f_X(x)\, dx \\
&= h(X) - \log_2 q. \qquad (10)
\end{aligned}
$$

With efficient entropy coding, $R = H(\hat{X})$. Then from (10),

$$q = 2^{h(X) - R}. \qquad (11)$$
Combining (11) with (9), the rate-distortion function for ECSQ is given by

$$D_{ECSQ}(R) = \frac{1}{12}\, 2^{2h(X)}\, 2^{-2R}. \qquad (12)$$

Fig. 9 shows the setup for optimal lossy compression of source X. Performance
of the scheme in Fig. 9 is bounded by [5]:

$$D_L(R) = \frac{1}{2\pi e}\, 2^{2h(X)}\, 2^{-2R} \leq D(R) \leq \sigma_X^2\, 2^{-2R}. \qquad (13)$$

Moreover, $D(R) \approx D_L(R)$ for high rates. Thus, ECSQ performs $\pi e/6$, or 1.53 dB,
from the optimal lossy encoder. Note for Gaussian sources, $D_{ECSQ}(R) = \frac{\pi e}{6}\, \sigma_X^2\, 2^{-2R}$
and $D(R) = \sigma_X^2\, 2^{-2R}$.
Figure 9: Lossy encoding of a source, X.
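The 1.53 dB figure quoted throughout is simply the constant $\pi e/6$ expressed in decibels; a one-line check:

```python
import math

gap = math.pi * math.e / 6        # ratio D_ECSQ(R) / D(R) for Gaussian sources
gap_db = 10 * math.log10(gap)
print(round(gap_db, 2))           # 1.53
```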
2.5 Nested Scalar Quantization

Nested scalar quantization is similar to scalar quantization, except now each bin
consists of sub-bins uniformly spaced over the range of the source, X. This quantizer
is illustrated in Fig. 10 with the following properties:

1. N is the number of bins. Define $N_L$ and $N_U$ to be the indices of the lowest
and highest quantization levels, respectively. Note $N = N_U + N_L + 1$.

2. Sub-bins within each bin have width q. Bin-groups have width Nq.

3. The quantization codeword for bin i is $\hat{x}_i = iq$, $i = -N_L, \ldots, N_U$.

4. The thresholds between sub-bins are $t_i = \left(i - \frac{1}{2}\right)q$, $i = 0, \pm 1, \pm 2, \ldots$.

5. Quantization intervals are given by $R_i = \bigcup_j [t_{i+jN}, t_{i+jN+1})$, where i is the
bin index and j is the bin-group index.

Figure 10: Uniform nested scalar quantization.
3 High Rate Performance of NSQ-SW Coding
In this section, we analyze nested scalar quantization with Slepian-Wolf coding. We
first find the distortion between X and the estimated source output, denoted by $\hat{X}$.
We use the mean-square error (MSE) measure of distortion, $d(X, Y) = (X - Y)^2$.
Since Slepian-Wolf coding is lossless, it does not affect distortion. We then find the
achievable rate of the Slepian-Wolf encoder. From these results, we present a rate-distortion
bound for NSQ-SW coding. For high rate analysis, the bin-group width,
Nq, is fixed to a constant and $q \to 0$.
3.1 Distortion From Nested Scalar Quantization

The joint decoder sees the side information, $Y = y$, and the nested scalar quantized
source, $\hat{x}_i = iq$. Thus the receiver knows the bin index, i, and uses Y to estimate
the bin-group index, j, to isolate the specific sub-bin that X is in. We denote the
estimated bin-group by $\hat{j}$, which corresponds to the sub-bin within bin i that is closest
to Y. The final estimate of X at the receiver is $\hat{X} = \hat{x}_i(y) = (i + N\hat{j}(y - iq))\,q$, where

$$\hat{j}(z) = \left\lfloor \frac{z}{Nq} + \frac{1}{2} \right\rfloor. \qquad (14)$$

Suppose $x = (i + Nj)q + \epsilon$ with $|\epsilon| < \frac{q}{2}$, and $y = x + z$. The receiver computes

$$\hat{j}(y - iq) = \left\lfloor \frac{(i + Nj)q + \epsilon + z - iq}{Nq} + \frac{1}{2} \right\rfloor = j + \left\lfloor \frac{\epsilon + z}{Nq} + \frac{1}{2} \right\rfloor.$$

Thus, if $\epsilon + z < -\frac{Nq}{2}$ or $\epsilon + z > \frac{Nq}{2}$, we will choose $\hat{j}$ incorrectly. That is, if Y lies
further than $\frac{Nq}{2}$ from the scalar quantized version of X, we will choose the incorrect
bin-group. For brevity, we will sometimes denote $\hat{j}(y - iq)$ by simply $\hat{j}$.
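The encoder/decoder pair described above can be sketched as follows. This is my own illustrative implementation of the paper's decoding rule (the transmitted bin index is taken mod N here rather than in $\{-N_L, \ldots, N_U\}$, an equivalent relabeling), showing that the decoder recovers the correct sub-bin whenever $|\epsilon + z| < Nq/2$:

```python
import math

# Illustrative parameters (my own choice): 8 bins, sub-bin width 0.1, bin-group width Nq = 0.8
N, q = 8, 0.1

def encode(x):
    """Nested scalar quantization: transmit only the bin index (here taken mod N)."""
    k = math.floor(x / q + 0.5)   # index of the nearest quantization level
    return k % N

def decode(i, y):
    """Pick the sub-bin of bin i closest to the side information y, per Eq. (14)."""
    jhat = math.floor((y - i * q) / (N * q) + 0.5)
    return (i + N * jhat) * q

x = 2.13                          # source realization
y = x + 0.02                      # side information: |z| well inside Nq/2
i = encode(x)
xhat = decode(i, y)
print(i, xhat)                    # bin index 5, reconstruction 2.1 (within q/2 of x)
```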
For a given realization of the side information, $Y = y$, the distortion is

$$
\begin{aligned}
D(y) &= E[d(X, \hat{X}) \mid Y = y] \\
&= \sum_{i=-N_L}^{N_U} E[d(X, \hat{X}) \mid Y = y,\, X \in R_i]\, \Pr(X \in R_i) \\
&= \sum_{i=-N_L}^{N_U} \int_{x \in R_i} f_{X|Y}(x|y)\,(x - \hat{x}_i(y))^2\, dx \\
&= \sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} \int_{t_{i+Nj}}^{t_{i+Nj+1}} f_{X|Y}(x|y)\,(x - \hat{x}_i(y))^2\, dx \\
&\approx \sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y) \int_{t_{i+Nj}}^{t_{i+Nj+1}} (x - \hat{x}_i(y))^2\, dx. \qquad (15)
\end{aligned}
$$

Note,

$$
\begin{aligned}
\int_{t_{i+Nj}}^{t_{i+Nj+1}} (x - \hat{x}_i(y))^2\, dx &= \frac{q^3}{12} + (\hat{x}_i(y) - (i + Nj)q)^2\, q \\
&= \frac{q^3}{12} + ((i + N\hat{j})q - (i + Nj)q)^2\, q \\
&= \frac{q^3}{12} + (\hat{j} - j)^2 (Nq)^2\, q. \qquad (16)
\end{aligned}
$$
Therefore, at high rates the distortion is given by

$$
\begin{aligned}
D(y) &\approx \sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y) \left[ \frac{q^3}{12} + (\hat{j} - j)^2 (Nq)^2\, q \right] \\
&= \frac{q^2}{12} \sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y)\, q \\
&\quad + (Nq)^2 \sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y)\,(\hat{j} - j)^2\, q. \qquad (17)
\end{aligned}
$$
Also note,

$$
\begin{aligned}
\sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y)\, q &\approx \sum_{j=-\infty}^{\infty} \int_{-N_L q}^{N_U q} f_{X|Y}(x + Nqj|y)\, dx \\
&= \sum_{j=-\infty}^{\infty} \int_{(Nj - N_L)q}^{(Nj + N_U)q} f_{X|Y}(x|y)\, dx \\
&\approx \int_{-\infty}^{\infty} f_{X|Y}(x|y)\, dx \\
&= 1. \qquad (18)
\end{aligned}
$$

Furthermore, since $\hat{j}(z + Nqk) = \hat{j}(z) + k$ for any integer k, plugging (18) into (17) gives

$$
\begin{aligned}
D(y) &= \frac{q^2}{12} + (Nq)^2 \sum_{i=-N_L}^{N_U} \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y)\,(\hat{j} - j)^2\, q \\
&\approx \frac{q^2}{12} + (Nq)^2 \sum_{j=-\infty}^{\infty} \int_{-N_L q}^{N_U q} f_{X|Y}(x + Nqj|y)\,(\hat{j}(y - x) - j)^2\, dx \\
&= \frac{q^2}{12} + (Nq)^2 \int_{-\infty}^{\infty} f_{X|Y}(x|y)\,(\hat{j}(y - (x - Nqj)) - j)^2\, dx \\
&= \frac{q^2}{12} + (Nq)^2 \int_{-\infty}^{\infty} f_{X|Y}(x|y)\, \hat{j}(y - x)^2\, dx \\
&= \frac{q^2}{12} + (Nq)^2\, E_{X|Y}\!\left[\hat{j}(Y - X)^2 \,\middle|\, Y = y\right]. \qquad (19)
\end{aligned}
$$

The average distortion over all realizations of Y is

$$
\begin{aligned}
D &= E_Y[D(Y)] \\
&= \frac{q^2}{12} + (Nq)^2\, E_{X,Y}\!\left[\hat{j}(Y - X)^2\right] \\
&= \frac{q^2}{12} + (Nq)^2\, E_Z\!\left[\hat{j}(Z)^2\right]. \qquad (20)
\end{aligned}
$$

We may view the first term in (20) as distortion due to scalar quantization. The
second term is distortion due to choosing the wrong bin-group.
In addition, notice

$$
\begin{aligned}
E[\hat{j}(Z)^2] &= \int_{-\infty}^{\infty} f_Z(z)\, \hat{j}(z)^2\, dz \\
&= \sum_{j=-\infty}^{\infty} \int_{Nq(j-\frac{1}{2})}^{Nq(j+\frac{1}{2})} f_Z(z)\, \hat{j}(z)^2\, dz \\
&= \sum_{j=-\infty}^{\infty} \int_{Nq(j-\frac{1}{2})}^{Nq(j+\frac{1}{2})} f_Z(z)\, j^2\, dz \\
&= \sum_{j=-\infty}^{\infty} j^2 \int_{Nq(j-\frac{1}{2})}^{Nq(j+\frac{1}{2})} f_Z(z)\, dz \\
&= \sum_{j=-\infty}^{\infty} j^2 \left[ Q\!\left(\frac{Nq}{\sigma_Z}\left(j - \frac{1}{2}\right)\right) - Q\!\left(\frac{Nq}{\sigma_Z}\left(j + \frac{1}{2}\right)\right) \right], \qquad (21)
\end{aligned}
$$
where Q(x) is the Q-function. (21) may be simplified. Define

$$\frac{u(w)}{w^2} \triangleq \sum_{j=-\infty}^{\infty} j^2 \left[ Q\!\left(w\left(j - \frac{1}{2}\right)\right) - Q\!\left(w\left(j + \frac{1}{2}\right)\right) \right] = \lim_{L \to \infty} \sum_{j=-L}^{L} j^2 \left[ Q\!\left(w\left(j - \frac{1}{2}\right)\right) - Q\!\left(w\left(j + \frac{1}{2}\right)\right) \right] = \lim_{L \to \infty} u_L(w), \qquad (22)$$
where

$$
\begin{aligned}
u_L(w) &= \sum_{j=-L}^{L} j^2\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) - \sum_{j=-L}^{L} j^2\, Q\!\left(w\left(j + \frac{1}{2}\right)\right) \\
&= \sum_{j=-L}^{L} j^2\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) - \sum_{j=-L+1}^{L+1} (j-1)^2\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) \\
&= L^2 \left[ Q\!\left(w\left(-L - \frac{1}{2}\right)\right) - Q\!\left(w\left(L + \frac{1}{2}\right)\right) \right] + \sum_{j=-L+1}^{L} \left(j^2 - (j-1)^2\right) Q\!\left(w\left(j - \frac{1}{2}\right)\right). \qquad (23)
\end{aligned}
$$
For large L, the following approximation is accurate:

$$
\begin{aligned}
u_L(w) &\approx L^2 + \sum_{j=-L+1}^{L} (2j - 1)\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) \\
&= L^2 + \sum_{j=-L+1}^{0} (2j - 1)\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) + \sum_{j=1}^{L} (2j - 1)\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) \\
&= L^2 + \sum_{j=0}^{L-1} (-2j - 1)\left[1 - Q\!\left(w\left(j + \frac{1}{2}\right)\right)\right] + \sum_{j=1}^{L} (2j - 1)\, Q\!\left(w\left(j - \frac{1}{2}\right)\right) \\
&= L^2 - L^2 + \sum_{j=0}^{L-1} (2j + 1)\, Q\!\left(w\left(j + \frac{1}{2}\right)\right) + \sum_{j=0}^{L-1} (2j + 1)\, Q\!\left(w\left(j + \frac{1}{2}\right)\right) \\
&= 2 \sum_{j=0}^{L-1} (2j + 1)\, Q\!\left(w\left(j + \frac{1}{2}\right)\right). \qquad (24)
\end{aligned}
$$

So $\frac{u(w)}{w^2} = 2 \sum_{j=0}^{\infty} (2j + 1)\, Q\!\left(w\left(j + \frac{1}{2}\right)\right)$ and

$$E[\hat{j}(Z)^2] = 2 \sum_{j=0}^{\infty} (2j + 1)\, Q\!\left(\frac{Nq}{\sigma_Z}\left(j + \frac{1}{2}\right)\right). \qquad (25)$$
Finally, the average distortion is given by

$$D = \frac{q^2}{12} + 2(Nq)^2 \sum_{j=0}^{\infty} (2j + 1)\, Q\!\left(\frac{Nq}{\sigma_Z}\left(j + \frac{1}{2}\right)\right). \qquad (26)$$

This is similar to the distortion for ECSQ, with an additional bin-group error term.
Note if $\sigma_Z = 0$ (so $Y = X$), then $Z = 0$, $D = \frac{q^2}{12}$, and there is no bin-group error.

Fig. 11 shows the rate-distortion curve for 1-D nested lattice quantization without
Slepian-Wolf coding for various Nq. Fig. 12 shows the optimal Nq that minimizes
distortion in (26) for fixed $N = 2^R$.

It is interesting to see if the optimal Nq without Slepian-Wolf coding converges
to a constant as $N \to \infty$. Note

$$D = \frac{(Nq)^2}{12}\, 2^{-2R} + 2(Nq)^2 \sum_{j=0}^{\infty} (2j + 1)\, Q\!\left(\frac{Nq}{\sigma_Z}\left(j + \frac{1}{2}\right)\right). \qquad (27)$$

If Nq does converge, then D has a floor of $2(Nq)^2 \sum_{j=0}^{\infty} (2j + 1)\, Q\!\left(\frac{Nq}{\sigma_Z}\left(j + \frac{1}{2}\right)\right)$ as
$N \to \infty$. This floor is minimized (eliminated) only as $Nq \to \infty$, thus Nq does not
converge to a constant.
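Equation (26) can be checked against a direct simulation of the bin-group decision rule (14). The sketch below (illustrative parameters of my own choosing) compares the truncated series to a Monte Carlo estimate of $E[\hat{j}(Z)^2]$:

```python
import math, random

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(2)
N, q, sigma_z = 8, 0.05, 0.1     # illustrative values; Nq/sigma_z = 4
Nq = N * q

# Truncated series from Eq. (26); the Q terms decay extremely fast
D_analytic = q**2 / 12 + 2 * Nq**2 * sum(
    (2 * j + 1) * Q(Nq / sigma_z * (j + 0.5)) for j in range(50))

# Monte Carlo over Z, using the bin-group estimate jhat(z) = floor(z/Nq + 1/2)
n = 400_000
acc = 0.0
for _ in range(n):
    z = random.gauss(0.0, sigma_z)
    jhat = math.floor(z / Nq + 0.5)
    acc += jhat * jhat
D_mc = q**2 / 12 + Nq**2 * (acc / n)
print(D_analytic, D_mc)
```

The two values agree to within sampling error, confirming the series form of the bin-group error term.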
Figure 11: Rate-distortion curve $D_{NSQ}(R)$ versus $R = \log_2 N$ (bits/c.u.) for 1-D nested lattice quantization without Slepian-Wolf coding, $\sigma_Z^2 = 0.01$, for $Nq = 5\sigma_Z$, $7.5\sigma_Z$, $10\sigma_Z$, $12.5\sigma_Z$, and $15\sigma_Z$.
Figure 12: Optimal $Nq/\sigma_Z$ versus $N = 2^R$ for 1-D nested lattice quantization without Slepian-Wolf coding.
3.2 Achievable Rate with Slepian-Wolf Coding
Once X is processed by a nested scalar quantizer, we encode $\hat{X}$ losslessly with
minimum rate. Knowing that the decoder will also have side information Y, recall from
Section 2.2 that this is a Slepian-Wolf problem, and that the minimum achievable
rate for source X is $R = H(\hat{X}|Y)$. For a given realization of the side information, $Y = y$,

$$H(\hat{X}|Y = y) = -\sum_{i=-N_L}^{N_U} p_i(y) \log_2 p_i(y), \qquad (28)$$

where for high rate,

$$
\begin{aligned}
p_i(y) &= \int_{x \in R_i} f_{X|Y}(x|y)\, dx = \sum_{j=-\infty}^{\infty} \int_{t_{i+Nj}}^{t_{i+Nj+1}} f_{X|Y}(x|y)\, dx \\
&\approx \sum_{j=-\infty}^{\infty} f_{X|Y}((i + Nj)q|y)\, q, \qquad (29)
\end{aligned}
$$

for $i = -N_L, \ldots, N_U$. Define f(x) to be the $\mathcal{N}(0, 1)$ density. Then $f_{X|Y}(x|y) = \frac{1}{\sigma_{X|Y}} f\!\left(\frac{x - y}{\sigma_{X|Y}}\right)$
and

$$p_i(y) = \frac{q}{\sigma_{X|Y}} \sum_{j=-\infty}^{\infty} f\!\left(\frac{(i + Nj)q - y}{\sigma_{X|Y}}\right). \qquad (30)$$

Define $g(x) = \frac{1}{\sigma_{X|Y}} \sum_{j=-\infty}^{\infty} f\!\left(\frac{x + Nqj}{\sigma_{X|Y}}\right)$, so $p_i(y) = g(iq - y)\, q$. Thus, the entropy is approximately
given by

$$
\begin{aligned}
H(\hat{X}|Y = y) &= -\sum_{i=-N_L}^{N_U} g(iq - y)\, q \log_2\left[g(iq - y)\, q\right] \\
&\approx -\int_{-N_L q}^{N_U q} g(x - y) \log_2\left[g(x - y)\, q\right] dx \\
&= -\int_{-N_L q}^{N_U q} g(x - y) \log_2 g(x - y)\, dx - \log_2 q \int_{-N_L q}^{N_U q} g(x - y)\, dx. \qquad (31)
\end{aligned}
$$
Furthermore,

$$
\begin{aligned}
\int_{-N_L q}^{N_U q} g(x - y)\, dx &= \frac{1}{\sigma_{X|Y}} \int_{-N_L q}^{N_U q} \sum_{j=-\infty}^{\infty} f\!\left(\frac{x + Nqj - y}{\sigma_{X|Y}}\right) dx \\
&= \frac{1}{\sigma_{X|Y}} \sum_{j=-\infty}^{\infty} \int_{(Nj - N_L)q}^{(Nj + N_U)q} f\!\left(\frac{x - y}{\sigma_{X|Y}}\right) dx \\
&\approx \frac{1}{\sigma_{X|Y}} \int_{-\infty}^{\infty} f\!\left(\frac{x - y}{\sigma_{X|Y}}\right) dx \\
&= 1. \qquad (32)
\end{aligned}
$$
Plugging (32) into (31), we have for high rates

$$H(\hat{X}|Y = y) = -\int_{-N_L q}^{N_U q} g(x - y) \log_2 g(x - y)\, dx - \log_2 q. \qquad (33)$$

Notice $g(x - Nqk) = g(x)$ for all $k \in \mathbb{Z}$, so g(x) is periodic with period Nq. We simplify
(33) further:

$$
\begin{aligned}
H(\hat{X}|Y = y) &= -\int_{-N_L q}^{N_U q} g(x - y) \log_2 g(x - y)\, dx - \log_2 q \\
&= -\frac{1}{\sigma_{X|Y}} \int_{-N_L q}^{N_U q} \sum_{j=-\infty}^{\infty} f\!\left(\frac{x + Nqj - y}{\sigma_{X|Y}}\right) \log_2 g(x - y)\, dx - \log_2 q \\
&= -\frac{1}{\sigma_{X|Y}} \sum_{j=-\infty}^{\infty} \int_{(Nj - N_L)q}^{(Nj + N_U)q} f\!\left(\frac{x - y}{\sigma_{X|Y}}\right) \log_2 g(x - Nqj - y)\, dx - \log_2 q \\
&\approx -\frac{1}{\sigma_{X|Y}} \int_{-\infty}^{\infty} f\!\left(\frac{x - y}{\sigma_{X|Y}}\right) \log_2 g(x - y)\, dx - \log_2 q \\
&= -\int_{-\infty}^{\infty} f(x) \log_2 g(\sigma_{X|Y}\, x)\, dx - \log_2 q, \qquad (34)
\end{aligned}
$$

where the second-to-last step uses the periodicity of g.
Notice $H(\hat{X}|Y = y)$ is independent of y! Averaging over all side information Y, we have
$H(\hat{X}|Y) = H(\hat{X}|Y = y)$, i.e.,

$$
\begin{aligned}
H(\hat{X}|Y) &= -\int_{-\infty}^{\infty} f(x) \log_2 g(\sigma_{X|Y}\, x)\, dx - \log_2 q \\
&= -\int_{-\infty}^{\infty} f(x) \log_2\left[\frac{1}{\sigma_{X|Y}} \sum_{j=-\infty}^{\infty} f\!\left(x + \frac{Nqj}{\sigma_{X|Y}}\right)\right] dx - \log_2 q \\
&= -\int_{-\infty}^{\infty} f(x) \log_2\left[\sum_{j=-\infty}^{\infty} f\!\left(x + \frac{Nqj}{\sigma_{X|Y}}\right)\right] dx + \log_2 \sigma_{X|Y} - \log_2 q. \qquad (35)
\end{aligned}
$$

In general, $H(\hat{X}|Y)$ must be computed numerically.

3.3 Summary of Main Results
For high rate and assuming efficient entropy coding ($R = H(\hat{X}|Y)$), we have shown

$$D = \frac{q^2}{12} + \sigma_Z^2\, u\!\left(\frac{Nq}{\sigma_Z}\right), \qquad (36)$$

$$R = h\!\left(\frac{Nq}{\sigma_{X|Y}}\right) + \log_2 \sigma_{X|Y} - \log_2 q, \qquad (37)$$

where

$$u(x) = 2x^2 \sum_{j=0}^{\infty} (2j + 1)\, Q\!\left(x\left(j + \frac{1}{2}\right)\right), \qquad (38)$$

$$h(v) = -\int_{-\infty}^{\infty} f(x) \log_2\left[\sum_{j=-\infty}^{\infty} f(x + vj)\right] dx. \qquad (39)$$

Combining D and R through q, we have for NSQ-SW coding:

$$D_{NSQ\text{-}SW}(R) = \frac{1}{12}\, 2^{2h(Nq/\sigma_{X|Y})}\, \sigma_{X|Y}^2\, 2^{-2R} + \sigma_Z^2\, u\!\left(\frac{Nq}{\sigma_Z}\right). \qquad (40)$$
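Since h(v) in (39) (equivalently, $H(\hat{X}|Y)$ in (35)) has no closed form, here is one simple way to evaluate it numerically, a midpoint Riemann-sum sketch of my own (not the author's code); as v grows, h(v) should approach $\frac{1}{2}\log_2 2\pi e$:

```python
import math

def f(x):
    """Standard normal PDF."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def h(v, jmax=40, nsteps=4000, span=8.0):
    """Midpoint-rule evaluation of h(v) = -int f(x) log2[ sum_j f(x + v j) ] dx, Eq. (39)."""
    step = 2 * span / nsteps
    total = 0.0
    for k in range(nsteps):
        x = -span + (k + 0.5) * step
        s = sum(f(x + v * j) for j in range(-jmax, jmax + 1))
        total += f(x) * math.log2(s) * step
    return -total

hv1, hv10 = h(1.0), h(10.0)
target = 0.5 * math.log2(2 * math.pi * math.e)   # the large-Nq limit
print(hv1, hv10, target)
```

For small v the Gaussian replicas overlap heavily and h(v) collapses toward zero; for large v it saturates at the limit above.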
Recall for Wyner-Ziv coding of Gaussian sources, the rate-distortion bound is
given by (7), or $D_{WZ}(R) = \sigma_{X|Y}^2\, 2^{-2R}$. When Nq is large enough for $u\!\left(\frac{Nq}{\sigma_Z}\right)$ to be
negligible, this shows that nested scalar quantization performs $10 \log_{10}\left[\frac{1}{12}\, 2^{2h(Nq/\sigma_{X|Y})}\right]$
dB from Wyner-Ziv coding.
As $Nq \to \infty$, the 1-D nested lattice quantizer becomes a regular uniform scalar
quantizer. We have $u\!\left(\frac{Nq}{\sigma_Z}\right) \to 0$ and $h\!\left(\frac{Nq}{\sigma_{X|Y}}\right) \to \frac{1}{2}\log_2 2\pi e$. This case yields

$$D_{SQ\text{-}SW}(R) = \frac{\pi e}{6}\, \sigma_{X|Y}^2\, 2^{-2R}, \qquad (41)$$

which is 1.53 dB from Wyner-Ziv coding. This result parallels the performance
difference between ECSQ and optimal lossy encoding.

Figs. 13 and 14 show the high rate distortion for NSQ-SW coding with $\sigma_X^2 = 1$ and
various values of Nq, for $\sigma_Z^2 = 0.1$ and $\sigma_Z^2 = 0.01$, respectively. Notice h(v) controls
the horizontal shift of the rate-distortion curve, while u(x) determines the error floor.
3.4 Choosing Nq
Note as Nq decreases, $H(\hat{X}|Y)$ decreases and distortion increases. Thus we are
interested in finding the optimal Nq to trade off rate and distortion.

First, to minimize distortion we must minimize u(x). A plot of u(x) is shown
in Fig. 15. Notice that choosing $Nq \geq 10\sigma_Z$ results in an error floor corresponding
to $u(x) \approx 10^{-5}$. To control the horizontal shift of $D_{NSQ\text{-}SW}(R)$, we consider h(v),
which is shown in Fig. 16. Notice h(v) is upper bounded by $\frac{1}{2}\log_2 2\pi e$, and choosing
$Nq \geq 5\sigma_{X|Y}$ results in a rate very close to the maximum.

Relatively low rates are achievable by using smaller bin-groups, but the distortion
floor starts to dominate for a fixed Nq as we increase the rate. To find the optimal Nq,
we use the lower convex hull of the rate-distortion curves for different Nq. At a
given rate, the optimal Nq corresponds to the rate-distortion curve that lies on the
lower convex hull at that rate. In general, for finite Nq, high rate NSQ-SW actually
performs slightly better than 1.53 dB from Wyner-Ziv. However, this effect becomes
negligible as we increase the rate. Optimal Nq for $\sigma_Z^2 = 0.1$ and $\sigma_Z^2 = 0.01$ are shown in
Fig. 17 and Fig. 18, respectively.

For $Nq \geq 5\sigma_{X|Y}$, where $h(v) \approx \frac{1}{2}\log_2 2\pi e$, we can approximate the best
Nq by choosing Nq just large enough so that, at the target rate, $u\!\left(\frac{Nq}{\sigma_Z}\right)$ is barely
negligible compared to $\frac{1}{12}\, 2^{2h(Nq/\sigma_{X|Y})}\, 2^{-2R}$.
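The thresholds quoted above ($Nq \approx 10\sigma_Z$ for a negligible error floor) are easy to probe numerically; the sketch below (my own illustration) evaluates the truncated series (38):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def u(x, jmax=50):
    """Bin-group error term, Eq. (38), truncated (terms decay super-exponentially)."""
    return 2 * x * x * sum((2 * j + 1) * Q(x * (j + 0.5)) for j in range(jmax))

print(u(10.0))    # small (order 1e-5 to 1e-4)
print(u(15.0))    # many orders of magnitude smaller still
```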
Figure 13: High rate $D_{NSQ\text{-}SW}(R)$ versus R (bits/c.u.) for NSQ-SW with $\sigma_X^2 = 1$ and $\sigma_Z^2 = 0.1$, for $Nq = 5\sigma_{X|Y}$, $7.5\sigma_{X|Y}$, $10\sigma_{X|Y}$, $12.5\sigma_{X|Y}$, $15\sigma_{X|Y}$, and $Nq \to \infty$.
Figure 14: High rate $D_{NSQ\text{-}SW}(R)$ versus R (bits/c.u.) for NSQ-SW with $\sigma_X^2 = 1$ and $\sigma_Z^2 = 0.01$, for $Nq = 5\sigma_{X|Y}$, $7.5\sigma_{X|Y}$, $10\sigma_{X|Y}$, $12.5\sigma_{X|Y}$, $15\sigma_{X|Y}$, and $Nq \to \infty$.
Figure 15: Choosing Nq: u(x) versus $x = Nq/\sigma_Z$.
Figure 16: Choosing Nq: h(v) versus $v = Nq/\sigma_{X|Y}$, together with the upper bound $\frac{1}{2}\log_2 2\pi e$.
Figure 17: Optimal $Nq/\sigma_{X|Y}$ versus R (bits/c.u.) for $\sigma_X^2 = 1$ and $\sigma_Z^2 = 0.1$.
Figure 18: Optimal $Nq/\sigma_{X|Y}$ versus R (bits/c.u.) for $\sigma_X^2 = 1$ and $\sigma_Z^2 = 0.01$.
4 Conclusion
We have shown that high rate NSQ-SW coding for Gaussian sources performs 1.53
dB from Wyner-Ziv coding, which is the same gap as between ECSQ and optimal lossy
compression. We also computed the rate-distortion function for NSQ-SW for high
data rates and fixed bin-group widths. An interesting problem to pursue beyond this
paper is the performance of 2-D or higher dimensional nested lattice quantization
with Slepian-Wolf coding.
References
[1] H. Gish and J. N. Pierce, "Asymptotically efficient quantizing," IEEE Trans.
Inform. Theory, vol. 14, pp. 676-683, Sep. 1968.

[2] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources,"
IEEE Trans. Inform. Theory, vol. 19, pp. 471-480, July 1973.

[3] A. Wyner and J. Ziv, "The rate-distortion function for source coding with side
information at the decoder," IEEE Trans. Inform. Theory, vol. 22, pp. 1-10,
Jan. 1976.

[4] D. S. Taubman and M. W. Marcellin, JPEG 2000: Image Compression Fundamentals,
Standards and Practice. Kluwer Academic Publishers, Norwell, Massachusetts, 2002.

[5] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley, New
York, 1991.