
892 IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. COM-27, NO. 6, JUNE 1979

Polar Quantization of a Complex Gaussian Random Variable

WILLIAM A. PEARLMAN, MEMBER, IEEE

Abstract-We solve numerically the optimum fixed-level non-uniform and uniform quantization of a circularly symmetric complex (or bivariate) Gaussian random variable for the mean absolute squared error criterion. For a given number of total levels, we determine its factorization into the product of numbers of magnitude and phase levels that produces the minimum distortion. We tabulate the results for numbers of "useful" output levels up to 1024, giving their optimal factorizations, minimum distortion, and entropy. For uncoded quantizer outputs, we find that the optimal splitting of rate between magnitude and phase, averaging to 1.52 and 1.47 bits more in the phase angle than magnitude for optimum and uniform quantization, respectively, compares well with the optimal polar coding formula of 1.376 bits of Pearlman and Gray [1]. We also compare the performance of polar to rectangular quantization by real and imaginary parts for both uncoded and coded output levels. We find that, for coded outputs, both polar quantizers are outperformed by the rectangular ones, whose distortion-rate curves nearly coincide with Pearlman and Gray's polar coding bound. For uncoded outputs, however, we determine that the polar quantizers surpass in performance their rectangular counterparts for all useful rates above 6.0 bits for both optimum and uniform quantization. Below this rate, the respective polar quantizers are either slightly inferior or comparable.

I. INTRODUCTION

Our purpose is to present comprehensive and exact numerical solutions of the optimum uniform and nonuniform polar quantizations of a circularly symmetric (bivariate) complex Gaussian random variable using the error criterion of mean absolute squared difference. This random variable has independent and identically distributed real and imaginary parts. Although a special case of the general bivariate Gaussian, it is one that arises quite often. The discrete Fourier transforms of complex and real stationary N-sequences (aside from the real-valued zero and N/2 terms for the real sequence) consist of elements with identically distributed and uncorrelated real and imaginary parts which approach Gaussian in distribution as N becomes large [1]. Part of an optimal procedure for encoding a general bivariate (or multivariate) random variable is first to change to a co-ordinate system where the components are uncorrelated [1, 2]. The only real restriction in this case then is the identical distribution of the components, which is by far the more common occurrence. By polar quantization we mean quantization by magnitude and phase angle in contrast to the more common rectangular quantization by real and imaginary parts. Polar quantization has had applications in computer holography, DFT encoding, image processing, and communications [3, 4, 5]. We consider both optimum non-uniform and optimum uniform quantizations, which, for the sake of brevity, we call "optimum" and "uniform," respectively. We shall subsequently show that fixed-level optimum or uniform polar quantization surpasses the performance of the corresponding

Paper approved by the Editor for Communication Theory of the IEEE Communications Society for publication without oral presenta- tion. Manuscript received June 30, 1978; revised November 20, 1978. This work was supported by the Graduate School, University of WiS- consin-Madison, Madison, WI.

The author is with the Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706.

optimal rectangular quantization (pair of Max [6] quantizers) for nearly all numbers of quantization output levels. This result has never before been corroborated, but has been hinted at by other workers. Recently, Gallagher [4] calculated, for the one case of about 400 total levels, that the optimum (non-uniform) and (optimum) uniform polar quantizers are both superior to their optimum rectangular counterparts using a pair of identical Max quantizers [6]. Pearlman and Gray [1] have derived optimal performance bounds for polar encoding and conclude that polar quantizing schemes may exist which surpass in performance the corresponding rectangular Max schemes. Powers [3] has used a convenient suboptimal polar quantizing scheme whose performance approached, but did not surpass, optimal rectangular quantization.

We address the cases of polar and rectangular quantization through no claims of optimality for either or both, but for the reason that they are certainly the most convenient. Their co-ordinate systems are the natural ones for expressing a complex variable, and the coordinates in each of these systems are statistically independent for the random variable under consideration. The optimal quantization scheme may involve decision regions in the complex plane which are not expressible as constant values of the coordinates in either of these coordinate systems [7]. We do not touch upon this issue here.

A comprehensive solution to the polar quantization problem is the specification of the decision and output levels for every value of magnitude and phase angle, the minimum distortion (mean absolute squared error), and the entropy. We are able to provide these quantities for any number of levels in optimal fixed-level quantization for both optimum and uniform quantizations. Furthermore, we must solve the optimal allocation of levels between magnitude and phase angle for any given number of total levels. Gallagher [4] provided the optimal allocation of 12 magnitude and 33 phase levels for the one case of 396 total levels in both optimum and uniform polar quantization. Pearlman and Gray [1] have found that the phase angle should receive 1.376 bits more rate than the magnitude in an optimal polar encoding scheme. Gallagher's result for fixed level quantization (non-entropy coded) corresponds to 1.46 bits more in the phase than the magnitude. Here we have provided the optimal numbers of magnitude and phase levels for total numbers of useful levels from 1 to 1024 in both optimum non-uniform and uniform polar quantization. The definition of useful levels involves acceptable values of distortion and will be explained in Section IV. For every useful level in optimum and uniform quantization, we have also calculated the distortion and the entropy. From these numbers we have generated graphs which show the superiority of polar over rectangular quantization in fixed-level schemes, but its inferiority in entropy coded schemes. Furthermore, we are able to conclude by comparison with the polar coding bound of Pearlman and Gray [1] that if you desire nearly optimum performance in the distortion-rate sense (minimum distortion for a fixed entropy) then you can obtain it much more easily with rectangular quantization.

II. OPTIMUM POLAR QUANTIZATION

We turn now to the mathematical formulation of optimum polar quantization under the distortion criterion of minimum mean absolute squared error. We divide the complex plane into



N regions R_mp = {z = re^{iθ}: r_{m−1} < r ≤ r_m, θ_{p−1} < θ ≤ θ_p} for m = 1, 2, ..., M and p = 1, 2, ..., P. Here r and θ are polar coordinates in the plane, r_0 = 0, r_M = ∞, θ_0 = 0, θ_P = 2π, and N = MP. A fixed N-level polar quantizer of a complex random variable Z with magnitude R and phase angle Θ and probability density q(z) = g(r)h(θ) is a mapping of z = re^{iθ} to ẑ = r̂_m e^{iθ̂_p} whenever z belongs to R_mp. The sets {(r_m, θ_p)}, m = 0, ..., M, p = 0, ..., P, and {(r̂_m, θ̂_p)}, m = 1, ..., M, p = 1, ..., P, are, respectively, the decision and output levels of the quantizer. We choose the mapping to minimize

D_z(N) = E[|Z − Ẑ|²] = Σ_{m=1}^{M} Σ_{p=1}^{P} ∫∫_{R_mp} |re^{iθ} − r̂_m e^{iθ̂_p}|² g(r) h(θ) dr dθ,   (1)

where g(r) = (r/σ²) exp(−r²/2σ²), r ≥ 0, is the Rayleigh density of the magnitude, h(θ) = 1/2π, 0 ≤ θ < 2π, is the uniform density of the phase angle, and N = MP. The variance of Z is 2σ². The necessary conditions for a solution are that

∂D_z/∂θ_p = 0,   (2a)      ∂D_z/∂θ̂_p = 0,   (2b)
∂D_z/∂r_m = 0,   (2c)      ∂D_z/∂r̂_m = 0.   (2d)

The conditions (2a) and (2b) on the phase angle's decision and output levels yield

θ_p = 2πp/P,   p = 0, 1, ..., P,   (3a)
θ̂_p = (2p − 1)π/P,   p = 1, 2, ..., P,   (3b)

which is the anticipated uniform quantization. The conditions (2c) and (2d) on the magnitude's decision and output levels at the stationary phase angle levels of (3a) and (3b) are

r_m = (r̂_m + r̂_{m+1}) / (2 S(P)),   m = 1, 2, ..., M − 1,   (3c)
r̂_m = S(P) ∫_{r_{m−1}}^{r_m} r g(r) dr / ∫_{r_{m−1}}^{r_m} g(r) dr,   m = 1, 2, ..., M,

with

S(P) ≡ sinc(1/P) = sin(π/P)/(π/P).   (3d)

These solutions have been obtained previously by Powers [3], Gallagher [4] and Senge [8] among others. If we define the scaled output levels

r̄_m = r̂_m / sinc(1/P),   m = 1, 2, ..., M,   (4)

then the conditions (3c) and (3d) become

r_m = (r̄_m + r̄_{m+1}) / 2,   m = 1, 2, ..., M − 1,   (5a)
r̄_m = ∫_{r_{m−1}}^{r_m} r g(r) dr / ∫_{r_{m−1}}^{r_m} g(r) dr,   m = 1, 2, ..., M.   (5b)

The scaled output levels r̄_m are the centroids of the Rayleigh probability density in their respective decision ranges r_{m−1} < r ≤ r_m. For a given number M of magnitude output levels, we can solve for the decision levels r_m and the scaled output levels r̄_m without regard to the number P of phase output levels through Equations (5a) and (5b). Then for any given P we can determine the output magnitude levels through Equation (4). The decision levels r_m are independent of the number of phase levels. In fact, Equations (5) (along with r_0 = 0 and r_M = ∞) are the same conditions for solving the Rayleigh magnitude quantization alone. This problem has already been solved by Pearlman and Senge [9], who derived a new algorithm for the solution and presented a complete tabulation of decision levels, output levels, minimum distortions, and entropies for M = 1 to 64. So these results can be used directly for r_m and r̄_m. It turns out that the minimum distortions can be put to use here as well. The magnitude distortion corresponding to the solution in (5) is

D_r(M) = Σ_{m=1}^{M} ∫_{r_{m−1}}^{r_m} (r − r̄_m)² g(r) dr.   (6)

When we calculate the distortion in Equation (1) at the stationary points of Equations (3), we find that

D_z(N) = D_r(M) + [1 − S²(P)] C(M),   (7)

with

C(M) = Σ_{m=1}^{M} r̄_m² ∫_{r_{m−1}}^{r_m} g(r) dr.

It is now evident that for a given M it is only necessary to calculate the decision levels, output levels, the magnitude distortion, and the quantity C(M) for the Rayleigh density g(r) only. (In the limit as M approaches infinity, we have lim_{M→∞} C(M) = ∫_0^∞ r² g(r) dr = 2σ².) Then for any P such that P = N/M we can substitute into Equation (7) to find the total distortion for complex Gaussian polar quantization.


The minimality of D_r(M) is guaranteed by Fleischer's sufficient condition [10] that ln g(r) is convex downward. Minimality of the total distortion D_z(N) is assured by the fact that the (Hessian) matrix of all second-order partial derivatives of D_z(N) with respect to the decision and output variables r_m, r̂_m at the stationary points in Equations (3)-(5) is positive definite.
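The Rayleigh magnitude solution and the quantities D_r(M) and C(M) come from the algorithm and tables of Pearlman and Senge [9]; as a rough illustration of the computation just described (not the authors' program), the following Python sketch iterates the Lloyd-Max conditions (5a)-(5b) numerically and then evaluates Equation (7). The choice σ = 1, the fixed iteration count, and the starting grid are assumptions made only for this sketch.

import numpy as np
from scipy import integrate

def rayleigh_pdf(r, sigma=1.0):
    # Rayleigh density g(r) of the magnitude; E[R^2] = 2*sigma^2.
    return (r / sigma**2) * np.exp(-r**2 / (2.0 * sigma**2))

def centroid(a, b, sigma=1.0):
    # Centroid of g(r) on (a, b); b may be np.inf (Eq. (5b)).
    num = integrate.quad(lambda r: r * rayleigh_pdf(r, sigma), a, b)[0]
    den = integrate.quad(lambda r: rayleigh_pdf(r, sigma), a, b)[0]
    return num / den

def rayleigh_lloyd_max(M, sigma=1.0, iters=200):
    # Decision levels r_0..r_M (r_0 = 0, r_M = inf) and scaled outputs rbar_1..rbar_M.
    r = np.linspace(0.0, 4.0 * sigma, M + 1)
    r[-1] = np.inf
    for _ in range(iters):
        rbar = np.array([centroid(r[m], r[m + 1], sigma) for m in range(M)])
        r[1:M] = 0.5 * (rbar[:-1] + rbar[1:])                      # Eq. (5a)
    Dr = sum(integrate.quad(lambda x, m=m: (x - rbar[m])**2 * rayleigh_pdf(x, sigma),
                            r[m], r[m + 1])[0] for m in range(M))  # Eq. (6)
    C = sum(rbar[m]**2 * integrate.quad(rayleigh_pdf, r[m], r[m + 1], args=(sigma,))[0]
            for m in range(M))                                     # C(M)
    return r, rbar, Dr, C

def optimum_polar_distortion(M, P, sigma=1.0):
    # Total distortion of Eq. (7): D_z(N) = D_r(M) + (1 - S(P)^2) * C(M).
    _, _, Dr, C = rayleigh_lloyd_max(M, sigma)
    S = np.sinc(1.0 / P)        # numpy sinc(x) = sin(pi*x)/(pi*x), i.e. S(P) at x = 1/P
    return Dr + (1.0 - S**2) * C

if __name__ == "__main__":
    print(optimum_polar_distortion(12, 33) / 2.0)   # D_z(N)/(2*sigma^2) for M = 12, P = 33

The printed value is the normalized distortion for the factorization M = 12, P = 33 (N = 396), the case treated by Gallagher [4].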

III. UNIFORM POLAR QUANTIZATION

For uniform polar quantization we demand that, apart from a possibly semi-infinite end interval, both the decision and output levels of the magnitude be equally spaced. The optimal phase levels are already uniformly spaced as given in Equations (3a) and (3b). If we substitute that solution directly into the total mean-squared error distortion of (1) and integrate over θ, we obtain:

D_z(N) = Σ_{m=1}^{M} ∫_{r_{m−1}}^{r_m} [r² + r̂_m² − 2 S(P) r r̂_m] g(r) dr.   (8)

We must minimize D_z(N) under the constraints that

r_m = mh,   m = 1, 2, ..., M − 1,
r̂_m = (m − ½)h,   m = 1, 2, ..., M,   (9)

where r_0 = 0, r_M = ∞, and N = MP. According to these constraints, we need only determine the interval size h which minimizes D_z(N) in (8). The necessary condition is that the derivative of D_z(N) with respect to h vanish,

dD_z(N)/dh = 0.   (10)

Written out, this condition involves S(P), the interval probabilities

Q(m) = ∫_{(m−1)h}^{mh} g(r) dr,

each Q(m) being the probability of the mth magnitude decision interval, and the centroids r̄_m as defined in (5b) with r_{m−1} = (m − 1)h and r_m = mh. The sufficient condition for the solution of (10) to yield a minimum distortion is that the second derivative d²D_z(N)/dh², which we calculate as (11), be positive. The distortion in (8) at the stationary points of (10) is

D_z(N) = 2σ² − Σ_{m=1}^{M} Q(m)[(2m − 1)h S(P) r̄_m − (m − ½)² h²]
       = 2σ² − h² Σ_{m=1}^{M} (m − ½)² Q(m).   (12)

The optimal interval size h in Equation (10) can easily be solved by computer through a one-dimensional Newton-Raphson technique. Such computations reveal that the second derivative in (11) is always positive, so that a minimum of D_z(N) is always obtained. Although this uniform quantization is easy to implement, since the interval size specifies all the decision and output levels, it is more difficult than the optimum (non-uniform) quantization in one aspect. Not only is the optimal h a function of the number M of magnitude levels, it is also a function of the number P of phase levels, as seen in Equation (10), where the value of P in S(P) must be substituted in order to solve for h. For larger values of P one is tempted to approximate S(P) by one with little loss of accuracy in the resulting h. In fact, Gallagher [4] used this approximation in his calculations. Then, we might use the values of h given by Pearlman and Senge [9] for Rayleigh uniform quantization. Nevertheless, the best approximation at P = 64 gave errors in h up to .005, where we require our numerical errors to be less than 5 × 10^-. We therefore did not use any approximations, so that we could obtain solutions as accurate as possible. For non-uniform quantization, we could solve for all the decision and output levels for a given M without knowledge of P. For any given M in uniform quantization, a different optimal interval size must be specified for each given value of P. Since it necessarily involves such a lengthy listing, we do not present here a table of the optimal interval sizes for every value of M and P. We shall present a table (Table 2) of optimal interval sizes for useful values of M and P, as explained in the next section. Once an optimal value of h is known for given M and P, the distortion D_z(N) is calculated through the second expression in (12), which does not depend explicitly on P.
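As a rough check on this procedure (again not the authors' program), the optimal h for a given M and P can be found by minimizing D_z(N) of Equation (8) under the constraints (9) directly with a one-dimensional search, rather than by the Newton-Raphson iteration on (10). The search bracket for h, σ = 1, and the example factorization are assumptions of this sketch.

import numpy as np
from scipy import integrate, optimize

def rayleigh_pdf(r, sigma=1.0):
    return (r / sigma**2) * np.exp(-r**2 / (2.0 * sigma**2))

def uniform_polar_distortion(h, M, P, sigma=1.0):
    # D_z(N) of Eq. (8) with the uniform levels of Eq. (9):
    # decision levels r_m = m*h (last interval extends to infinity),
    # output levels rhat_m = (m - 1/2)*h.
    S = np.sinc(1.0 / P)                       # S(P) = sin(pi/P)/(pi/P)
    edges = [m * h for m in range(M)] + [np.inf]
    D = 0.0
    for m in range(1, M + 1):
        rhat = (m - 0.5) * h
        integrand = lambda r, rhat=rhat: (r**2 + rhat**2 - 2.0 * S * r * rhat) * rayleigh_pdf(r, sigma)
        D += integrate.quad(integrand, edges[m - 1], edges[m])[0]
    return D

M, P = 12, 33
res = optimize.minimize_scalar(uniform_polar_distortion, bounds=(1e-3, 2.0),
                               args=(M, P), method="bounded")
print(res.x, uniform_polar_distortion(res.x, M, P) / 2.0)   # optimal h and D_z(N)/(2*sigma^2)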

IV. OPTIMAL DIVISION OF MAGNITUDE AND PHASE LEVELS

The previous solutions for optimum and uniform magnitude-phase angle quantization do not reveal how to select the numbers of magnitude and phase angle levels, M and P, respectively, that produce the smallest distortion for any given total number of levels N = MP. Our purpose now is to find these optimal choices of M and P, the resulting minimum distortion, and the entropy for any given N. The only exact results reported previously have been for the single case of N = 396 by Gallagher [4], who found that M = 12 and P = 33 (P/M = 2.75) yield a minimum normalized distortion of 5.95 × 10⁻³ for optimum quantization and 6.67 × 10⁻³ for uniform quantization. These distortion values are each less than that obtained by the corresponding optimal (Max [6]) quantization of real and imaginary parts. The result of quantizing the phase angle more finely than the magnitude has been noted previously by several workers in holography and image processing [3, 5, 11, 12, 13]. Among them is Powers [3], who obtained suboptimal solutions for the decision and output magnitude levels for some values of N less than 400, as he split the magnitude into equiprobable decision intervals. In all cases, consistent with Gallagher, the number of phase levels exceeded the number of magnitude levels, the minimum ratio being 2.46. Pearlman and Gray [1] have derived through distortion-rate theory formulas for optimal encoding of the logarithm of the magnitude and the phase angle. This scheme yields a mean-squared error distortion D_z of z = re^{iθ} which is nearly minimal. It provides a lower bound to this distortion of

D_z(R)/2σ² > 1 − exp{−(1.781)·2^(−R)}   for R > 1.376 bits   (13)


with R the information rate for the complex variable. The bound is quite tight for rates above 4.2 bits. The optimum choices for the information rates of the phase angle R_θ and the magnitude R_r must satisfy

R_θ − R_r = 1.376 bits.   (14)

For fixed level quantization, this splitting of rates corresponds to a P/M ratio of 2.60. We shall compare our subsequent quantization results with these optimal formulas both in the entropy-coded output level case and in the uncoded case, where we can make the association

N = 2^R,   M = 2^{R_r},   and   P = 2^{R_θ}.

Here N, M, and P may not be integers, whereas in actual quantization they must be.
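For example, the quoted P/M ratio of 2.60 is just (14) converted from a rate difference in bits to a ratio of level counts:

P/M = 2^{R_θ − R_r} = 2^{1.376} ≈ 2.60.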

The first task is to find the values of M and P that give the minimum distortion D_z(N) for any given value of N in both optimum and uniform polar quantization. Let us consider optimum quantization first. For any given value of N, we calculate D_z(N) by Equation (7) for all integers M and P such that N = MP. (Determining all the MP factorizations of N requires little calculation and search on the computer since we need to try only integer values of M no larger than √N.) The magnitude distortion D_r(M) and the quantity C(M) are provided through the Rayleigh quantization program. We then note the minimum distortion and the factorization for the given value of N. We determined the minimum distortion and optimal (integral) factorizations in this manner for all N from 1 to 1024. We can now construct a table of optimal factorizations and minimum distortions for each value of N. Such a table reveals that the distortion varies with N rather erratically. Certainly one finds the smaller distortions at some of the larger values of N, but also rather large distortions at other large values of N. The reason is that many values of N do not allow the degree of flexibility needed for factoring into a product of integers close to the optimum. For example, a prime number N allows only two factorizations, M = N, P = 1 and M = 1, P = N. Inevitably, the smaller distortion, which results from the larger number of phase levels (M = 1, P = N), is larger than minimum distortions obtained with a smaller N. There are many values of N having relatively few factorizations that produce larger minimum distortions than those belonging to lower N numbers. Such values of N are not considered to be useful for quantization purposes and are expunged from our table. In fact, based on previous work [1, 3], it is safe to assume that a distortion for a value of N greater than 2 is neither minimal for that N nor useful whenever the number of phase levels does not exceed the number of magnitude levels by at least a factor of 2. We used this information to avoid the calculation of D_r(M) for all values of M up to 1024. If the factorization routine for a given N called for an M greater than 64, we automatically set the associated D_z(N) to an arbitrarily high value which was subsequently eliminated in seeking either the minimum D_z(N) for that N or the useful factorizations for all N.
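A compact sketch of this factorization search (one plausible rendering, not the authors' code) follows. The containers Dr[M] and C[M], holding D_r(M) and C(M) from the Rayleigh quantization step, are assumed inputs, and M is capped at 64 as described above (by skipping such factorizations rather than assigning them a huge distortion).

import math

def best_factorization(N, Dr, C, M_max=64):
    # For each divisor pair (M, P) with N = M*P, evaluate Eq. (7) and keep the
    # split with the smallest total distortion. Dr[M] and C[M] are the magnitude
    # distortion and C(M) of the optimum Rayleigh quantizer (assumed precomputed).
    best = (float("inf"), None, None)              # (distortion, M, P)
    for d in range(1, math.isqrt(N) + 1):
        if N % d:
            continue
        for M, P in ((d, N // d), (N // d, d)):
            if M > M_max:
                continue
            S = math.sin(math.pi / P) / (math.pi / P)   # S(P) = sin(pi/P)/(pi/P)
            D = Dr[M] + (1.0 - S * S) * C[M]            # Eq. (7)
            if D < best[0]:
                best = (D, M, P)
    return best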

Factorization inflexibility causes other values of N to have relatively low utility compared to others, and hence they are not considered to be useful, either. These values are also expunged from the table. There are two utility criteria. The first is that a unit increment in N does not produce enough of a decrease in distortion, and the second is that it produces too much. In the first case we eliminate the higher value because it does not pay off in enough of a distortion decrease. The decrease is considered sufficient if the ratio of the absolute difference in logarithmic distortion to the increase in rate is greater than 0.5. In the second case, the payoff in distortion decrease is so large for incrementing the number of levels by just one that we consider it mandatory to use the next level and to eliminate the initial level. When the previous ratio exceeds 2.0, the initial level is eliminated and the incremental level is retained. Such tests for usefulness are applied at every value of N. We call this entire procedure our level sorting algorithm. The final table of factorizations and minimum distortions for each "useful" value of N appears in Table 1. The entropies of the quantization levels for each of these values of N also have been calculated and are presented in the table.
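The two utility tests can be sketched as follows (one reading of the description above, not the authors' program). Here candidates is the ascending list of distinct level numbers surviving the factorization step, and D maps each one to its minimum distortion.

import math

def sort_useful_levels(candidates, D, low=0.5, high=2.0):
    # Keep a level number only if its decrease in log2-distortion per bit of added
    # rate, relative to the previously retained level, lies between the thresholds:
    # below `low` the new level is not worth its extra rate and is discarded;
    # above `high` the new level is mandatory and the previous level is dropped.
    useful = []
    for N in sorted(candidates):
        keep = True
        while useful:
            N0 = useful[-1]
            ratio = abs(math.log2(D[N0]) - math.log2(D[N])) / (math.log2(N) - math.log2(N0))
            if ratio > high:          # payoff so large that N0 is eliminated
                useful.pop()
                continue
            if ratio < low:           # too little payoff: N itself is eliminated
                keep = False
            break
        if keep:
            useful.append(N)
    return useful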

The same philosophy for computing minimum distortions for each N and sorting out the useful values of N applies to uniform polar quantization. We first must solve for h in Equation (10) for all combinations of M and P that allow us to find the minimum distortions D_z(N) in Equation (12) for all N from 1 to 1024. At first, it may seem that we have to find h for all M and P from 1 to 1024. Based on previous experience, we know that a minimum distortion is always obtained for at least twice as many phase levels as magnitude levels. Therefore, we solved for h only for M from 1 to 32 and P from 1 to 64. When we obtained a factorization of N that called for M or P greater than 32 or 64, respectively, we set its distortion to some high value which would be automatically eliminated by the level sorting algorithm. For every M and P where a solution for h is found, we substitute into D_z(N) in Equation (12) to find the associated quantization distortion. We then find the minimum distortions for each N by searching the distortions for each factorization and sort out the useful values of N by the same procedure described for optimum quantization. In Table 2 are presented the optimum factorizations, minimum distortions, and entropies of the quantization levels for each useful value of N. Listed also in Table 2 is the optimal interval size for the magnitude quantization at every useful value of N. The minimum distortions for optimum and uniform polar quantization in Tables 1 and 2 are plotted versus the total number of quantization levels in Figure 1. As expected, non-uniform quantization yields smaller distortions for a given number N of levels, but the difference is rather small for all N and indistinguishable at small N, where the larger number of phase levels, which are uniformly quantized in both cases, dominates the quantization effects.

V. COMPARISON TO OTHER SCHEMES

We wish now to compare our polar quantization results

with corresponding ones for rectangular (real and imaginary part) quantization and with optimal encoding bounds. The minimum mean squared error fixed-level optimum and uniform quantizations of a real Gaussian random variable have been solved by Max [6]. Since the real and imaginary parts of a complex Gaussian variable are independent and identically distributed real Gaussian variables, the optimal quantization procedure is a pair of identical, independent optimal quantizations of the real and imaginary parts. For fixed level schemes, we would use Max quantizers with the same number of levels. Only when the number N of total levels is a perfect square is it a product of identical integral factors. We guess that we should only accept integral factorizations that give as close to identical factors as possible, but we really do not know how close they need to be.


TABLE 1
USEFUL LEVELS IN OPTIMUM POLAR QUANTIZATION
(Columns: total number of levels, magnitude levels, phase levels, normalized distortion, entropy in bits.)

TABLE 2
USEFUL LEVELS IN UNIFORM POLAR QUANTIZATION
(Columns: total number of levels, magnitude levels, phase levels, optimal interval size, normalized distortion, entropy in bits.)


Figure 1. Distortion versus number of levels for optimum and uniform polar quantizers.

We therefore adapted our computer programs to treat the real and imaginary part cases. We used Max's tables for uniform quantization and our own program for optimum quantization [8, 9] to provide the required level number, distortion, and entropy data. We calculated the distortions for every factorization of a given N from 1 to 1024, found the minimum distortion and associated factorization for each N, and ran our level sorting program to identify the useful level numbers, their minimum distortions, and factorizations. Not surprisingly, the useful values of N for both nonuniform and uniform quantization turned out to be only those where the numbers of real and imaginary part levels differed by zero or one.

In order to compare the performance of fixed-level rectangular and polar quantization, we have plotted in Figure 2 the logarithm (base 2) of the minimum distortion for the useful levels versus the rate in bits (base 2 logarithm of the number of levels) for optimum and uniform rectangular and polar quantizations. The significant result depicted in this plot is that both optimum and uniform polar quantizations outperform their rectangular counterparts at the higher rates and are either comparable or insignificantly inferior at smaller rates. The performance crossover point for both types of quantization is at 6.0 bits, approximately. This means that for any N above 64, the distortion of rectangular quantization exceeds that of a polar quantization for the same or a lower value of N.* As the rates grow larger, the improvement of polar over rectangular for the uniform case becomes more significant than that for the nonuniform case. For example, at 9.8 bits (900 levels), the ratios of rectangular to polar distortion for the uniform and nonuniform cases are 1.27 and 1.07, respectively.

* In an independent effort [16], a performance crossover point of N equal to 101 is reported for optimum quantization. The method there is equivalent to the retention of all values of N that produce a monotonically decreasing distortion versus N characteristic, without the subsequent deletion of values failing our rate of decrease utility criteria.

Figure 2. Fixed-level complex quantization: logarithmic distortion versus rate (logarithm of the number of levels) for optimum and uniform polar and rectangular quantizers.

We remark that nonuniform quantization is superior to uniform for every rate and for both quantization methods. As the rates grow larger, the curves appear to approach straight lines. We have therefore run a least-squares, straight-line fit to each of the four described quantization curves for rates exceeding 6.5 bits per complex variable. The resulting empirical formulas for the distortion D_z versus the rate R in bits are as follows in decreasing order of asymptotic performance:

Optimum Polar:

D_z/2σ² = (2.056)·2^(−.977R),   7.69 < R < 10

Optimum Rectangular:

Uniform Polar:

D_z/2σ² = (1.849)·2^(−.939R),   7.26 < R < 10

Uniform Rectangular:

D_z/2σ² = (1.358)·2^(−.861R),   6.64 < R < 10

where R = log₂ N. (The standard errors of the weighted residuals of the logarithmic (base 2) distortion are, with the exception of the optimum rectangular's 6.1 × 10^-, less than 3.5 × 10^-.) The formulas for the polar coding bound of Pearlman and Gray [1] and the ultimate performance bound of the distortion-rate function are, respectively,

D_pc/2σ² = 1 − exp{−(1.781)·2^(−R)},   R > 1.376
         ≈ (1.781)·2^(−R),   R > 4.12

D(R)/2σ² = 2^(−R),   R > 0


for R expressed in bits. These bounds are also plotted in Figure 2 for the purposes of comparison.

Figure 3. Entropy-coded complex quantization: logarithmic distortion versus rate (entropy) for optimum and uniform, polar and rectangular quantizers.

Some interesting conclusions can be reached when we consider entropy (Huffman) coding of the complex quantizers' output levels. As the minimum achievable output rate of a quantizer is now its entropy, we plot in Figure 3 the distortion versus entropy (in bits) of the rectangular and polar, optimum and uniform, quantizers.

For the larger rates, where all the curves appear to approach straight lines, we ran a least-squares straight-line curve fit just as we did for the curves in the previous figure. (The standard errors of the weighted residuals of the logarithmic (base 2) distortion are all less than 3.2 × 10^-.) The results in decreasing order of asymptotic performance are:

Optimum Polar:

D_z/2σ² = (2.00)·2^(−.995R),   7.52 < R < 9.78

Uniform Polar:

D_z/2σ² = (2.20)·2^(−1.012R),   7.11 ≤ R ≤ 9.58

Optimum Rectangular:

D_z/2σ² = (1.786)·2^(−.996R),   6.25 ≤ R ≤ 9.46

Uniform Rectangular:

D_z/2σ² = (1.900)·2^(−1.018R),   6.06 < R < 8.90.

The rate R equals the entropy in bits. The magnitudes of the coefficients of R in the exponents of the uniform quantization formulas exceed one for the range of rates given. Clearly, one cannot simply extrapolate these formulas to all higher rates, because their curves would eventually intersect the distortion-

rate function, which is the absolute lower bound on performance.
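To make the comparison concrete, the fitted formulas above can be evaluated at a sample entropy inside all four fitted ranges, together with the polar coding bound and the distortion-rate function; R = 8 bits is an arbitrary choice made only for this sketch.

import math

# Least-squares fits quoted above: D/(2*sigma^2) = a * 2**(-b*R).
fits = {
    "optimum polar":       (2.000, 0.995),
    "uniform polar":       (2.200, 1.012),
    "optimum rectangular": (1.786, 0.996),
    "uniform rectangular": (1.900, 1.018),
}
R = 8.0                                    # entropy in bits, inside every fitted range
for name, (a, b) in fits.items():
    print(f"{name:21s}  D/(2 sigma^2) = {a * 2.0 ** (-b * R):.3e}")
print(f"polar coding bound     D/(2 sigma^2) = {1.0 - math.exp(-1.781 * 2.0 ** (-R)):.3e}")
print(f"distortion-rate bound  D/(2 sigma^2) = {2.0 ** (-R):.3e}")

At this rate both rectangular fits fall below both polar fits, and the uniform rectangular value lies slightly below the polar coding bound while the optimum rectangular value is nearly coincident with it, consistent with the curves in Figure 3.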

A comparison of the curves in Figure 3 shows that both coded rectangular quantizations are superior to any coded polar quantization. In fact, the curve of uniform rectangular is slightly below, and the nonuniform rectangular nearly coincident with, the polar coding bound. For the polar quantizers, the uniform one provides slightly lower distortion than the nonuniform one for any given rate, but as the rate grows larger their distortions become nearly equal. At these larger rates, the rate difference between the polar quantizations and the polar coding bound is about .30 bits. Closing this gap in performance by encoding polar coordinates very likely involves the rather difficult and computationally complex procedure of searching a tree or trellis for good codes. The best quantizer in the distortion versus output entropy sense will only close the gap marginally because of the dominant effect of the uniform quantization of the phase angle [14]. It seems rather wasteful of resources to apply tree or trellis coding to polar coordinates when better performance can be obtained much more easily by buffer-instrumented variable-length (Huffman) codes or permutation codes [15] on the outputs of fixed-level uniform real and imaginary part (Max) quantizers. Our conclusion is that, unless there is some other practical reason for polar quantization, one should use complex rectangular coding if optimum distortion-rate performance is desired. For the more convenient fixed-level complex quantizations, whose performances are plotted in Figure 2, the polar schemes are preferable for nearly all rates.

We wish now to draw some conclusions about the optimal splitting of rate between magnitude and phase angle for polar quantization. From Tables 1 and 2, which give the optimal numbers of magnitude and phase levels, we can easily calculate the corresponding rate differences for all of these useful total level numbers. Since we have decided that polar quantization is advantageous for fixed-level schemes, we wish to compare the optimal rate divisions in these cases to the prescription of R_θ − R_r = 1.376 bits for optimal polar encoding in Equation (14). The constraint of integral factorization does not allow, except perhaps in rare circumstances, the direct testing of this prescription. Our calculations of rate differences (or level ratios) from the tables show that the factorizations may come as close to the prescription as the integral factorization constraint would permit. The average rate difference may be a reliable indicator of the optimum for quantization, since it smoothes out the constraint effects. The averages of all the rate differences for total rates above 1.376 bits are 1.52 and 1.47 bits (P/M = 2.89 and P/M = 2.77) for optimum and uniform quantization, respectively. Considering that the prescription of 1.376 bits applies to optimal encoding and yields a lower encoding bound which is an accurate approximation to ultimate performance for rates exceeding 4.12 bits, it is a reasonably good indicator of how to divide the rate between magnitude and phase in a polar quantization scheme. The average rate differences of 1.52 bits and 1.47 bits can now be regarded as guidelines for splitting the rate between phase angle and magnitude for fixed-level optimum and uniform polar quantization, respectively.

VI. CONCLUSION

We have presented comprehensive and accurate numerical solutions to the fixed-level polar quantization of a complex Gaussian random variable with minimum mean absolute


squared error. Through our level sorting algorithm we have identified those levels judged useful for quantization purposes, as they exhibit the steepest decrease in distortion as rate increases. When we consider only the useful levels from N = 1 to 1024, we conclude that fixed-level optimum and uniform polar quantizations are superior to their rectangular counterparts at rates higher than 6.0 bits and either comparable or only slightly inferior at smaller rates. We have given the optimal division of rate between magnitude and phase angle for all useful rates up to 10 bits. We conclude that entropy coded rectangular quantizers are not only superior to the entropy coded polar ones, but also attain the performance of the polar coding bound of Pearlman and Gray [1]. Therefore, optimum polar quantization or coding, in the sense of minimizing distortion for a given entropy, is not likely to be rewarding given the computational complexity involved, especially since the same performance can be obtained by entropy coding the outputs of optimum or uniform rectangular quantizers. Polar quantization is definitely advantageous for fixed level applications beyond rates of 6.0 bits per complex variable.

REFERENCES

1. W. A. Pearlman and R. M. Gray, "Source Coding of the Discrete Fourier Transform," IEEE Trans. Inform. Theory, vol. IT-24, pp. 683-692, Nov. 1978.
2. J. J. Y. Huang and P. M. Schultheiss, "Block Quantization of Correlated Gaussian Random Variables," IEEE Trans. Commun. Syst., vol. CS-11, pp. 289-296, Sept. 1963.
3. R. S. Powers and J. W. Goodman, "Error Rates in Computer-Generated Holographic Memories," Applied Optics, vol. 14, pp. 1690-1701, July 1975.
4. N. C. Gallagher, Jr., "Quantizing Schemes for the Discrete Fourier Transform of a Random Time Series," IEEE Trans. Inform. Theory, vol. IT-24, pp. 156-163, March 1978.
5. A. G. Tescher, The Role of Phase in Adaptive Image Coding, Technical Report 510, Image Processing Institute, Electronic Sciences Laboratory, University of Southern California, Los Angeles, CA, Dec. 1973.
6. J. Max, "Quantizing for Minimum Distortion," IRE Trans. Inform. Theory, vol. IT-6, pp. 7-12, March 1960.
7. A. Gersho, "Asymptotically Optimal Block Quantization," IEEE Trans. Inform. Theory, vol. IT-25, July 1979.
8. G. H. Senge, Quantization of Image Transforms with Minimum Distortion, Technical Report No. ECE-77-8, Dept. of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, June 1977.
9. W. A. Pearlman and G. H. Senge, "Optimal Quantization of the Rayleigh Probability Distribution," IEEE Trans. Commun., vol. COM-27, pp. 101-112, Jan. 1979.
10. P. E. Fleischer, "Sufficient Conditions for Achieving Minimum Distortion in a Quantizer," IEEE Int. Conv. Rec., Part I, pp. 104-111, 1964.
11. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, Jr., "The Kinoform: A New Wavefront Reconstruction Device," IBM J. Res. and Dev., vol. 13, pp. 150-155, March 1969.
12. D. Kermisch, "Image Reconstruction from Phase Information Only," J. Opt. Soc. Am., vol. 60, pp. 15-17, Jan. 1970.
13. N. C. Gallagher and B. Liu, "Method for Computing Kinoforms that Reduces Image Reconstruction Error," Applied Optics, vol. 12, pp. 2328-2335, Oct. 1973.
14. P. Noll and R. Zelinski, "Bounds on Quantizer Performance in the Low Bit-Rate Region," IEEE Trans. Commun., vol. COM-26, pp. 300-304, Feb. 1978.
15. T. Berger, "Optimum Quantizers and Permutation Codes," IEEE Trans. Inform. Theory, vol. IT-18, pp. 759-765, Nov. 1972.
16. J. A. Bucklew and N. C. Gallagher, "Quantization Schemes for Bivariate Gaussian Random Variables," IEEE Trans. Inform. Theory, to be published.

Channel Equalization Using Adaptive Lattice Algorithms

E. H. SATORIUS AND S. T. ALEXANDER

Abstract-In this paper, a study of adaptive lattice algorithms as applied to channel equalization is presented. The orthogonalization properties of the lattice algorithms make them appear promising for equalizing channels which exhibit heavy amplitude distortion. Furthermore, unlike the majority of other orthogonalization algorithms, the number of operations per update for the adaptive lattice equalizers is linear with respect to the number of equalizer taps.

I. INTRODUCTION

The problem of equalizing a channel whose channel-correlation matrix has a large eigenvalue spread is well known. Adaptive gradient algorithms [1, 2] are among the simplest to implement, but the rate of convergence (ROC) of these algorithms depends on the ratio of the maximum to minimum eigenvalues of the channel-correlation matrix [3]. Alternative algorithms which orthogonalize the above matrix have been proposed. In particular, Godard [4], through application of Kalman filter theory, has derived an adaptive self-orthogonalizing algorithm which has extremely rapid convergence properties. As discussed in [5], the Godard algorithm involves estimating the inverse of the channel-correlation matrix through an iterative matrix equation. Even though the Godard algorithm converges rapidly, the number of operations per update for this algorithm depends on the square of the number of equalizer taps, which creates implementation difficulties for large length equalizers. Gitlin and Magee [5] have proposed an adaptive self-orthogonalizing algorithm which provides a compromise between computational complexity and speed of convergence. This algorithm consists of approximating the inverse of the channel-correlation matrix by a Toeplitz matrix and involves only one matrix multiplication. A comparison between some of the different adaptive orthogonalizing algorithms is presented in [5].

A relatively new class of adaptive algorithms also provides self-orthogonalizing capabilities and requires only a number of operations per update that depends linearly on the length of the filter [6-9], [24]. These algorithms are called adaptive lattice (AL) algorithms, and they generate a set of orthogonal signal components which can be used as inputs to equalizer gain controls. The generation of these components is done through a Gram-Schmidt type of orthogonalization. These AL algorithms have been proposed for use in such areas as speech, sonar signal processing, noise cancelling, and parameter estimation [6-9], [16-18], [24]. Furthermore, Makhoul in [9] and, independently, the authors have also realized their potential application to adaptive equalization. It is the purpose of this paper to examine the performance of AL algorithms as applied to channel equalization.

Paper approved by the Editor for Communication Theory of the IEEE Communications Society for publication after presentation at the 12th Annual Asilomar Conference on Circuits, Systems, and Computers, Pacific Grove, CA, November 1978. Manuscript received February 9, 1978; revised February 5, 1979.

E. H. Satorius is with the Naval Ocean Systems Center, San Diego, CA 92152.

S. T. Alexander is with Collins Radio Division, Rockwell International, Dallas, TX.

U.S. Government work not protected by U.S. copyright
