SINGULAR VALUE DECOMPOSITION AND SPECTRAL ANALYSIS

RAMDAS KUMARESAN and DONALD W. TUFTS
Department of Electrical Engineering
Kelley Hall, University of Rhode Island
Kingston, RI 02881

ABSTRACT

Linear-prediction-based (LP) methods for fitting multiple-sinusoid signal models to observed data, such as the forward-backward (FBLP) method of Nuttall (5) and Ulrych and Clayton (6), are very ill-conditioned. The locations of estimated spectral peaks can be greatly affected by a small amount of additive noise. LP estimation of frequencies can be greatly improved by singular value decomposition of the LP data matrix. The improved performance of the resulting new technique, which we call the principal eigenvector method (13,14), is demonstrated by using it on one- and two-dimensional data.

1: INTRODUCTION:

An important problem in the area of spectral estimation is accurate determination of the frequency locations of narrow-band signals, such as sinusoids in the presence of noise, from a given block of data. When the data record is long, Fourier transform processing is effective. But when the record length is short, Fourier transform methods have limited frequency resolution. Recently popularized spectrum analysis techniques (1) based on autoregressive (AR) and autoregressive-moving average (ARMA) modelling seem to be more appropriate in this case. AR modelling is the more attractive choice among these two, due to its computational simplicity. However, the performance of various AR techniques in determining the frequency locations of narrow-band signals varies significantly, depending on the specific details of the AR parameter determination algorithm. For example, the so-called Burg technique and the autocorrelation method of linear prediction have difficulty in determining the frequency of a sinusoid, surprisingly, at high SNR (2,3). In contrast, a simple extension of the standard covariance method of linear prediction (4), introduced by Nuttall (5) and Ulrych and Clayton (6), performs much better on sinusoidal signals at high SNR with a short data record (6,7). We shall refer to this technique as the forward-backward linear prediction (FBLP) method.

(This research was supported by a grant from the Office of Naval Research, Probability & Statistics program.)

In our first two reports (8,9) on this topic, we noted that the performance of the FBLP method, measured in terms of the variance of the frequency estimates of two closely spaced sinusoids in noise, was a few dB poorer than the Cramer-Rao (CR) bound for the frequency parameter (10), whereas the CR bound was achieved by a maximum likelihood technique (8,9). The former finding agrees with the results of Lang and McClellan (11). More significantly, the performance of the FBLP method departed sharply for the worse (i.e. the threshold was reached) even at relatively high SNR values (25 to 30 dB). Since then, we have suggested improvements to the FBLP method which have significantly improved its frequency estimation performance (12,13,14).

Our improvements stem from the following:

1) The threshold effect in the FBLP method is due to the very ill-conditioned nature of the least squares problem. This can be significantly alleviated by modifying the computation of the prediction filter coefficients, either by using an eigenvalue decomposition of the estimated covariance matrix that occurs in the normal equations (13), or equivalently by a singular value decomposition (SVD) of the linear prediction equations (14).

2) Secondly, we use prediction filters of large order L (but smaller than the number of data samples N), beyond the traditional limits of N/3 to N/2 (6,11). This improves the resolution capability of the prediction-error filter and its frequency estimation performance. Interestingly, large values of L (for fixed N) also reduce the computational load. At the maximum allowable value of L = N - M/2 (13), where M is the number of complex sinusoidal signals in the data, an interesting case results, which we call the Kumaresan-Prony (KP) case (12,13). At an L value of about 3N/4, our improvements to FBLP result in a frequency estimator that achieves the CR bound, over a useful range of SNR values, with short data records.

In essence, we have modified a useful linear prediction technique, namely the FBLP method, to extract close-to-optimum frequency estimates from a given short data record. In the next section we briefly explain our improvements, which we called the principal eigenvector method (14). More detailed descriptions can be found in ref. (13). In section 4 we have summarized the experimental results.

2: LINEAR PREDICTION AND SINGULAR VALUE DECOMPOSITION (SVD):

Suppose we observe a complex-valued data sequence y(n), n = 1, 2, ..., N, known to be composed of narrow-band signals in noise. The forward-backward linear prediction equations (5,6) can be written down, by using a prediction-error filter of order L (L < N) with impulse response given by the vector (1, -g(1), -g(2), ..., -g(L)) on the data, without attempting to use data outside the given interval of N samples, as

A g = b,   (1)

where A is the 2(N-L) x L data matrix whose rows are formed from the forward and the conjugated backward prediction equations, b is the corresponding 2(N-L) vector of data samples being predicted, and g = (g(1), g(2), ..., g(L))^T is the impulse response vector of the prediction filter. The least squares solution, ĝ, is found by minimizing the Euclidean norm ||Ag - b||, since the linear system of equations is usually overdetermined. Then ĝ is given by

ĝ = [A^H A]^{-1} A^H b,   (2)

where A^H A is the usual estimated correlation matrix R̂ and A^H b the correlation vector r̂.

This method of setting up the linear equations and finding the vector ĝ, which we call the FBLP method, is a straightforward extension of the standard covariance method of linear prediction (4), which itself is a variant of Prony's method (18).
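For illustration, a minimal NumPy sketch of equations 1 and 2 follows. The function names and the exact forward/backward indexing conventions below are our own reading of the standard FBLP construction, not notation taken from the paper.

```python
import numpy as np

def fblp_system(y, L):
    """Forward-backward linear prediction equations A g = b (equation 1).

    y: complex samples y(1), ..., y(N) as a 1-D array; L: filter order.
    Returns the 2(N - L) x L matrix A and the 2(N - L) vector b.
    """
    y = np.asarray(y, dtype=complex)
    N = y.size
    # Forward rows: predict y(n) from the L samples that precede it.
    A_fwd = np.array([y[n - 1::-1][:L] for n in range(L, N)])
    b_fwd = y[L:]
    # Backward rows: predict conj(y(n)) from the L samples that follow it.
    A_bwd = np.conj(np.array([y[n + 1:n + 1 + L] for n in range(N - L)]))
    b_bwd = np.conj(y[:N - L])
    return np.vstack([A_fwd, A_bwd]), np.concatenate([b_fwd, b_bwd])


def fblp_coefficients(y, L):
    """FBLP estimate (equation 2): least-squares solution of A g = b."""
    A, b = fblp_system(y, L)
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g
```

Here np.linalg.lstsq is used rather than forming [A^H A]^{-1} A^H b explicitly; either way the result is the ĝ of equation 2, and it inherits the ill-conditioning discussed below.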


We shall now relate the above least squares solution to the singular value decomposition (SVD) of the data matrix A.

Writing A in terms of its SVD (15,16),

A = U S V^H,   (3)

where U is a (2(N-L) x 2(N-L)) unitary matrix, V is an (L x L) unitary matrix, and S is a real 2(N-L) x L diagonal matrix with nonnegative numbers s_1, s_2, ..., s_L arranged in non-increasing order. The columns of the matrices U and V are the orthonormal eigenvectors of A A^H and A^H A respectively. The diagonal elements of S are called the singular values and are the square roots of the eigenvalues of A^H A (or A A^H). Note that A A^H and A^H A have common nonzero eigenvalues. Now, in terms of the SVD of A, the least squares solution (or the minimum norm solution, depending on whether the linear system is overdetermined or underdetermined) can be written as (15,16)

ĝ = V S^+ U^H b,   (4)

where S^+ is the pseudoinverse of S. S^+ has along its diagonal the reciprocals of the nonzero singular values of A and the rest zeros. Then ĝ can be explicitly written as a linear combination of the eigenvectors (of A^H A) v_i, i = 1, 2, ..., L (columns of V), as

ĝ = c_1 v_1 + c_2 v_2 + ... + c_{2(N-L)} v_{2(N-L)},  with c_i = (u_i^H b)/s_i,   (5)

where u_i, i = 1, 2, ..., 2(N-L) (columns of U) are the eigenvectors of A A^H. If the data consisted of M complex sinusoidal signals only and no noise, then the A matrix would have only M nonzero singular values, and equation 4 would give the minimum norm solution (13). However, in the presence of noise in the data, A is generally of full rank, and all the singular values will be nonzero. Though A is of full rank, there is an underlying effective rank Q for the matrix A, depending on the number of narrow-band or sinusoidal signals in the data. We decide on the value of Q by looking at the size of the singular values. Singular values are known to be very useful in determining the numerical rank of a matrix (16). Then we set the singular values smaller than s_Q to zero, effectively removing large perturbations introduced into the ĝ vector by the eigenvectors with subscripts Q+1 to 2(N-L). Thus our new vector, now called ĝ_Q, is

ĝ_Q = V S_Q^+ U^H b,   (6)

where S_Q^+ has along its diagonal 1/s_1, 1/s_2, ..., 1/s_Q, 0, ..., 0. Therefore ĝ_Q can be written explicitly as

ĝ_Q = c_1 v_1 + c_2 v_2 + ... + c_Q v_Q.   (7)

We call this method of computing the prediction coefficients the principal eigenvectors (PE) method, since the coefficients are formed as a linear combination of the Q principal eigenvectors of A^H A. The above step is equivalent to approximating the matrix A by a matrix A_Q of rank Q in the least squares sense (16) and using its SVD in equations 6 and 7.
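Equations 3 to 7 amount to a truncated-SVD solution of the same linear system. A minimal sketch, assuming A and b come from a routine such as fblp_system above and that the effective rank Q has already been chosen:

```python
import numpy as np

def pe_coefficients(A, b, Q):
    """Principal eigenvector (PE) solution g_Q (equations 6 and 7).

    Keeps only the Q principal singular values/vectors of A; the remaining
    singular values are treated as zero when forming the pseudoinverse.
    """
    U, s, Vh = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) Vh
    c = (U[:, :Q].conj().T @ b) / s[:Q]                # c_i = u_i^H b / s_i
    return Vh[:Q].conj().T @ c                         # g_Q = sum_i c_i v_i
```

Setting Q equal to the number of nonzero singular values reproduces the ordinary least-squares (or minimum-norm) solution ĝ of equation 4.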


Incidentally, it should be noted that in the noiseless data case with M sinusoids, Q will be equal to M, since only M singular values will be nonzero, and in this case the frequencies of the sinusoids can be determined exactly from the prediction-error filter coefficients. The above discussion was presented in ref. (13) in terms of the matrices A^H A and A A^H and their eigenvalues. This step of replacing A by A_Q can be viewed as improving the SNR in the data by using prior information regarding the nature of the signal in the data. This is because the Q principal eigenvectors are considerably more robust to noise perturbations than the rest. Especially when the signal is a sum of sinusoids, the eigenvectors in the null space (with noiseless data) of A^H A are very susceptible to even small noise perturbations. This is due to the corresponding eigenvalues of A^H A (in the noiseless case) being equal (zero) (17). From the numerical analyst's point of view (15,16), the truncation of the singular values at s_Q is a way to alleviate the extreme ill-conditioning caused by the close dependence of the columns of A in equation 1.

A related issue of interest in spectral analysis is to determine the order of the signal, i.e. the number of narrow-band components or sinusoids in the data. In fact this is often a necessary step before attempting to find the locations of the spectral peaks. The magnitude of the singular values gives an indication of the effective rank of A, or the rank of the underlying 'signal only' data matrix, for reasonable SNR values. Thus the estimate of the number of signals in the data is obtained as a by-product in the PE method. If Q is chosen equal to M, the number of sinusoids in the data, best results are obtained using the PE method. The sensitivity of the frequency estimates to the choice of Q is examined in section 4.
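The rank decision itself can be automated in many ways. The relative-threshold rule sketched below is only an illustrative choice (the 20% factor is ours, not a value prescribed by the paper); in practice one looks for a clear gap between the 'signal' and 'noise' singular values.

```python
import numpy as np

def effective_rank(A, rel_threshold=0.2):
    """Estimate the effective rank Q of A from its singular values.

    Counts singular values exceeding rel_threshold times the largest one;
    rel_threshold is an illustrative, data-dependent choice.
    """
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.count_nonzero(s > rel_threshold * s[0]))
```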

3: FINDING THE 'BEST' PREDICTION FILTER ORDER L:

The next issue is choosing the value of L for a given N, the number of data samples. L has to be greater than or equal to M, the number of complex sinusoids in the data. If L = M then the FBLP method is a simple variant of Prony's case. For L > M, but L < N/2, the FBLP method performs satisfactorily at very high SNR (9,11). But, as L is pushed closer to N/2 in the FBLP method, there are a number of singular values s_{M+1}, s_{M+2}, ... which are small, due to the close dependence of the columns of A. These smaller singular values inversely weight the corresponding eigenvectors, as seen in equation 5. These eigenvectors fluctuate wildly from record to record, in comparison to the first M principal eigenvectors, which are relatively stable.

This causes considerable fluctuations in ĝ and thus results in spurious frequency/spectral estimates. But these fluctuations are eliminated in ĝ_Q as a result of the truncation of the singular values at s_Q. The presence of such fluctuating eigenvectors in ĝ makes the FBLP method virtually useless for L values beyond N/2. Thus most users of the FBLP method were suggesting a practical limit to L somewhere between N/3 and N/2 (6,11). In contrast, in the PE method, in which ĝ_Q is computed as in equation 6, the performance (measured in terms of frequency estimation accuracy) improves steadily up to an L value of about 3N/4 (13), beyond which there is considerable instability even in the Q principal eigenvectors, and hence the performance starts to fall off. As L is increased to its maximum value of L = N - M/2 (beyond which the frequencies of even noiseless sinusoids cannot be found using the FBLP method (13)), the number of equations in 1 is equal to the number of sinusoids in the data. The rank of A is also M (for N > 2M) and A has only M nonzero singular values. Therefore ĝ_Q would be the same as ĝ (the minimum norm solution), since c_{M+1}, c_{M+2}, ... would be all zero. Thus the additional eigenvectors are all automatically eliminated from ĝ.

Interestingly, in this case ĝ = ĝ_Q can be calculated without SVD as

ĝ_Q = A^H (A A^H)^{-1} b.   (8)

We called this case the Kumaresan-Prony (KP) case (12,13). This case is computationally simple, and its performance is essentially the same as that of the FBLP method at L = N/3, the so-called optimum L for FBLP (11), as can be seen from the experimental results in the next section.
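In the KP case, equation 8 only requires the inverse of the small M x M matrix A A^H. A minimal sketch, assuming A and b have been built with L = N - M/2:

```python
import numpy as np

def kp_coefficients(A, b):
    """Kumaresan-Prony (KP) case (equation 8): g_Q = A^H (A A^H)^{-1} b.

    With L = N - M/2 the matrix A A^H is only M x M, where M is the number
    of complex sinusoids, so the inversion is inexpensive.
    """
    return A.conj().T @ np.linalg.solve(A @ A.conj().T, b)
```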

4: EXPERIMENTAL RESULTS:

4a: Performance Comparison of FBLP and PE Methods:

Here we present some experimental results to demonstrate the improvements which we discussed above. The data is simulated using the formula

y(n) = exp(j(2π f_1 n + φ_1)) + exp(j(2π f_2 n + φ_2)) + w(n),   (9)

where φ_1 = π/4, φ_2 = 0, f_1 = 0.52 Hz, f_2 = 0.5 Hz, and the w(n) are independent complex Gaussian noise samples with variance σ² for each real and imaginary part. SNR is defined as 10 log(1/(2σ²)). M, the number of complex sinusoids, is equal to 2. The sampling period is assumed to be 1 second. For each trial, a data block of 25 (N = 25) data samples is used. The frequency estimates are obtained by finding the two roots of the prediction-error filter transfer function that are closest to the unit circle. The angles of those two roots are taken as frequency estimates. Different values of L in the range of 2 to 24 are used. The two methods used are the FBLP method discussed in (5,6,11) and the principal eigenvectors (PE) method (13,14) using SVD discussed above. The standard deviation of the frequency estimation error (for f_1) is computed from 500 independent trials. The estimated standard deviations are tabulated in table 1 for different SNR and L values. The corresponding CR bound values (10) are also given. The estimation bias was negligible in all cases except at 7 dB. The biases at 7 dB for the three L values of 14, 16 and 18 were about a third of the respective standard deviations. The main point in table 1 is that the FBLP method is primarily useful only at very high SNR values, whereas the PE method can be used at much lower SNR values. Also, by choosing the L value to be about 3N/4 in PE, which is not useful in FBLP, the PE method practically achieves the CR bound. The special case of L = 2 is a variant of Prony's method (18), with the data being used in both the forward and backward directions.
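For concreteness, a sketch of the data model of equation 9 with the parameter values quoted above; the mapping from SNR in dB to the per-component noise variance follows the stated definition SNR = 10 log(1/(2σ²)).

```python
import numpy as np

def simulate_data(N=25, snr_db=10.0, f1=0.52, f2=0.50,
                  phi1=np.pi / 4, phi2=0.0, rng=None):
    """Two unit-amplitude complex sinusoids in complex white Gaussian noise
    (equation 9); each real and imaginary noise part has variance sigma^2,
    with SNR = 10 log10(1 / (2 sigma^2))."""
    rng = np.random.default_rng() if rng is None else rng
    n = np.arange(N)
    sigma = np.sqrt(10.0 ** (-snr_db / 10.0) / 2.0)
    w = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return (np.exp(1j * (2 * np.pi * f1 * n + phi1))
            + np.exp(1j * (2 * np.pi * f2 * n + phi2)) + w)
```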

L = 24 corresponds to the KP case. It has superior performance at lower SNR values compared to the FBLP method at conventional values of L. For the two special cases of L = 2 and 24, the PE and FBLP methods coincide, since the rank of the data matrix A is equal to 2 and it has only two nonzero singular values. Figure 1 shows quite dramatically the ill-conditioning of the FBLP method in comparison to the PE method, in terms of the prediction-error filter zeros. Figure 3 conveys essentially the same information as table 1, but as a continuous function of the SNR. Figure 8 shows the performance of the PE method for a complex sinusoid.
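The root-based frequency estimation described above can be sketched as follows: form the prediction-error polynomial 1 - g(1)z^{-1} - ... - g(L)z^{-L}, take its roots, and keep the angles of the roots closest to the unit circle.

```python
import numpy as np

def root_frequencies(g, num_sinusoids=2):
    """Frequencies (cycles/sample, in [0, 1)) of the roots of the
    prediction-error filter 1 - g(1) z^-1 - ... - g(L) z^-L that lie
    closest to the unit circle."""
    g = np.asarray(g, dtype=complex)
    roots = np.roots(np.concatenate(([1.0 + 0j], -g)))   # coefficients (1, -g)
    closest = roots[np.argsort(np.abs(np.abs(roots) - 1.0))[:num_sinusoids]]
    return np.sort(np.angle(closest) / (2 * np.pi) % 1.0)
```

One trial at 10 dB might then read, for example: y = simulate_data(snr_db=10.0); A, b = fblp_system(y, L=18); f_hat = root_frequencies(pe_coefficients(A, b, Q=2)).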

4b: Effect of the Choice of Q, the Truncation Point:

We shall now demonstrate the relative insensitivity of the frequency estimates to the choice of Q, the number of principal eigenvectors included in the prediction filter computation. The important point is that Q should be greater than or equal to M, but not too much larger than M. The same data set as in equation 9 is used. SNR is 10 dB and L is 18. For a particular data block of 25 data samples, the SVD of the 2(N-L) x L data matrix showed two relatively large singular values, 4.83 and 3.80, and the rest were smaller than 0.95. Hence it was easy to choose the value of Q as 2. For more closely spaced sinusoidal signals the magnitude of the second singular value would be smaller, making the choice of Q more difficult.

We computed the prediction filter coefficients using SVD for various assumed values of Q, by setting the rest of the singular values to zero, as said before.

For each value of Q, the corresponding 'spectral estimate' Ŝ(f), defined as the reciprocal of the squared magnitude of the prediction-error filter frequency response, is computed and plotted in figures 2a, b, c, d. Figure 2e shows the case of the minimum norm solution, when none of the singular values are set to zero. For Q = 1 the two sinusoids are not resolved. At least two singular values and the corresponding eigenvectors of A^H A are needed to resolve the two sinusoids. But the rest of the spectral peaks are quite damped. Q = 2 corresponds to the ideal situation in which the working hypothesis of two signals is correct. For Q = 3 and 4 the noise subspace perturbations start to introduce instabilities into the prediction coefficients, slightly affecting the extraneous spectral peaks. Occasionally, when the noise realization itself is close to a sinusoid of some frequency, one might see a large third peak for the case of Q larger than 2 at a low SNR. Figure 2e, corresponding to the minimum norm solution, shows large spurious peaks which exhibit the ill-conditioned nature of the problem.
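A sketch of this spectral estimate, taking Ŝ(f) = 1/|1 - Σ_k g(k) e^{-j2πfk}|² evaluated on a uniform frequency grid; the grid density is an arbitrary choice of ours.

```python
import numpy as np

def spectral_estimate(g, num_freqs=512):
    """LP 'spectral estimate' S(f) = 1 / |1 - sum_k g(k) exp(-j 2 pi f k)|^2
    on a uniform grid of frequencies f in [0, 1)."""
    g = np.asarray(g, dtype=complex)
    f = np.arange(num_freqs) / num_freqs
    k = np.arange(1, g.size + 1)
    E = 1.0 - np.exp(-2j * np.pi * np.outer(f, k)) @ g   # error-filter response
    return f, 1.0 / np.abs(E) ** 2
```

Plotting 10 log10 of this quantity for ĝ_Q computed with Q = 1, 2, 3, 4 gives the kind of comparison shown in figure 2.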

4c: Effect of Changing the Relative Phase Δφ:

In figure 4, the effect of changing the initial phase difference between the sinusoids on the frequency estimation performance and the CR bound is studied. The estimation performance of the PE method at L = 18 closely tracks the CR bound.

4d: 'Best' L for the PE Method:

In figure 5 the 'best' value of L for a fixed N = 25 is found to be about 18 (approximately 3N/4), as discussed in section 3.

4e: Performance of the PE Method on Cadzow's Data:

Figures 6a and 6b show the performance of the PE method on data consisting of two real sinusoids and noise. This data set, due to Cadzow (19),(20), was provided to us by S. M. Kay. The best value of L, about 3N/4 = 48 (since N = 64), was chosen, and Ŝ(f) was computed and is shown in fig. 6a. The matrix A had four large singular values, 86.26, 82.52, 23.22 and 21.5, since the data had two real (or four complex) sinusoids. The rest of the singular values were less than 8.17. Thus Q was chosen as 4. Figure 6b shows the KP case with L = N - 4/2 = 62. This case involved the inversion of only a 4 x 4 matrix A A^H (see equation 8).

4f: PE Method on 2-D Data:

In figures 7a and 7b, the performance of the DFT and the PE method on a 2-D data array consisting of two closely spaced (in frequency and wavenumber) signals is shown. The details of the experiment are given in the figure captions. The 'spectral estimate' shown in figure 7b was computed using a formula based on two quarter-plane filters, H_1 and H_2, which use the whole data array as their support (21).

REFERENCES:

1) D. G. Childers, Editor, Modern Spectrum Analysis, IEEE Press, New York, NY, 1978.
2) W. Y. Chen and G. R. Stegen, 'Experiments with maximum entropy power spectra of sinusoids,' J. of Geophys. Res., Vol. 79, No. 20, July 1974.
3) S. M. Kay and L. Marple, 'Sources and remedies for spectral line splitting in autoregressive spectrum analysis,' Proc. of ICASSP 1979, Washington, D.C., pp. 151-154.
4) J. Makhoul, 'Linear prediction: A tutorial review,' Proc. of the IEEE, Vol. 63, pp. 561-580, April 1975.
5) A. H. Nuttall, 'Spectral analysis of a univariate process with bad data points, via maximum entropy and linear predictive techniques,' in NUSC Scientific and Engineering Studies, Spectral Estimation, NUSC, New London, CT, March 1976.
6) T. J. Ulrych and R. W. Clayton, 'Time series modelling and maximum entropy,' Physics of the Earth and Planetary Interiors, Vol. 12, pp. 188-200, August 1976.
7) D. N. Swingler, 'A comparison between Burg's maximum entropy method and a nonrecursive technique for the spectral analysis of deterministic signals,' J. of Geophys. Res., Vol. 84, pp. 679-685, Feb. 1979.
8) D. W. Tufts and R. Kumaresan, 'Improved spectral resolution,' Proc. Lett., Proc. of the IEEE, Vol. 68, No. 3, pp. 419-420, March 1980.
9) D. W. Tufts and R. Kumaresan, 'Improved spectral resolution II,' Proc. of ICASSP 1980, April 1980, pp. 592-597.
10) D. C. Rife and R. R. Boorstyn, 'Multiple tone parameter estimation from discrete-time observations,' B.S.T.J., pp. 1389-1410, Nov. 1976.
11) S. W. Lang and J. H. McClellan, 'Frequency estimation with maximum entropy spectral estimators,' IEEE Trans. on ASSP, Vol. 28, No. 6, pp. 716-724, Dec. 1980.
12) R. Kumaresan and D. W. Tufts, 'Improved spectral resolution III: Efficient realization,' Proc. Lett., Proc. of the IEEE, Vol. 68, No. 10, Oct. 1980.
13) D. W. Tufts and R. Kumaresan, 'Frequency estimation of multiple sinusoids: Making linear prediction perform like maximum likelihood,' submitted for publication to IEEE Trans. on ASSP, March 1981.
14) D. W. Tufts and R. Kumaresan, 'Singular value decomposition and frequency estimation by linear prediction,' submitted to IEEE Trans. on ASSP for publication.
15) C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ, 1974.
16) V. C. Klema and A. J. Laub, 'The singular value decomposition: Its computation and some applications,' IEEE Trans. Automatic Control, Vol. AC-25, pp. 164-176, Apr. 1980.
17) J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, U.K., 1965.
18) F. B. Hildebrand, Introduction to Numerical Analysis, McGraw-Hill, New York, 1956.
19) T. M. Sullivan, O. L. Frost and J. R. Treichler, 'High resolution signal estimation,' ARGO Systems, Inc., Tech. Rept., June 1978.
20) J. A. Cadzow, 'High performance spectral estimation - A new ARMA method,' IEEE Trans. on ASSP, Vol. 28, No. 5, pp. 524-529, October 1980.
21) R. Kumaresan and D. W. Tufts, 'A two-dimensional technique for frequency-wavenumber estimation,' submitted for publication to Proc. Lett. of the Proc. of the IEEE.


Table 1: Estimated standard deviations of the frequency estimate of f_1 for the FBLP and PE methods at various SNR and L values, together with the corresponding CR bound.

Figure 2: Sensitivity of Ŝ(f), and hence the frequency estimates, to the choice of Q, the assumed number of signals in the data. SNR = 10 dB.

Figure 3: Performance of the PE and FBLP methods as a continuous function of SNR. The same data sets as in Table 1 are used.

Figure 4: Performance of the PE method as a function of the initial phase difference Δφ. The data: y(n) = exp(j(2π f_1 n + φ_1)) + exp(j(2π f_2 n + φ_2)) + w(n), n = 0, 1, ..., 24, f_1 = 0.52 Hz, f_2 = 0.5 Hz, Δφ = φ_1 - φ_2, φ_2 = 0.

Figure 5: Performance of the PE method as a function of the prediction filter order L. Same data as in Figure 4, but with Δφ fixed at π/4.

Figures 6a and 6b: Spectral estimate Ŝ(f) versus frequency for Cadzow's data set (see section 4e).

Figure 7a: Discrete Fourier Transform (magnitude) of a 2-dimensional (2-D) 10 x 10 data array consisting of two sinusoidal signals with closely spaced frequencies and wavenumbers. The data array: y(n,m) = exp(j(2π x 0.24n + 2π x 0.24m)) + exp(j(2π x 0.23n + 2π x 0.27m)) + w(n,m), n, m = 0, 1, ..., 9. SNR 20 dB.

Figure 7b: Plot of 10 log Ŝ(e^{ju}, e^{jv}) for the above data set. Ŝ(e^{ju}, e^{jv}) is computed as described in the text, using the KP case appropriately modified for 2-D data. Only a portion of the unit sphere is plotted.

Figure 8: Frequency estimation of a complex sinusoid. Performance of the PE method in comparison with the CR bound. Note that the threshold occurs at a much lower SNR compared to that in Figure 3.
