
Adaptive algorithm for direct frequency estimation

H.C. So and P.C. Ching

Abstract: Based on the linear prediction property of sinusoidal signals, a new adaptive method is proposed for frequency estimation of a real tone in white noise. Using the least mean square approach, the estimator is computationally efficient and it provides unbiased and direct frequency measurements on a sample-by-sample basis. Convergence behaviour of the estimated frequency is analysed and its variance in white Gaussian noise is derived. Computer simulations are included to corroborate the theoretical analysis and to show its comparative performance with two adaptive frequency estimators in non-stationary environments.

1 Introduction

Estimating the frequency of sinusoidal signals in noise has applications in many areas [1–3] such as carrier and clock synchronisation, angle of arrival estimation, demodulation of frequency-shift keying (FSK) signals, and Doppler estimation of radar and sonar wave returns. In this work, we consider single real tone frequency estimation in white noise. The discrete-time noisy sinusoid is modelled as

$$x_n = a\cos(\omega n + \phi) + q_n \triangleq s_n + q_n \qquad (1)$$

where the noise $q_n$ is assumed to be a white zero-mean random process with unknown noise power $\sigma_q^2$, while $a$, $\omega$ and $\phi \in [0, 2\pi)$, which represent the tone amplitude, frequency and phase of the sinusoid, respectively, are unknown. Without loss of generality, the sampling period is assigned to be 1 s. The task here is to find $\omega \in (0, \pi)$ from $x_n$.

If the sinusoidal parameters are constant in time, classical batch techniques [2, 3], including maximum likelihood estimation [4] and eigenanalysis algorithms such as Pisarenko's harmonic retrieval method [5] and MUSIC [6], can be employed to achieve accurate frequency estimation. On the other hand, when the environment is non-stationary, for example when the frequency is an abruptly changing function of time or the amplitude/phase is time-varying, tracking of $\omega$ is necessary. Griffiths [7] was the first to formulate the adaptive frequency estimation problem, and Thompson [8] was the first to propose a constrained least mean square (LMS) algorithm [9] to obtain an unbiased estimate of the sinusoidal frequency, which can be considered as an online implementation of Pisarenko's method. The key idea of the non-stationary frequency estimation method suggested by Etter and Hush [10] is to maximise the mean square difference between $x_n$ and its delayed version using an adaptive time delay estimator (ATDE) [11], and the frequency estimate is given by $\pi$ divided by the estimated delay. Since the delay of the ATDE is restricted to be an integral multiple of the sampling interval, the algorithm cannot give accurate frequency estimation, particularly for large $\omega$. An improvement to [10] was made by providing fractional sample delays in the ATDE with the use of Lagrange interpolation [12]. However, the frequency estimate of the modified method is still biased because the Lagrange interpolator cannot perfectly model subsample delays for sinusoidal signals. Generally, finite length fractional delay filters are never ideal for non-integer delays [13, 14]. Other recent adaptive frequency estimators include constrained pole-zero notch filtering [15], Pisarenko's method combined with Kamen's pole factorisation [16] and the adaptive IIR-BPF [17], which is an LMS-style linear prediction algorithm with an IIR band-pass filter for noise reduction.

In this paper, a new adaptive frequency estimator in white noise is proposed [18] based on linear prediction of sinusoidal signals. The main scientific advance of the work is to minimise the mean square value of a modified linear prediction error function, which is characterised by the estimated frequency only and whose minimum corresponds exactly to the sinusoidal frequency. As a result, direct and unbiased frequency estimation is achieved. The proposed approach can be considered as a specific application of the unbiased impulse response estimation algorithm [19, 20] derived from minimising the mean square value of the equation error under a constant norm constraint. Starting from the property that a pure sinusoid is predictable from its past two sampled values, the modified linear prediction error function is developed. The LMS algorithm is then applied to minimise the cost function, and the frequency estimate is updated explicitly on a sample-by-sample basis. Performance measures of the adaptive estimator, namely the convergence behaviour and the variance of the estimated frequency, are also analysed. It is noteworthy that the proposed frequency estimation framework has been extended to least squares type realisations, which provide higher estimation accuracy at the expense of a larger computational requirement; interested readers are referred to [21–24]. Simulation results are presented to corroborate the theoretical analyses and to illustrate the superiority of the proposed frequency estimation algorithm over the adaptive Pisarenko's algorithm [8] and the adaptive IIR-BPF [17].

© IEE, 2004

IEE Proceedings online no. 20041001

doi: 10.1049/ip-rsn:20041001

H.C. So is with the Department of Computer Engineering & Information Technology, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong

P.C. Ching is with the Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong

Paper first received 13th June 2003 and in revised form 14th June 2004. Originally published online 29th November 2004


2 Direct frequency estimator (DFE)

It is easy to verify that $s_n$ obeys the following simple recurrence relation [25]:

$$s_n = 2\cos(\omega)\,s_{n-1} - s_{n-2} \qquad (2)$$

With the measurements $\{x_n\}$, we can predict $s_n$ using

$$\hat{s}_n = 2\cos(\hat{\omega})\,x_{n-1} - x_{n-2} \qquad (3)$$

where $\hat{\omega}$ represents an estimate of $\omega$. The linear prediction error function is then defined as

$$e_n \triangleq x_n - \hat{s}_n \qquad (4)$$
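As a quick illustration (not part of the original paper), the recurrence (2) and the prediction error of (3) and (4) can be written in a few lines of Python; the values and names below are arbitrary.

    import numpy as np

    # Check that a pure sinusoid satisfies the recurrence (2) exactly.
    a, w, phi = np.sqrt(2.0), 0.3 * np.pi, 1.0
    n = np.arange(100)
    s = a * np.cos(w * n + phi)
    print(np.max(np.abs(s[2:] - (2.0 * np.cos(w) * s[1:-1] - s[:-2]))))  # ~1e-15

    def prediction_error(x, n, w_hat):
        """Prediction error e_n = x_n - s_hat_n of (3) and (4), computed
        from the noisy samples x and a frequency estimate w_hat."""
        return x[n] - (2.0 * np.cos(w_hat) * x[n - 1] - x[n - 2])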

It can be shown that the mean square error function $E\{e_n^2\}$ is

$$E\{e_n^2\} = 4[\cos(\hat{\omega}) - \cos(\omega)]^2\sigma_s^2 + 2[2 + \cos(2\hat{\omega})]\sigma_q^2 \qquad (5)$$

where $\sigma_s^2 = a^2/2$ denotes the tone power. Apparently, minimising $E\{e_n^2\}$ with respect to $\hat{\omega}$ will not give the desired solution because of the noise component. When the value of $\sigma_q^2$ is available, unbiased frequency estimation [26, 27] can be attained with the use of $E\{e_n^2\}$. On the other hand, without knowing the noise power, unbiased frequency estimates can still be obtained by minimising (5) subject to the constraint that $2 + \cos(2\hat{\omega})$ is a constant. This constrained optimisation problem is in fact equivalent to the unconstrained minimisation of a scaled version of $E\{e_n^2\}$ [20], denoted by $E\{z_n^2\}$, which has the form

$$E\{z_n^2\} = \frac{E\{e_n^2\}}{2[2 + \cos(2\hat{\omega})]} = \frac{2[\cos(\hat{\omega}) - \cos(\omega)]^2\sigma_s^2}{2 + \cos(2\hat{\omega})} + \sigma_q^2 \qquad (6)$$
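For illustration, a small numerical sketch (assuming arbitrary values of $\omega$, $\sigma_s^2$ and $\sigma_q^2$, which are not taken from the paper) confirms that the modified cost (6) is minimised exactly at the true frequency with minimum value $\sigma_q^2$:

    import numpy as np

    # Evaluate the modified cost (6) over a grid of candidate frequencies.
    w, sigma_s2, sigma_q2 = 0.4 * np.pi, 0.5, 0.1
    w_hat = np.linspace(0.01 * np.pi, 0.99 * np.pi, 1000)
    cost = (2.0 * (np.cos(w_hat) - np.cos(w)) ** 2 * sigma_s2
            / (2.0 + np.cos(2.0 * w_hat)) + sigma_q2)

    print(w_hat[np.argmin(cost)] / np.pi)   # close to 0.4, i.e. the true frequency
    print(cost.min())                        # close to sigma_q2 = 0.1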

It is worth noting that (6) can be considered as an alternative form of the modified mean square error suggested in [28], but there was no theoretical analysis of their frequency estimator. The advantages of using (6) are that we can obtain direct frequency measurements and derive the estimator performance in a simpler way. Investigating the first and second derivatives of (6) shows that, for $\omega \in (0, \pi)$, the performance surface $E\{z_n^2\}$ has a unique minimum at $\hat{\omega} = \omega$ with the value $\sigma_q^2$, but it also has a maximum when $\hat{\omega} < \pi/3$ or $\hat{\omega} > 2\pi/3$. This suggests that minimisation of $E\{z_n^2\}$ can be achieved via gradient search methods if the initial value of $\hat{\omega}$ is chosen between $\pi/3$ and $2\pi/3$. In this work, the computationally attractive LMS algorithm is utilised to estimate $\omega$ iteratively. From (6), the instantaneous value of $E\{z_n^2\}$, namely $z_n^2$, is

$$z_n^2 = \frac{e_n^2}{2[2 + \cos(2\hat{\omega}_n)]} \qquad (7)$$

where $\hat{\omega}_n$ denotes the estimate of $\omega$ at time $n$. Note that $z_n^2$ is in fact an estimate of $\sigma_q^2$ as $\hat{\omega} \to \omega$. The stochastic gradient estimate is computed by differentiating $z_n^2$ with respect to $\hat{\omega}_n$ and is given by

$$\frac{\partial z_n^2}{\partial \hat{\omega}_n} = \frac{2\sin(\hat{\omega}_n)}{[2 + \cos(2\hat{\omega}_n)]^2}\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}] \qquad (8)$$

Since the term $2\sin(\hat{\omega}_n)/[2 + \cos(2\hat{\omega}_n)]^2$ is positive for $\hat{\omega}_n \in (0, \pi)$, it does not affect the sign of the gradient estimate. As a result, the LMS updating equation for the direct frequency estimator (DFE) can be simplified to

$$\hat{\omega}_{n+1} = \hat{\omega}_n - \mu e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}] \qquad (9)$$

where $\mu$ is the step size of the adaptive algorithm. To reduce computation, the value of the cosine function is retrieved from a pre-stored cosine vector of the form $[1\ \cos(\pi/L)\ \cdots\ \cos(\pi(L-1)/L)]$, where $L$ is the vector length. Notice that when $L$ increases, the frequency resolution increases but a larger memory is needed. As a result, the method requires only five multiplications, five additions and one look-up operation for each sampling interval. Compared with its recursive least squares (RLS) realisation [23], which involves eight additions, nine multiplications, one division, one square root and one arccosine operation per iteration, the LMS implementation is computationally simpler, but at the expense of a larger variance for the frequency estimate. It is noteworthy that the RLS algorithm can also be derived from a total least squares minimisation framework [24], and its frequency variance decreases linearly and quadratically with the length of $\{x_n\}$ at low and high signal-to-noise ratio (SNR) conditions, respectively.
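The sketch below shows one possible Python implementation of the DFE recursion (9) together with the cosine look-up table described above; the function name, default values and nearest-neighbour table indexing are illustrative assumptions rather than the authors' code.

    import numpy as np

    def dfe(x, mu=0.002, w0=0.5 * np.pi, L=1000):
        """Sketch of the DFE update (9) using the pre-stored cosine vector
        [1, cos(pi/L), ..., cos(pi(L-1)/L)]; names and defaults are illustrative."""
        cos_table = np.cos(np.pi * np.arange(L) / L)
        w = w0                                          # current estimate of omega
        w_hat = np.full(len(x), w0)
        for n in range(2, len(x)):
            idx = min(max(int(round(w * L / np.pi)), 0), L - 1)
            c = cos_table[idx]                          # one look-up per sample
            e = x[n] - (2.0 * c * x[n - 1] - x[n - 2])  # prediction error (4)
            w -= mu * e * ((x[n] + x[n - 2]) * c + x[n - 1])  # LMS update (9)
            w_hat[n] = w
        return w_hat

Near convergence, the quantity $e_n^2/(2[2 + \cos(2\hat{\omega}_n)])$ from (7) can additionally be accumulated inside the same loop to obtain a running estimate of the noise power $\sigma_q^2$.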

Prior to deriving the convergence behaviour of the frequency estimate, we evaluate the expected value of the learning increment of (9):

$$\begin{aligned}
&E\{e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&= E\{[s_n - 2s_{n-1}\cos(\hat{\omega}_n) + s_{n-2} + q_n - 2\cos(\hat{\omega}_n)q_{n-1} + q_{n-2}]\\
&\quad\times[(s_n + s_{n-2} + q_n + q_{n-2})\cos(\hat{\omega}_n) + s_{n-1} + q_{n-1}]\}\\
&= \sigma_s^2[(1 + \cos(2\omega))\cos(\hat{\omega}_n) + \cos(\omega) - 4\cos(\omega)\cos^2(\hat{\omega}_n) - 2\cos(\hat{\omega}_n)\\
&\quad + (\cos(2\omega) + 1)\cos(\hat{\omega}_n) + \cos(\omega)] + \sigma_q^2[\cos(\hat{\omega}_n) - 2\cos(\hat{\omega}_n) + \cos(\hat{\omega}_n)]\\
&= 2\sigma_s^2[\cos(\hat{\omega}_n)\cos(2\omega) - \cos(\omega)\cos(2\hat{\omega}_n)] \qquad (10)
\end{aligned}$$

Obviously, $\hat{\omega}_n = \omega$ is a stationary point of (10). Moreover, the derivative of (10) with respect to $\hat{\omega}_n$ at $\hat{\omega}_n = \omega$ is easily shown to be

$$\frac{\partial\, 2\sigma_s^2[\cos(\hat{\omega}_n)\cos(2\omega) - \cos(\omega)\cos(2\hat{\omega}_n)]}{\partial \hat{\omega}_n}\bigg|_{\hat{\omega}_n = \omega} = 2\sigma_s^2\sin(\omega)(2\cos^2(\omega) + 1) \qquad (11)$$

which is always positive for $\omega \in (0, \pi)$. As a result, the local stability of the algorithm is proved [29]. Substituting (10) into (9), we obtain the mean convergence trajectory of the frequency estimate as

$$\begin{aligned}
\hat{\omega}_{n+1} &= \hat{\omega}_n - 2\mu\sigma_s^2[\cos(\hat{\omega}_n)\cos(2\omega) - \cos(\omega)\cos(2\hat{\omega}_n)]\\
&= \hat{\omega}_n - 2\mu\sigma_s^2\Big[\sin\Big(\frac{\hat{\omega}_n - \omega}{2}\Big)\sin\Big(\frac{3(\hat{\omega}_n + \omega)}{2}\Big) + \sin\Big(\frac{\hat{\omega}_n + \omega}{2}\Big)\sin\Big(\frac{3(\hat{\omega}_n - \omega)}{2}\Big)\Big] \qquad (12)
\end{aligned}$$

Considering local convergence when $\hat{\omega}_n$ approaches $\omega$, (12) can be approximated as

$$\begin{aligned}
\hat{\omega}_{n+1} &\approx \hat{\omega}_n - \mu\sigma_s^2(\hat{\omega}_n - \omega)\Big[\sin\Big(\frac{3(\hat{\omega}_n + \omega)}{2}\Big) + 3\sin\Big(\frac{\hat{\omega}_n + \omega}{2}\Big)\Big]\\
&= \hat{\omega}_n(1 - \mu\sigma_s^2 g(\hat{\omega}_n)) + \mu\sigma_s^2\,\omega\, g(\hat{\omega}_n) \qquad (13)
\end{aligned}$$


where

$$g(\hat{\omega}_n) \triangleq \sin\Big(\frac{3(\hat{\omega}_n + \omega)}{2}\Big) + 3\sin\Big(\frac{\hat{\omega}_n + \omega}{2}\Big) \qquad (14)$$

A closed form expression for $\hat{\omega}_n$ is not available because the geometric ratio $(1 - \mu\sigma_s^2 g(\hat{\omega}_n))$ changes at each iteration, but the convergence trajectory can easily be acquired from (13) by brute force. Nevertheless, some observations can be made from (13). First, the mean convergence rate of $\hat{\omega}_n$ is independent of the noise level. To ensure convergence and stability, $\mu$ should be chosen so that $|1 - \mu\sigma_s^2 g(\hat{\omega}_n)| < 1$ is satisfied. Since $0 < g(\hat{\omega}_n) < 4$, the bound for $\mu$ can thus be computed from $|1 - 4\mu\sigma_s^2| < 1$, which gives $0 < \mu < 1/(2\sigma_s^2)$. In addition, the algorithm has a time-varying time constant of $1/(2\mu\sigma_s^2 g(\hat{\omega}_n))$. Considering $\hat{\omega}_n \to \omega$, it is found that $g(\hat{\omega}_n)$ approaches zero at $\omega = 0$ or $\omega = \pi$, which implies that the steady state learning rate of $\hat{\omega}_n$ is fairly slow if the frequency is close to one of these extreme values.
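The mean trajectory and the step size bound can be explored numerically; the following sketch (with illustrative values of $\omega$, $\sigma_s^2$ and $\mu$ that are not taken from the paper) simply iterates (13) and (14) by brute force, as suggested above.

    import numpy as np

    # Mean convergence trajectory of (13)-(14) under illustrative settings.
    w, sigma_s2, mu = 0.3 * np.pi, 0.5, 0.002
    assert 0.0 < mu < 1.0 / (2.0 * sigma_s2)   # bound from |1 - 4*mu*sigma_s2| < 1

    def g(w_hat):                              # g(w_hat_n) of (14)
        return np.sin(1.5 * (w_hat + w)) + 3.0 * np.sin(0.5 * (w_hat + w))

    w_hat = 0.5 * np.pi                        # initial estimate
    for _ in range(5000):
        w_hat = w_hat * (1.0 - mu * sigma_s2 * g(w_hat)) + mu * sigma_s2 * w * g(w_hat)
    print(w_hat / np.pi)                       # approaches 0.3, i.e. the true frequency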

Assuming that $q_n$ is Gaussian distributed and using (9) again, the steady state frequency variance of the DFE algorithm, denoted by $\mathrm{var}(\hat{\omega})$, is derived as (see the Appendix)

$$\mathrm{var}(\hat{\omega}) \triangleq \lim_{n\to\infty} E\{(\hat{\omega}_n - \omega)^2\} \approx \frac{\mu\sigma_q^2}{2\,\mathrm{SNR}\sin(\omega)}\Big[\frac{\cos(4\omega)}{2 + \cos(2\omega)} + 1\Big] \qquad (15)$$

where $\mathrm{SNR} = \sigma_s^2/\sigma_q^2$. It can be seen that $\mathrm{var}(\hat{\omega})$ is proportional to $\mu$ and $\sigma_q^2$ and inversely proportional to SNR. Investigating the term $[\cos(4\omega)/(2 + \cos(2\omega)) + 1]/\sin(\omega)$ reveals that the frequency variance approaches its minimum value of $0.32\mu\sigma_q^2/\mathrm{SNR}$ when $\omega \approx 0.28\pi$ or $\omega \approx 0.72\pi$, while it has a large value when $\omega$ is close to 0 or $\pi$. At $\omega = 0$ and $\omega = \pi$, $\mathrm{var}(\hat{\omega}) \to \infty$, although the variance of $\hat{\omega}_n$ is zero in the absence of noise. From (13) and (15), the choice of $\mu$ should be a trade-off between a fast convergence rate and a small variance, as in the standard LMS algorithm [9]. We also note that the estimation performance of the DFE is relatively poor when $\omega$ is close to 0 or $\pi$ because both the convergence time and the variance are large.
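For reference, the theoretical variance (15) is easy to evaluate over a grid of frequencies; the snippet below (with illustrative parameter values) also locates its minimum of about $0.32\mu\sigma_q^2/\mathrm{SNR}$ near $\omega = 0.28\pi$.

    import numpy as np

    def dfe_variance(w, mu, snr, sigma_q2):
        """Steady-state frequency variance predicted by (15)."""
        return (mu * sigma_q2 / (2.0 * snr * np.sin(w))
                * (np.cos(4.0 * w) / (2.0 + np.cos(2.0 * w)) + 1.0))

    w = np.linspace(0.05 * np.pi, 0.95 * np.pi, 500)
    v = dfe_variance(w, mu=0.002, snr=10.0, sigma_q2=0.1)
    print(w[np.argmin(v)] / np.pi, v.min())   # about 0.28 and 0.32*mu*sigma_q2/SNR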

On the other hand, the steady state variance of the noise power estimate obtained using (7) is given by

$$\mathrm{var}(z^2) \triangleq \lim_{n\to\infty} E\{(z_n^2 - \sigma_q^2)^2\} \approx 2\sigma_q^4 \qquad (16)$$

It is interesting to note that $\mathrm{var}(z^2)$ depends on neither $\mu$ nor $\omega$.

3 Simulation results

Computer simulations had been conducted to evaluate thesinusoidal frequency estimation performance of the DFE inthe presence of white Gaussian noise for non-stationaryconditions. Comparisons with two LMS-style frequencyestimators which are claimed to provide unbiased esti-mation, namely, the adaptive Pisarenko’s algorithm [8] andadaptive IIR-BPF [17] were also made. For each iteration,the former requires ten multiplications, four divisions, fiveadditions, one look-up and one square root operation whilethe latter needs six multiplications, six additions and onelook-up operation. The signal power was unity, whichcorresponded to a ¼

ffiffiffi2

pand we scaled the noise sequence

to obtain different SNRs. The length of the cosine vector Lwas chosen to be 1000 and this provided a frequency

resolution of p=1000 rad=s: The initial frequency estimatesof all three methods were set to be 0:5p rad=s and the 3 dBbandwidth coefficient in [17] was b ¼ 0:5: All the resultswere based on 100 independent runs.
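A minimal Monte-Carlo sketch of this setup, assuming one fixed test frequency and the DFE update written out inline, might look as follows; the chosen frequency, record length and random seed are arbitrary.

    import numpy as np

    # Monte-Carlo sketch: unit-power tone (a = sqrt(2)), scaled white
    # Gaussian noise, 100 independent runs, one fixed test frequency.
    rng = np.random.default_rng(0)
    w_true, snr_db, mu, runs, N = 0.3 * np.pi, 10.0, 0.002, 100, 5000
    sigma_q = np.sqrt(10.0 ** (-snr_db / 10.0))   # noise std for unit signal power

    msfe = 0.0
    for _ in range(runs):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        n = np.arange(N)
        x = np.sqrt(2.0) * np.cos(w_true * n + phi) + sigma_q * rng.standard_normal(N)
        w = 0.5 * np.pi                           # initial frequency estimate
        for k in range(2, N):
            e = x[k] - (2.0 * np.cos(w) * x[k - 1] - x[k - 2])
            w -= mu * e * ((x[k] + x[k - 2]) * np.cos(w) + x[k - 1])
        msfe += (w - w_true) ** 2 / runs
    print(msfe)   # compare with the steady-state variance predicted by (15)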

In the first example, $\omega$ was a piecewise constant function and the SNR was 10 dB. The actual frequency had a value of $0.95\pi$ rad/s during the first 4000 iterations and then changed instantaneously to $0.55\pi$ rad/s and to $0.3\pi$ rad/s at the 4000th and the 8000th iteration, respectively. The step size parameters of the DFE and the adaptive Pisarenko's algorithm were chosen to be 0.002, while that of the adaptive IIR-BPF was 0.005. Figure 1 shows the trajectories of the frequency estimates of the three algorithms in tracking this time-varying frequency. It can be seen that $\hat{\omega}_n$ converged to the desired values at approximately the 2000th, 5300th and 9000th iterations. The convergence time for $\omega = 0.95\pi$ rad/s was almost double that for $\omega = 0.3\pi$ rad/s because, upon convergence, the term $\sin(3(\hat{\omega}_n + \omega)/2) + 3\sin((\hat{\omega}_n + \omega)/2)$ approached 0.16 and 3.4 in the former and the latter case, respectively. In addition, we observe that (13) predicted the learning behaviour of the frequency estimate accurately. On the other hand, the adaptive Pisarenko's algorithm also estimated the step-changing frequency accurately but with different convergence behaviour, while the adaptive IIR-BPF was incapable of tracking the true frequency after the 4000th iteration. Figure 2 plots a time segment of $x_n$ to illustrate visually the difficulty of accurate frequency estimation.

Fig. 1 Frequency estimates for step changes in frequency at SNR = 10 dB

Fig. 2 Time segment of x(n)

As mentioned in Section 2, the modified cost function $E\{z_n^2\}$ has two maxima in $(0, \pi/3)$ and $(2\pi/3, \pi)$, which implies that the algorithm cannot achieve global convergence when there is a very large abrupt change in frequency. Nevertheless, in this case the frequency estimate will converge to a value outside its admissible range of $(0, \pi)$. As a result, a simple solution is to set $\hat{\omega}_n = \pi/2$, or more generally any value in $(\pi/3, 2\pi/3)$, whenever $\hat{\omega}_n < 0$ or $\hat{\omega}_n > \pi$. Figure 3 shows the tracking performance of the DFE for very large step changes in frequency at SNR = 10 dB with the use of the above suggestion, namely, $\hat{\omega}_n$ was set to $\pi/2$ whenever its value was outside $(0, \pi)$. The actual frequency had a value of $0.05\pi$ rad/s during the first 4000 iterations and then changed instantaneously to $0.95\pi$ rad/s and back to $0.05\pi$ rad/s at the 4000th and the 8000th iteration, respectively. We observe that, with this slight modification, the proposed algorithm tracked the step-changing frequency accurately.
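A sketch of this re-initialisation rule, with an illustrative function name, is simply:

    import numpy as np

    def reinitialise(w_hat):
        """Reset rule described above: if the estimate leaves (0, pi),
        restart it at pi/2 (any value in (pi/3, 2*pi/3) would also do)."""
        return 0.5 * np.pi if (w_hat <= 0.0 or w_hat >= np.pi) else w_hat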

A comprehensive test was then performed for a wide range of $\omega \in [0.05\pi, 0.95\pi]$ rad/s at SNR = 10 dB, and the steady state mean square frequency errors (MSFEs) of the three algorithms were measured and plotted in Fig. 4. To provide a fair comparison, $\mu$ was fixed at 0.002 while the step sizes of [8] and [17] were adjusted such that their convergence times were approximately identical for each tested frequency. It is seen that the measured MSFEs of the DFE agreed with their theoretical values, particularly when $\omega$ was close to $0.5\pi$ rad/s. Furthermore, the value of $\mathrm{var}(\hat{\omega})$ was bounded by $2.2 \times 10^{-5}$ rad²/s² for $\omega \in [0.2\pi, 0.8\pi]$ rad/s. Interestingly, the frequency dependence of the MSFEs was similar to that of the adaptive Pisarenko's method, but the DFE had smaller variances in all cases. Although [17] gave the best performance for $\omega < 0.2\pi$ and $\omega > 0.8\pi$, it had much larger MSFEs at other frequencies, particularly when $\omega$ was close to $0.5\pi$ rad/s, and it failed to work at this frequency. This test was repeated for a lower SNR of 3 dB and the results, plotted in Fig. 5, led to similar observations.

Figures 6 and 7 show the estimated noise power using (7) and its variance, respectively, for different frequencies at SNR = 10 dB. Along the frequency axis, the estimated noise power and its variance fluctuated around their nominal values, with minimum and maximum values of $9.90 \times 10^{-2}$ and $1.01 \times 10^{-1}$, and $1.88 \times 10^{-2}$ and $2.08 \times 10^{-2}$, respectively. This implies that (7) estimated $\sigma_q^2$ accurately for all frequencies, while the mean square errors of the noise power estimates agreed with (16) and, as expected, their frequency dependence was negligible.

Fig. 3 Frequency estimate of DFE for very large step changes in frequency at SNR = 10 dB

Fig. 4 Frequency variances at SNR = 10 dB

Fig. 5 Frequency variances at SNR = 3 dB

Fig. 6 Estimate of noise power at SNR = 10 dB

Fig. 7 Estimate of var(z²) at SNR = 10 dB

Fig. 8 Frequency estimate of DFE for a BPSK signal at SNR = 10 dB

Figure 8 demonstrates the carrier frequency estimation performance for a noisy binary phase-shift keying (BPSK) signal, whose amplitude and phase were both nonstationary, at SNR = 10 dB. The baud rate and the carrier frequency of the BPSK signal were selected as $0.05\pi$ rad/s and $0.25\pi$ rad/s, respectively, so that there were 40 samples per symbol. In this example, $\mu = 0.02$ was used to achieve fast convergence at the expense of a larger variance. We can see that the DFE algorithm converged at approximately the 100th iteration and an accurate estimate of the carrier frequency was obtained.
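One way to generate such a test signal is sketched below; the exact construction (unit signal power with $a = \sqrt{2}$, rectangular $\pm 1$ symbols and zero initial carrier phase) is an assumption for illustration, not necessarily the authors' setup.

    import numpy as np

    # Illustrative BPSK test signal: carrier 0.25*pi rad/s, baud rate
    # 0.05*pi rad/s (40 samples per symbol), unit signal power, SNR = 10 dB.
    rng = np.random.default_rng(1)
    N, sps, wc, snr_db = 4000, 40, 0.25 * np.pi, 10.0
    symbols = rng.choice([-1.0, 1.0], size=N // sps)
    b = np.repeat(symbols, sps)               # +/-1 modulation (0 or pi carrier phase)
    n = np.arange(N)
    sigma_q = np.sqrt(10.0 ** (-snr_db / 10.0))
    x = np.sqrt(2.0) * b * np.cos(wc * n) + sigma_q * rng.standard_normal(N)
    # x can now be fed to the DFE with mu = 0.02 to track the carrier frequency.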

4 Conclusions

A computationally attractive algorithm, called the DFE, has been proposed for tracking the frequency of a real sinusoid embedded in white noise. Using an LMS-style method, the frequency estimate is adjusted directly on a sample-by-sample basis. The learning behaviour and the mean square error of the estimated frequency in white Gaussian noise are derived and verified by computer simulations. It is shown that the DFE gives unbiased frequency estimates in several non-stationary conditions and has high frequency estimation accuracy when the frequency is close to neither 0 nor $\pi$. In addition, the DFE outperforms two existing LMS-style frequency estimators in terms of estimation accuracy, computational complexity and/or tracking capability. It is noteworthy that the proposed LMS algorithm will give biased frequency estimation for a linearly varying frequency because, in this case, the recurrence of (2) no longer holds exactly. Moreover, the algorithm development assumes white noise; if the noise is coloured with known spectrum, one possible solution is to filter the noisy sinusoid with a linear whitening filter that makes the noise component at the filter output white. Accurate frequency estimation in the scenarios of linearly varying signal frequency and/or unknown coloured noise will be our future research directions.

5 Acknowledgment

The authors would like to thank the anonymous reviewers for their helpful and constructive comments, which improved the clarity of the paper.

6 References

1 Kay, S.M.: 'Fundamentals of statistical signal processing: estimation theory' (Prentice-Hall, Englewood Cliffs, NJ, USA, 1993)
2 Stoica, P., and Moses, R.: 'Introduction to spectral analysis' (Prentice-Hall, Upper Saddle River, NJ, USA, 1997)
3 Quinn, B.G., and Hannan, E.J.: 'The estimation and tracking of frequency' (Cambridge University Press, 2001)
4 Kenefic, R.J., and Nuttall, A.H.: 'Maximum likelihood estimation of the parameters of a tone using real discrete data', IEEE J. Ocean. Eng., 1987, 12, (1), pp. 279-280
5 Pisarenko, V.F.: 'The retrieval of harmonics by linear prediction', Geophys. J. R. Astron. Soc., 1973, pp. 347-366
6 Stoica, P., and Eriksson, A.: 'MUSIC estimation of real-valued sine-wave frequencies', Signal Process., 1995, 42, pp. 139-146
7 Griffiths, L.J.: 'Rapid measurement of digital instantaneous frequency', IEEE Trans. Acoust. Speech Signal Process., 1975, 23, (2), pp. 207-222
8 Thompson, P.A.: 'An adaptive spectral analysis technique for unbiased frequency estimation in the presence of white noise'. Proc. 13th Asilomar Conf. on Circuits, Systems and Computing, Pacific Grove, CA, USA, Nov. 1979, pp. 529-533
9 Widrow, B., McCool, J., Larimore, M.G., and Johnson, C.R., Jr.: 'Stationary and nonstationary learning characteristics of the LMS adaptive filter', Proc. IEEE, 1976, 64, (8), pp. 1151-1162
10 Etter, D.M., and Hush, D.R.: 'A new technique for adaptive frequency estimation and tracking', IEEE Trans. Acoust. Speech Signal Process., 1987, 35, (4), pp. 561-564
11 Etter, D.M., and Stearns, S.D.: 'Adaptive estimation of time delays in sampled data systems', IEEE Trans. Acoust. Speech Signal Process., 1981, 29, (3), pp. 582-586
12 Dooley, S.R., and Nandi, A.K.: 'Fast frequency estimation and tracking using Lagrange interpolation', Electron. Lett., 1998, 34, (20), pp. 1908-1910
13 Cain, G.D., Murphy, N.P., and Tarczynski, A.: 'Evaluation of several FIR fractional-sample delay filters'. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Adelaide, Australia, 1994, pp. 621-624
14 Laakso, T.I., Valimaki, V., Karjalainen, M., and Laine, U.K.: 'Splitting the unit delay', Signal Process. Mag., 1996, 13, (1), pp. 30-60
15 Li, G.: 'A stable and efficient adaptive notch filter for direct frequency estimation', IEEE Trans. Signal Process., 1997, 45, (8), pp. 2001-2009
16 Bencheqroune, A., Benseddik, M., and Hajjari, A.: 'Tracking of time-varying frequency of sinusoidal signals', Signal Process., 1999, 78, pp. 191-199
17 Sheu, M., Liao, H., Kan, S., and Shieh, M.: 'A novel adaptive algorithm and VLSI design for frequency detection in noisy environment based on adaptive IIR filter'. Proc. IEEE Int. Symp. on Circuits and Systems, Sydney, Australia, May 2001, vol. 4, pp. 446-449
18 So, H.C., and Ching, P.C.: 'Analysis of an adaptive single-tone frequency estimation algorithm'. Proc. IASTED Int. Conf. on Signal and Image Processing, Las Vegas, NV, USA, Nov. 2000
19 So, H.C.: 'Least mean square algorithm for unbiased impulse response estimation'. Proc. 45th IEEE Midwest Symp. on Circuits and Systems, Tulsa, Oklahoma, USA, Aug. 2002, vol. 2, pp. 412-415
20 So, H.C., and Chan, Y.T.: 'Analysis of an LMS algorithm for unbiased impulse response estimation', IEEE Trans. Signal Process., 2003, 51, (7), pp. 2008-2013
21 So, H.C.: 'A closed form frequency estimator for a noisy sinusoid'. Proc. 45th IEEE Midwest Symp. on Circuits and Systems, Tulsa, Oklahoma, USA, Aug. 2002, vol. 2, pp. 160-163
22 So, H.C., and Ip, S.K.: 'A novel frequency estimator and its comparative performance for short record lengths'. Proc. XI European Signal Processing Conf., Toulouse, France, Sept. 2002, vol. 3, pp. 445-448
23 So, H.C.: 'A comparative study of three recursive least squares algorithms for single-tone frequency', Signal Process., 2003, 83, (9), pp. 2059-2062
24 So, H.C., and Chan, K.W.: 'Reformulation of Pisarenko harmonic decomposition method for single-tone frequency estimation', IEEE Trans. Signal Process., 2004, 52, (4), pp. 1128-1135
25 Prony, R.: 'Essai expérimental et analytique', J. École Polytechnique, Paris, 1795, pp. 24-76
26 Treichler, J.R.: 'γ-LMS and its use in a noise-compensating adaptive spectral analysis technique'. Proc. Int. Conf. on Acoustics, Speech and Signal Processing, April 1979, pp. 933-936
27 So, H.C.: 'Adaptive single-tone frequency estimation based on autoregressive model'. Proc. X European Signal Processing Conf., Tampere, Finland, Sept. 2000
28 Jaggi, S., and Martinez, A.B.: 'A modified autoregressive spectral estimator for a real sinusoid in white noise'. Proc. Southeastcon, 1989, pp. 467-469
29 Benveniste, A., Metivier, M., and Priouret, P.: 'Adaptive algorithms and stochastic approximations' (Springer-Verlag, 1990)

7 Appendix

The steady state mean square error of $\hat{\omega}_n$ is derived as follows. Subtracting $\omega$ from both sides of (9), squaring both sides, taking expectation and then considering $n \to \infty$ yields

$$2\lim_{n\to\infty} E\{(\hat{\omega}_n - \omega)\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\} = \mu \lim_{n\to\infty} E\{e_n^2[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]^2\} \qquad (17)$$

Suppose $\mu$ is chosen sufficiently small such that $\hat{\omega}_n \to \omega$ upon convergence. The component which involves both signal and noise in the RHS of (17) is approximated as

$$\begin{aligned}
&\mu E\big\{[q_n - 2\cos(\omega)q_{n-1} + q_{n-2}]^2\,[a\cos(\omega n + \phi)\cos(\omega) + a\cos(\omega(n-2) + \phi)\cos(\omega) + a\cos(\omega(n-1) + \phi)]^2\big\}\\
&= \mu E\big\{\big(q_n^2 + 4\cos^2(\omega)q_{n-1}^2 + q_{n-2}^2\big)\, a^2\big[\cos^2(\omega n + \phi)\cos^2(\omega) + \cos^2(\omega(n-2) + \phi)\cos^2(\omega) + \cos^2(\omega(n-1) + \phi)\\
&\quad + 2\cos(\omega n + \phi)\cos(\omega(n-2) + \phi)\cos^2(\omega) + 2\cos(\omega n + \phi)\cos(\omega(n-1) + \phi)\cos(\omega)\\
&\quad + 2\cos(\omega(n-1) + \phi)\cos(\omega(n-2) + \phi)\cos(\omega)\big]\big\}\\
&= 2\mu\sigma_s^2\sigma_q^2(\cos(2\omega) + 2)^3 \qquad (18)
\end{aligned}$$

Furthermore, the component due to noise only in the RHS of (17) can be estimated as

$$\begin{aligned}
&\mu E\big\{[q_n - 2\cos(\omega)q_{n-1} + q_{n-2}]^2\,[\cos(\omega)q_n + \cos(\omega)q_{n-2} + q_{n-1}]^2\big\}\\
&= \mu E\big\{\big(q_n^2 + 4\cos^2(\omega)q_{n-1}^2 + q_{n-2}^2 - 4\cos(\omega)q_n q_{n-1} + 2q_n q_{n-2} - 4\cos(\omega)q_{n-1}q_{n-2}\big)\\
&\quad\times\big(\cos^2(\omega)q_n^2 + \cos^2(\omega)q_{n-2}^2 + q_{n-1}^2 + 2\cos^2(\omega)q_n q_{n-2} + 2\cos(\omega)q_n q_{n-1} + 2\cos(\omega)q_{n-1}q_{n-2}\big)\big\}\\
&= 2\mu\sigma_q^4(\cos(2\omega) + 2)^2 \qquad (19)
\end{aligned}$$

On the other hand,

$$\begin{aligned}
&\lim_{n\to\infty} E\{(\hat{\omega}_n - \omega)\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&= \lim_{n\to\infty} E\{(\hat{\omega}_{n-2} - \omega)\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&\quad - \mu\sum_{i=1}^{2}\lim_{n\to\infty} E\{e_{n-i}[(x_{n-i} + x_{n-i-2})\cos(\hat{\omega}_{n-i}) + x_{n-i-1}]\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&\approx \lim_{n\to\infty} E\{(\hat{\omega}_n - \omega)\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&\quad - \mu\sum_{i=1}^{2}\lim_{n\to\infty} E\{e_{n-i}[(x_{n-i} + x_{n-i-2})\cos(\hat{\omega}_n) + x_{n-i-1}]\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\} \qquad (20)
\end{aligned}$$

With the use of (13), the first term of (20) can be evaluated as

$$\begin{aligned}
&\lim_{n\to\infty} E\{(\hat{\omega}_n - \omega)\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&= 2\sigma_s^2\lim_{n\to\infty} E\Big\{(\hat{\omega}_n - \omega)\Big[\sin\Big(\frac{\hat{\omega}_n - \omega}{2}\Big)\sin\Big(\frac{3(\hat{\omega}_n + \omega)}{2}\Big) + \sin\Big(\frac{\hat{\omega}_n + \omega}{2}\Big)\sin\Big(\frac{3(\hat{\omega}_n - \omega)}{2}\Big)\Big]\Big\}\\
&\approx \sigma_s^2\lim_{n\to\infty} E\Big\{(\hat{\omega}_n - \omega)^2\Big[\sin\Big(\frac{3(\hat{\omega}_n + \omega)}{2}\Big) + 3\sin\Big(\frac{\hat{\omega}_n + \omega}{2}\Big)\Big]\Big\}\\
&\approx \sigma_s^2\,\mathrm{var}(\hat{\omega})\,(\sin(3\omega) + 3\sin(\omega)) \qquad (21)
\end{aligned}$$

It can also be shown that

$$\begin{aligned}
&\lim_{n\to\infty} E\{e_{n-1}[(x_{n-1} + x_{n-3})\cos(\hat{\omega}_n) + x_{n-2}]\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&\approx E\{[q_{n-1} - 2\cos(\omega)q_{n-2} + q_{n-3}][(x_{n-1} + x_{n-3})\cos(\omega) + x_{n-2}]\\
&\quad\times[q_n - 2\cos(\omega)q_{n-1} + q_{n-2}][(x_n + x_{n-2})\cos(\omega) + x_{n-1}]\}\\
&= -4\sigma_s^2\sigma_q^2\cos^2(\omega)(\cos(2\omega) + 2)^2 + \sigma_q^4(4\cos^4(\omega) - 12\cos^2(\omega) + 1) \qquad (22)
\end{aligned}$$

and

$$\begin{aligned}
&\lim_{n\to\infty} E\{e_{n-2}[(x_{n-2} + x_{n-4})\cos(\hat{\omega}_n) + x_{n-3}]\, e_n[(x_n + x_{n-2})\cos(\hat{\omega}_n) + x_{n-1}]\}\\
&\approx E\{(q_{n-2} - 2\cos(\omega)q_{n-3} + q_{n-4})[(x_{n-2} + x_{n-4})\cos(\omega) + x_{n-3}]\\
&\quad\times(q_n - 2\cos(\omega)q_{n-1} + q_{n-2})[(x_n + x_{n-2})\cos(\omega) + x_{n-1}]\}\\
&= \sigma_s^2\sigma_q^2(2\cos^2(\omega) - 1)(\cos(2\omega) + 2)^2 + 2\sigma_q^4\cos^2(\omega) \qquad (23)
\end{aligned}$$

Substituting (18)–(23) into (17) and after simplification, we obtain (15).
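As a sanity check that is not part of the paper, the Gaussian fourth-moment terms used above can be verified by simulation; the snippet below checks the noise-only term (19) (divided by $\mu$) for an arbitrary frequency.

    import numpy as np

    # Monte-Carlo check of the noise-only fourth-moment term of (19), divided by mu.
    rng = np.random.default_rng(0)
    w, sigma_q, N = 0.4 * np.pi, 1.0, 2_000_000
    c = np.cos(w)
    q = sigma_q * rng.standard_normal(N)
    u = q[2:] - 2.0 * c * q[1:-1] + q[:-2]      # e_n at convergence (noise only)
    v = c * q[2:] + q[1:-1] + c * q[:-2]        # noise component of the regressor
    empirical = np.mean(u ** 2 * v ** 2)
    theory = 2.0 * sigma_q ** 4 * (np.cos(2.0 * w) + 2.0) ** 2
    print(empirical, theory)                     # the two values agree closely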


