AD-A193 130

Naval Research Laboratory
Washington, DC 20375-5000

NRL Report 9056

Gram-Schmidt Implementation of a Linearly Constrained Adaptive Array

KARL GERLACH

Target Characteristics Branch
Radar Division

February 26, 1988

Approved for public release; distribution unlimited.

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

1a. Report Security Classification: UNCLASSIFIED
3.  Distribution/Availability of Report: Approved for public release; distribution unlimited.
4.  Performing Organization Report Number: NRL Report 9056
6a. Performing Organization: Naval Research Laboratory
6c. Address: Washington, DC 20375-5000
7a. Monitoring Organization: Space and Naval Warfare Systems Command
7b. Address: Washington, DC 20363-5100
8a. Funding/Sponsoring Organization: Chief of Naval Research
8c. Address: Arlington, VA 22217-5000
10. Source of Funding Numbers: Program Element No. 62712N (project and work unit accession numbers are listed on the continuation sheet below)
11. Title (Include Security Classification): Gram-Schmidt Implementation of a Linearly Constrained Adaptive Array
12. Personal Author(s): Gerlach, Karl
13a. Type of Report: Interim
14. Date of Report: 26 February 1988
15. Page Count: 25
18. Subject Terms: Adaptive filter; Radar; Adaptive cancellation
19. Abstract: A Gram-Schmidt (GS) implementation of the linearly constrained adaptive algorithm proposed by Frost is developed. This implementation is shown to be equivalent to the technique developed by Jim, Griffiths, and Buckley, whereby the constrained problem is reduced to an unconstrained problem. In addition, analytical results are presented for the convergence rate when the Sampled Matrix Inversion (SMI) algorithm is employed. It had been previously shown that the steady-state solution for the optimal weights is identical for both the constrained and reduced unconstrained problems. This report also shows that if the SMI or GS algorithms are employed, then the transient weighting vector solution for the constrained problem is identical to the equivalent transient weighting vector solution for the reduced unconstrained implementation.
21. Abstract Security Classification: UNCLASSIFIED
22a. Name of Responsible Individual: Dr. Karl Gerlach
22b. Telephone (Include Area Code): (202) 767-3599
22c. Office Symbol: 5340.1G

DD Form 1473, JUN 86

10. SOURCE OF FUNDING NUMBERS (continued)

    Project No.       Work Unit Accession No.
    RS12-131-001      DN480-549
    XF12-141-100      DN380-101

CONTENTS

I. INTRODUCTION
II. GS CANCELLERS
III. NORMALIZED GS CANCELLER
IV. NORMALIZED FAST ORTHOGONALIZATION NETWORK (FON)
V. NORMALIZED REDUCED FONS
VI. GS IMPLEMENTATION
VII. SINGLE CONSTRAINT IMPLEMENTATION EXAMPLE
VIII. SIMPLIFIED MULTIPLE CONSTRAINT IMPLEMENTATION
IX. THE AUGMENTED CONSTRAINT MATRIX
X. CONVERGENCE RATE
XI. REFERENCES

GRAM-SCHMIDT IMPLEMENTATION OF A LINEARLY CONSTRAINED ADAPTIVE ARRAY

I. INTRODUCTION

Unconstrained adaptive antenna arrays for certain external noise interference scenarios can result in the cancellation of a desired signal along with the cancellation of the interfering signals. Frost [1,2] introduced a constrained optimization procedure such that certain main beam antenna properties are maintained during the adaptation process, thus preserving the desired signal.

Consider an N-input adaptive antenna array as shown in Fig. 1. We define a vector of weights w as

    w = (w_1, w_2, ..., w_N)^T,

where T denotes the vector transpose operation. Frost [1] shows that if the weights are constrained to satisfy the following linear constraint equation

    C^† w = f,    (1)

where

M is the number of constraint equations,
C is the N × M constraint matrix,
f is the M × 1 column constraint vector,

and the superscript † denotes the conjugate transpose operation, then the average output noise power residue of z = w^† x is minimized if

    w = R_x^{-1} C (C^† R_x^{-1} C)^{-1} f,    (2)

where

    x = (x_1, x_2, ..., x_N)^T,

R_x = E[x x^†] is the input covariance matrix, and E[·] denotes the expected value.

In this report we develop an open-loop Gram-Schmidt (GS) implementation of w^† x, where w is defined by Eq. (2), or equivalently an implementation such that

    z = f^† (C^† R_x^{-1} C)^{-1} C^† R_x^{-1} x.    (3)
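The constrained optimum of Eqs. (2) and (3) is easy to compute directly with general-purpose linear algebra. The short sketch below is an illustration added for this edition, not part of the original report; the example dimensions, covariance, constraint matrix, and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 2                                    # example array size and number of constraints

# Example Hermitian, positive-definite covariance R_x and one data snapshot x
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A @ A.conj().T + np.eye(N)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Linear constraints C^† w = f (Eq. (1)); C is N x M, f is M x 1
C = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
f = np.array([1.0, 0.0])

# Eq. (2): w = R_x^{-1} C (C^† R_x^{-1} C)^{-1} f
Rinv = np.linalg.inv(R_x)
w = Rinv @ C @ np.linalg.solve(C.conj().T @ Rinv @ C, f)

# The constraints are met, and Eq. (3) gives the same output z = w^† x
assert np.allclose(C.conj().T @ w, f)
z = w.conj() @ x
z_eq3 = f.conj() @ np.linalg.solve(C.conj().T @ Rinv @ C, C.conj().T @ Rinv @ x)
assert np.allclose(z, z_eq3)
print(z)
```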

The GS implementation of a linearly constrained adaptive array offers many advantages. The GS open-loop technique has been shown to yield superior performance simultaneously in arithmetic efficiency, stability, and convergence rate [3-7] over other adaptive algorithms. In particular, the stability of the GS algorithm is enhanced because it does not require the calculation of an inverse covariance matrix, as does the Sample Matrix Inversion (SMI) algorithm [8]. Also, the GS canceller algorithm is very suitable for a nonstationary noise environment because the adaptive weights can be updated in a numerically efficient manner, using "sliding window" or systolic techniques on the input data instead of "batch" processing.

Manuscript approved June 18, 1987.

Fig. 1 - Adaptive array

Jim, Griffiths, and Buckley [9,10,11] have shown that the constrained minimization problem can be reduced to an unconstrained implementation called the generalized sidelobe canceller (GSC). In this report, we develop an equivalent implementation of this technique. It had been previously shown that the steady-state solution for the optimal weights is identical for both the constrained and reduced unconstrained problems. In this report, it is shown that if the SMI or GS algorithms are employed, then the transient weighting vector solution for the constrained problem is identical to the equivalent transient weighting vector solution for the reduced unconstrained implementation. In Sections II through V, we develop the basic building blocks for the GS implementation of the linearly constrained adaptive array. In Section VI, the implementation is presented. In Section VII, the special case when there is only one constraint is discussed. In Section VIII, the multiple constraint implementation is significantly simplified. In Section IX, the Jim, Griffiths, and Buckley implementation is derived. Finally, in Section X, analytical results are presented for the convergence rate of the constrained minimization implementation when the SMI algorithm is employed.

II. GS CANCELLERS

Consider the general N-input open-loop GS canceller structure as seen in Fig. 2(a). Let x_1, x_2, ..., x_N represent the complex data in the 1st, 2nd, ..., Nth channels, respectively. We call the leftmost input (x_1) the main channel, and we call the remaining N - 1 inputs the auxiliary channels. The canceller operates so as to decorrelate the auxiliary inputs one at a time from the other inputs by using the basic two-input GS processor shown in Fig. 2(b). For example, as seen in Fig. 2(a), in the first level of decomposition, x_N is decorrelated with x_1, x_2, ..., x_{N-1}. Next, the output channel that results from decorrelating x_N with x_{N-1} is decorrelated with the other outputs of the first-level GSs. The decomposition proceeds until a final output channel is generated. If the decorrelation weights in each of the two-input GSs are computed from an infinite number of input samples, then this output channel is totally decorrelated with the inputs x_2, x_3, ..., x_N.

Let x_n^{(m)} represent the outputs of the two-input GSs at the (m - 1)th level. Then the outputs of the two-input GSs at the mth level are given by

    x_n^{(m+1)} = x_n^{(m)} - w_n^{(m)} x_{N-m+1}^{(m)},    n = 1, 2, ..., N - m;  m = 1, 2, ..., N - 1.    (4)

Fig. 2(a) - GS structure

Fig. 2(b) - Basic two-input GS canceller

Note that x_n^{(1)} = x_n. The weight w_n^{(m)}, seen in Eq. (4), is computed so as to decorrelate x_n^{(m+1)} with x_{N-m+1}^{(m)}. For K input samples per channel, this weight is estimated as

    w_n^{(m)} = [ Σ_{k=1}^{K} x_n^{(m)}(k) (x_{N-m+1}^{(m)}(k))^* ] / [ Σ_{k=1}^{K} | x_{N-m+1}^{(m)}(k) |^2 ],    (5)

where * denotes the complex conjugate and |·| is the complex magnitude. Here k indexes the time-sampled data.
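The recursion of Eqs. (4) and (5) maps directly into a few lines of code. The sketch below is an illustrative addition (not from the report); it stores the K snapshots of all N channels in a K x N array, sweeps through the levels of Fig. 2(a), and returns the final output channel.

```python
import numpy as np

def gs_canceller(X):
    """Open-loop GS canceller of Fig. 2(a).

    X : (K, N) complex array whose column 0 is the main channel and whose
        columns 1 .. N-1 are the auxiliary channels.  Returns the K samples
        of the final output channel, i.e., the main channel with the sample
        correlation to every auxiliary channel removed.
    """
    X = np.array(X, dtype=complex)
    K, N = X.shape
    for m in range(1, N):                 # levels m = 1, ..., N-1 of Eq. (4)
        aux = X[:, N - m].copy()          # channel x_{N-m+1}^{(m)} used at this level
        denom = np.sum(np.abs(aux) ** 2)
        for n in range(N - m):            # update channels n = 1, ..., N-m
            w = np.sum(X[:, n] * aux.conj()) / denom       # Eq. (5)
            X[:, n] -= w * aux                              # Eq. (4)
    return X[:, 0]

# Demonstration: the output is sample-orthogonal to every auxiliary input.
rng = np.random.default_rng(1)
K, N = 2000, 4
X_in = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X_in[:, 0] += 0.7 * X_in[:, 1] - 0.3j * X_in[:, 3]   # correlate main with two auxiliaries
out = gs_canceller(X_in)
print([abs(np.vdot(X_in[:, n], out)) / K for n in range(1, N)])   # all approximately zero
```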

We simplify the N-input GS canceller structure by the representation as seen in Fig. 3.

Fig. 3 - GS representation

III. NORMALIZED GS CANCELLER

The GS canceller, as represented in Fig. 3, effectively weights the vector x with a vector w = (w_1, w_2, ..., w_N)^T such that y = w^† x, where

    w = μ R_x^{-1} (1, 0, ..., 0)^T,    (6)

w_1 = 1, and μ is a scalar constant to be determined. Because w_1 = 1, the scalar constant μ has a specific value. If

    R_x^{-1} = (r^{(n,m)}),    n, m = 1, 2, ..., N,    (7)


where r^{(n,m)} are the elements of R_x^{-1}, then we can show

    μ = 1/r^{(1,1)}.    (8)

Hence

    w = (1/r^{(1,1)}) R_x^{-1} (1, 0, ..., 0)^T.    (9)

Consider a configuration where we normalize the output y by the average power of y, as seen in Fig. 4. Note that the average power after normalization is not one. Thus

    z = (1 / E[|y|^2]) w^† x.    (10)

However,

    E[|y|^2] = E[|w^† x|^2]
             = w^† E[x x^†] w
             = (1/r^{(1,1)})^2 (1, 0, ..., 0) R_x^{-1} R_x R_x^{-1} (1, 0, ..., 0)^T
             = (1/r^{(1,1)})^2 (1, 0, ..., 0) R_x^{-1} (1, 0, ..., 0)^T.    (11)

Fig. 4 - Normalized GS canceller

Now since R_x is Hermitian, R_x^{-1} is Hermitian and r^{(1,1)} is real. Thus

    E[|y|^2] = 1/r^{(1,1)}.    (12)

Hence

    z = r^{(1,1)} w^† x,    (13)

or substituting Eq. (9) into Eq. (13),

    z = (1, 0, ..., 0) R_x^{-1} x.    (14)
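Eqs. (6) through (14) can be verified numerically in a few lines. The following sketch is an illustrative addition with assumed example values; it checks that the effective weight has a unit main-channel element, that the pre-normalization output power is 1/r^{(1,1)}, and that the normalized output reduces to the first row of R_x^{-1} applied to x.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A @ A.conj().T + np.eye(N)            # Hermitian, positive-definite covariance
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

Rinv = np.linalg.inv(R_x)
e1 = np.eye(N)[:, 0]
r11 = Rinv[0, 0].real                        # r^(1,1), real because R_x^{-1} is Hermitian

w = (1.0 / r11) * Rinv @ e1                  # Eqs. (8)-(9)
assert np.isclose(w[0], 1.0)                 # main-channel weight equals one

Ey2 = (w.conj() @ R_x @ w).real              # E|y|^2 = w^† R_x w, Eq. (11)
assert np.isclose(Ey2, 1.0 / r11)            # Eq. (12)

z = r11 * (w.conj() @ x)                     # Eq. (13)
assert np.allclose(z, (Rinv @ x)[0])         # Eq. (14): first row of R_x^{-1} times x
print(z)
```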

IV. NORMALIZED FAST ORTHOGONALIZATION NETWORK (FON)

A FON is a numerically efficient implementation of a complete GS network where each input is orthogonalized with every other input [12]. In essence, the FON implements the network seen in Fig. 5. The ordering of the input channels for decorrelation as seen in Fig. 5 was arbitrary. Reference 12 shows that the input channels can be ordered so as to greatly reduce the required number of arithmetic operations. If there were no logic behind choosing the ordering of the input channels, it can be shown that the number of weights that are calculated by using this decorrelation procedure is 0.5 N^2 (N - 1). In Ref. 12 an algorithm was developed that requires approximately 1.5 N (N - 1) weights for the same decorrelation process.

We represent the FON implementation of Fig. 5 as seen in Fig. 6. Consider an implementation where each output of the FON is normalized with respect to the power of that output, as shown in Fig. 7. We call this configuration a normalized FON. If we define z = (z_1, z_2, ..., z_N)^T, then we can show that the normalized FON is equivalent to multiplying the vector x by an N × N matrix of weights w such that

    z = w x.    (15)

Fig. 5 - Orthogonalization network

Fig. 6 - FON representation

Fig. 7 - Normalized FON


where

    w = R_x^{-1}.    (16)

Note that we used the methodology of the previous section. Hence

    z = R_x^{-1} x.    (17)
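One useful way to see Eq. (17) (an illustrative aside with assumed example values, not from the report): since E[z x^†] = R_x^{-1} E[x x^†] = I, each normalized FON output is uncorrelated with every input channel except its own.

```python
import numpy as np

rng = np.random.default_rng(8)
N, K = 5, 200000
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A @ A.conj().T + np.eye(N)

# K snapshots with covariance R_x (each row of X is one snapshot x^T)
W = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
X = W @ np.linalg.cholesky(R_x).T                    # E[x x^†] = R_x
Z = X @ np.linalg.inv(R_x).T                         # Eq. (17) applied to each snapshot

# Sample estimate of E[z x^†]; it approaches the identity matrix
print(np.round(Z.T @ X.conj() / K, 2))
```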

We represent a normalized FON as shown in Fig. 8.

Fig. 8 - Normalized FON representation

V. NORMALIZED REDUCED FONS

A normalized reduced FON orthogonalizes only a fraction of the inputs with respect to one another. For example, if we desire only to orthogonalize x_1, x_2, ..., x_M, where M < N, then a reduced FON would efficiently implement the configuration seen in Fig. 9. If z = (z_1, z_2, ..., z_M)^T, then we can show that a normalized reduced FON is equivalent to multiplying the vector x by an M × N matrix such that

    z = w x,    (18)

and

    w = I_{N,M}^T R_x^{-1},    (19)

where

    I_{N,M} = [ I_M ]
              [ 0   ]

is the N × M matrix whose upper M × M block is the M × M identity matrix and whose lower (N - M) × M block is zero. For example,

    I_{3,2} = [ 1  0 ]
              [ 0  1 ]
              [ 0  0 ].

Fig. 9 - Normalized reduced FON

Hence

    z = I_{N,M}^T R_x^{-1} x.    (20)

We represent a normalized reduced FON as seen in Fig. 10.

Fig. 10 - Normalized reduced FON representation
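In matrix terms, Eq. (20) simply applies the first M rows of R_x^{-1} to the input vector. The short check below is an illustrative addition (example values assumed, not from the report). Each entry z_m is the mth input decorrelated from all of the other N - 1 inputs and normalized, exactly as in the single-output case of Eq. (14).

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 6, 2
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A @ A.conj().T + np.eye(N)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

I_NM = np.eye(N)[:, :M]                     # the N x M selection matrix of Eq. (19)
z = I_NM.T @ np.linalg.inv(R_x) @ x         # Eq. (20)
assert np.allclose(z, (np.linalg.inv(R_x) @ x)[:M])
print(z)
```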

VI. GS IMPLEMENTATION

We now present the GS implementation of the linearly constrained adaptive array. First, we define an N × N augmented matrix, C_aug, such that

    C_aug = [C  D],    (21)


where D is an N × (N - M) matrix such that C_aug is nonsingular and hence invertible. We discuss the choice of D in more detail in Section IX.

The GS implementation of the linearly constrained adaptive array is shown in Fig. 11. Note that

    x = (x_1, x_2, ..., x_N)^T,
    u = (u_1, u_2, ..., u_N)^T,
    v = (v_1, v_2, ..., v_M)^T,    (22)
    y = (y_1, y_2, ..., y_M)^T,
    p = (p_1, p_2, ..., p_M)^T,
    z = scalar.

Fig. 11 - GS implementation of a linearly constrained adaptive array

From our preceding discussion of FONs and reduced FONs, we know that

    u = C_aug^{-1} x,    (23)
    v = I_{N,M}^T R_u^{-1} u,    (24)
    y = R_v^{-1} v,    (25)
    z = f^† y.    (26)

Now

    R_u = E[(C_aug^{-1} x)(C_aug^{-1} x)^†] = C_aug^{-1} R_x C_aug^{-†},    (27)

or

    R_u^{-1} = C_aug^† R_x^{-1} C_aug.    (28)

Hence

    v = I_{N,M}^T C_aug^† R_x^{-1} C_aug C_aug^{-1} x
      = I_{N,M}^T C_aug^† R_x^{-1} x.    (29)

We can show that

    C^† = I_{N,M}^T C_aug^†.    (30)

Thus

    v = C^† R_x^{-1} x.    (31)

Now

    R_v = E[(C^† R_x^{-1} x)(C^† R_x^{-1} x)^†]
        = C^† R_x^{-1} E[x x^†] R_x^{-1} C
        = C^† R_x^{-1} R_x R_x^{-1} C
        = C^† R_x^{-1} C.    (32)

Thus

    R_v^{-1} = (C^† R_x^{-1} C)^{-1}.    (33)

Hence

    y = (C^† R_x^{-1} C)^{-1} v
      = (C^† R_x^{-1} C)^{-1} C^† R_x^{-1} x.    (34)

Finally, by using Eq. (26),

    z = f^† (C^† R_x^{-1} C)^{-1} C^† R_x^{-1} x.    (35)

Note that this is the same weighting of x as given by Eq. (3).
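As a numerical check of Eqs. (23) through (35) (an illustrative addition; the example constraint matrix, the augmenting matrix D, and the covariance below are assumptions), the data can be pushed through the C_aug^{-1} transformation, the reduced FON, and the M x M FON, and the result compared with Eq. (3):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 6, 2
A0 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A0 @ A0.conj().T + np.eye(N)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
C = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
f = np.array([1.0, 0.5])
D = (rng.standard_normal((N, N - M)) + 1j * rng.standard_normal((N, N - M))) / np.sqrt(2)

C_aug = np.hstack([C, D])                             # Eq. (21); assumed nonsingular
I_NM = np.eye(N)[:, :M]
Rinv = np.linalg.inv(R_x)

u = np.linalg.solve(C_aug, x)                         # Eq. (23): u = C_aug^{-1} x
R_u = np.linalg.solve(C_aug, R_x) @ np.linalg.inv(C_aug).conj().T   # Eq. (27)
v = I_NM.T @ np.linalg.solve(R_u, u)                  # Eq. (24)
R_v = C.conj().T @ Rinv @ C                           # Eq. (32)
y = np.linalg.solve(R_v, v)                           # Eq. (25) with Eq. (33)
z = f.conj() @ y                                      # Eq. (26)

z_eq3 = f.conj() @ np.linalg.solve(R_v, C.conj().T @ Rinv @ x)      # Eq. (3)
assert np.allclose(z, z_eq3)
print(z)
```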

VII. SINGLE CONSTRAINT IMPLEMENTATION EXAMPLE

Note that when M = 1, we set z = p (see Fig. 11) and the function blocks after p are not used. This results because the M × M FON seen in Fig. 11 is not used, f = 1, and data passing through successive power normalizations is unchanged. We can show that

    z = c^† R_x^{-1} x.    (36)

For this case C = (c_1, c_2, ..., c_N)^T = c, a vector. Note that the reduced FON is just a single GS canceller. Hence for a single constraint, the configuration is shown in Fig. 12.

Fig. 12 - GS implementation for a single linear constraint

We arbitrarily set c_1 = 1 and augment C as shown below:

    C_aug = [ 1    0  0  ...  0 ]        C_aug^{-1} = [  1    0  0  ...  0 ]
            [ c_2  1  0  ...  0 ]                     [ -c_2  1  0  ...  0 ]
            [ ...               ]                     [ ...                ]    (37)
            [ c_N  0  0  ...  1 ]                     [ -c_N  0  0  ...  1 ]

Hence the single constraint processor of Fig. 12 can be implemented as shown in Fig. 13.

Fig. 13 - Efficient GS implementation of a single linearly constrained adaptive array
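A sketch of the single-constraint structure of Figs. 12 and 13 is given below as an illustration (the steering-vector-style constraint and all example values are assumptions). Per Eq. (37), the auxiliary channels are x_n - c_n x_1, and GS-cancelling them from the main channel x_1 reproduces the Frost output of Eq. (2) for C = c and f = 1, which is Eq. (36) to within a positive scale factor.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 6
A0 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A0 @ A0.conj().T + np.eye(N)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Single constraint c^† w = 1 with c_1 = 1 (e.g., a steering vector scaled so c_1 = 1)
c = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
c = c / c[0]

# Eq. (37): u_1 = x_1 is the main channel; u_n = x_n - c_n x_1 are the auxiliaries
Rinv = np.linalg.inv(R_x)
B = np.eye(N)[1:, :] - np.outer(c[1:], np.eye(N)[0, :])   # rows: e_n - c_n e_1
R_aux = B @ R_x @ B.conj().T                               # covariance of the auxiliaries
r = B @ R_x[:, 0]                                          # cross-correlation with x_1
w_a = np.linalg.solve(R_aux, r)                            # GS/Wiener cancellation weights
z_gs = x[0] - w_a.conj() @ (B @ x)                         # Fig. 13 output

# Frost output for C = c, f = 1 (Eq. (2)); equals Eq. (36) to within a positive scale
w_frost = Rinv @ c / (c.conj() @ Rinv @ c)
assert np.allclose(z_gs, w_frost.conj() @ x)
print(z_gs)
```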

VIII. SIMPLIFIED MULTIPLE CONSTRAINT IMPLEMENTATION

The GS structure of the linearly constrained array given in Fig. 11 can be significantly simplified as follows. The N × M reduced FON structure can be functionally decomposed as shown in Fig. 14. Here, we denote u_1, u_2, ..., u_M as the inputs associated with the outputs of the N × M reduced FON. We call u_1, u_2, ..., u_M the primary channels and write them as an M-length vector, u′. We also denote u_{M+1}, u_{M+2}, ..., u_N as the auxiliary channels and write them as an (N - M)-length vector, u_aux. Note that embedded in the N × M reduced FON is an M × M FON. Hence Fig. 11 can be redrawn as seen in Fig. 15.

Now it is easy to show that the operation of two successive M × M normalized FONs is equivalent to multiplying the input data to these FONs by an M × M identity matrix. Hence, the implementation seen in Fig. 15 reduces to that shown in Fig. 16.

Fig. 14 - Equivalent N × M FON structure. (a) Reduced N × M FON. (b) Functional decomposition of reduced N × M FON.

Fig. 15 - Functional equivalent of canceller seen in Fig. 11

Fig. 16 - Simplified equivalent of canceller seen in Fig. 11

Moreover, this structure can be further simplified to the structure shown in Fig. 17, where the main channel input to the GS canceller, u_in, is given by

    u_in = Σ_{m=1}^{M} f_m^* u_m.    (38)

Fig. 17 - Simplified GS implementation of a linearly constrained adaptive array
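The simplified structure of Figs. 16 and 17 can likewise be sketched numerically (an illustrative addition with assumed example quantities): the primary channels are combined into the single main channel of Eq. (38), the auxiliary channels u_{M+1}, ..., u_N are cancelled from it, and the output matches Eq. (35).

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 6, 2
A0 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A0 @ A0.conj().T + np.eye(N)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
C = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
f = np.array([1.0, -0.5])
D = (rng.standard_normal((N, N - M)) + 1j * rng.standard_normal((N, N - M))) / np.sqrt(2)
C_aug = np.hstack([C, D])                         # assumed nonsingular augmentation

Caug_inv = np.linalg.inv(C_aug)
u = Caug_inv @ x                                  # u = C_aug^{-1} x
R_u = Caug_inv @ R_x @ Caug_inv.conj().T

r = R_u[M:, :M] @ f                               # cross-correlation of auxiliaries with u_in
R_aux = R_u[M:, M:]                               # auxiliary-channel covariance
w_a = np.linalg.solve(R_aux, r)                   # GS canceller weights on the auxiliaries
z = f.conj() @ u[:M] - w_a.conj() @ u[M:]         # Eq. (38) main channel minus cancellation

Rinv = np.linalg.inv(R_x)
z_eq35 = f.conj() @ np.linalg.solve(C.conj().T @ Rinv @ C, C.conj().T @ Rinv @ x)
assert np.allclose(z, z_eq35)
print(z)
```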

IX. THE AUGMENTED CONSTRAINT MATRIX

Here we show that the generalized sidelobe canceller (GSC) implementation of a linearly constrained array presented in Refs. 9, 10, and 11 is equivalent to the implementation discussed in Section VIII. We define

    C_aug^{-1} = [ A ]
                 [ B ],    (39)

where A is an M × N matrix and B is an (N - M) × N matrix. Thus

    C_aug^{-1} C_aug = [ A ] [ C  D ] = [ AC  AD ] = [ I_M   0       ]
                       [ B ]            [ BC  BD ]   [ 0     I_{N-M} ],    (40)

where I_M and I_{N-M} are the M × M and (N - M) × (N - M) identity matrices, respectively. As a result


    AC = I_M,    (41)
    AD = 0,    (42)
    BC = 0,    (43)
    BD = I_{N-M}.    (44)

The solutions for A and D using pseudoinverses are

    A = (C^† C)^{-1} C^† + H,    (45)
    D = B^† (B B^†)^{-1} + G,    (46)

where H is any M x N matrix satisfying the condition

HC = 0 (47)

and G is any (N - M) x N matrix satisfying the equation

BG = 0. (48)

Note from Eq. (43) that the rows of B are orthogonal to C, the constraint matrix. In the literature [10,11], B is called the blocking matrix. We can eliminate having to find D by merely defining a B that satisfies Eq. (43) and an A that is given by Eq. (45), where H satisfies Eq. (47).

The linearly constrained canceller now has the form shown in Fig. 18. If we set H = 0 and define w_q to be the quiescent weighting (no external noise, R_x = I), then

    w_q = C (C^† C)^{-1} f.    (49)

Thus the linearly constrained processor can be implemented as shown in Fig. 19, which is identical to the GSC presented in Refs. 9, 10, and 11.
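A compact sketch of the GSC of Fig. 19 follows (an illustrative addition; the choice of blocking matrix from the SVD and the example values are assumptions, not prescriptions from the report). The quiescent weight w_q of Eq. (49) fixes the constrained response, the blocking matrix B satisfies BC = 0 as in Eq. (43), and adaptively cancelling Bx from w_q^† x reproduces the Frost output of Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 6, 2
A0 = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
R_x = A0 @ A0.conj().T + np.eye(N)
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
C = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
f = np.array([0.7, 1.0])

w_q = C @ np.linalg.solve(C.conj().T @ C, f)        # Eq. (49): quiescent weight, H = 0

# Blocking matrix from the SVD: rows span the orthogonal complement of the columns of C
_, _, Vh = np.linalg.svd(C.conj().T)                # C^† = U S Vh
B = Vh[M:, :]                                       # (N-M) x N, B C = 0 as in Eq. (43)

R_b = B @ R_x @ B.conj().T                          # covariance of the blocked channels
r = B @ R_x @ w_q                                   # cross-correlation E[(Bx)(w_q^† x)^*]
w_a = np.linalg.solve(R_b, r)                       # adaptive weights of the GSC
z_gsc = w_q.conj() @ x - w_a.conj() @ (B @ x)       # Fig. 19 output

Rinv = np.linalg.inv(R_x)
w_frost = Rinv @ C @ np.linalg.solve(C.conj().T @ Rinv @ C, f)     # Eq. (2)
assert np.allclose(z_gsc, w_frost.conj() @ x)
print(z_gsc)
```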

X. CONVERGENCE RATE

One technique for estimating the optimal weighting vector for a linearly constrained adaptive array is the Sampled Matrix Inversion (SMI) algorithm [8]. We will show in this section that this technique has fast convergence properties when applied to an adaptive array with linear constraints. This open-loop algorithm is implemented by estimating the input covariance matrix, R_x, using the samples of data in the input channels. The estimated R_x is then substituted into Eq. (2), and the constrained optimal weights, w, are then estimated. Call this estimate ŵ_SMI. It is easy to show, using an analysis similar to that presented in Section VI, that the multiple constraint GS implementation using the same samples as the linearly constrained SMI algorithm yields exactly the same estimated linear weighting vector, ŵ_GS, as the SMI algorithm, i.e., ŵ_GS = ŵ_SMI. This is done by merely substituting the estimated covariance for E[x x^†] = R_x in the equations given in Section IV. Hence the GS and SMI implementations of the linearly constrained adaptive array are identical in the transient state as well as the steady state. The convergence rate properties of the GS implementation of a linearly constrained array (and also the SMI implementation) as shown in Fig. 18 can be easily analyzed. This is because the open-loop GS canceller is the exact equivalent of the main-beam gain constrained open-loop SMI algorithm [7], whose convergence properties are well known [8,13].

Fig. 18 - Special case of the linearly constrained canceller implementation

Fig. 19 - GSC configuration

The gain in the steering vector direction is constrained to equal one, which is equivalent to setting the weight in the main channel equal to one for the GS canceller. We quote these convergence results. Let there be L input channels and K zero-mean Gaussian samples per channel, where the samples are independent from time sample to time sample across all channels. Let ŵ be the estimate of the optimum weights w_opt using the SMI algorithm, and let R be the input covariance matrix (including main and auxiliary channels). Define

    σ̂^2 = ŵ^† R ŵ,    (50)

    σ_opt^2 = w_opt^† R w_opt,    (51)

and

    z = σ̂^2 / σ_opt^2.    (52)

We note that σ̂^2 is a random variable and is the output noise power residue caused by finite sampling when the weights are applied to a data set independent of the data set used to calculate the weights.

Under the conditions stated, Brennan and Reed [13] showed that z has the following probability density function (p.d.f.):

    p(z) = K! / [(L - 2)! (K - L + 1)!] · (z - 1)^{L-2} / z^{K+1},    1 ≤ z < ∞,
    p(z) = 0,    otherwise.    (53)

The mean of z is given by

    E[z] = K / (K - L + 1).    (54)

We can apply these results to the adaptive linearly constrained array implementation shown in Fig. 18. If the input channels are zero mean, Gaussian, and independent from time sample to time sample, then the output channels after transformation by the N × N matrix, C_aug^{-1}, are also zero mean, Gaussian, and independent from time sample to time sample. Moreover, u_in, as given by Eq. (38), is also a zero mean Gaussian random variable. Hence the inputs to the GS canceller satisfy the conditions given by the Brennan and Reed analysis [13].

For the constrained implementation, L = N - M + 1. Thus if z is the normalized output noise power of the linearly constrained array as defined by Eq. (52), then z has the following p.d.f. and mean:

    p(z) = K! / [(N - M - 1)! (K - N + M)!] · (z - 1)^{N-M-1} / z^{K+1},    1 ≤ z < ∞,
    p(z) = 0,    otherwise.    (55)

    E[z] = K / (K - N + M).    (56)

Let K_3dB be the number of samples needed so that the average output noise power is within 3 dB of the optimum. Using Eq. (56), we can show that

    K_3dB = 2(N - M).    (57)

Now for an unconstrained adaptive array with N inputs, it has been shown that K_3dB = 2N - 2. Hence, we see from Eq. (57) that a constrained array converges faster for M ≥ 2 than an unconstrained array under the assumptions previously stated.
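As a worked illustration of Eqs. (56) and (57) (example numbers assumed, not from the report): with N = 16 inputs and M = 3 constraints, E[z] = K/(K - 13), so K_3dB = 2(N - M) = 26 samples bring the average output noise power residue within 3 dB of optimum, whereas an unconstrained array of the same size needs K_3dB = 2N - 2 = 30 samples.

```python
import numpy as np

def mean_loss(K, N, M):
    """E[z] of Eq. (56): mean normalized output noise power after K samples."""
    return K / (K - N + M)

N, M = 16, 3
K_3dB = 2 * (N - M)                               # Eq. (57)
print(K_3dB, mean_loss(K_3dB, N, M))              # 26 samples -> E[z] = 2 (3 dB)
for K in (20, 26, 40, 80):
    print(K, 10 * np.log10(mean_loss(K, N, M)), "dB above optimum")
```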


XI. REFERENCES

1. O.L. Frost III, "An Algorithm for Linearly Constrained Adaptive Array Processing," Proc. IEEE 60(8), 926-935 (1972).

2. R.A. Monzingo and T.W. Miller, Introduction to Adaptive Arrays (John Wiley and Sons, New York, 1980), Chap. 8.

3. B.L. Lewis and F.F. Kretschmer, Jr., published work of limited distribution, dating back to Feb. 1974.

4. W.F. Gabriel, "Building Block for an Orthonormal-Lattice-Filter Adaptive Network," NRL Report 8409, July 1980.

5. M.A. Alam, "Orthonormal Lattice Filter-A Multistage, Multichannel Estimation Technique," Geophysics 43, 1368-1383 (1978).

6. B. Friedlander, "Lattice Filters for Adaptive Processing," Proc. IEEE 70(8), 829-867 (1982).

7. Karl Gerlach and F.F. Kretschmer, Jr., "Convergence Rate of a Gram-Schmidt Canceller," NRL Report 9051.

8. I.S. Reed, J.D. Mallett, and L.E. Brennan, "Rapid Convergence Rate in Adaptive Arrays," IEEE Trans. AES-10, 853-863 (1974).

9. C.W. Jim, "A Comparison of Two LMS Constrained Optimal Array Structures," Proc. IEEE 65(12), 1730-1731 (1977).

10. L.J. Griffiths and C.W. Jim, "An Alternative Approach to Linearly Constrained Adaptive Beamforming," IEEE Trans. AP-30, 27-34 (1982).

11. K.M. Buckley and L.J. Griffiths, "An Adaptive Generalized Sidelobe Canceller with Derivative Constraints," IEEE Trans. AP-34, 311-319 (1986).

12. Karl Gerlach, "Fast Orthogonalization Networks," IEEE Trans. AP-34, 458-462 (1986).

13. L.E. Brennan and I.S. Reed, "Digital Adaptive Arrays with Weights Computed From and Applied to the Same Data Set," Proceedings of the 1980 Adaptive Antenna Symposium, RADC-TR-378, Vol. 1, Dec. 1980.
