
Automatic target recognition employing signal compression


Automatic target recognition employing signal compression

Pradeep Ragothaman,1 Wasfy B. Mikhael,1,* Robert R. Muise,2 and Abhijit Mahalanobis2

1School of Electrical Engineering and Computer Science, University of Central Florida, P.O. Box 160000, Orlando, Florida 32816, USA

2Lockheed Martin, MFC, 5600 Sand Lake Road, Orlando, Florida 32819-8907, USA

*Corresponding author: [email protected]

Received 16 January 2007; accepted 2 March 2007; posted 26 March 2007 (Doc. ID 78845); published 6 July 2007

Quadratic correlation filters (QCFs) have been used successfully to detect and recognize targets embedded in background clutter. Recently, a QCF called the Rayleigh quotient quadratic correlation filter (RQQCF) was formulated for automatic target recognition (ATR) in IR imagery. Using training images from target and clutter classes, the RQQCF explicitly maximized a class separation metric. What we believe to be a novel approach is presented for ATR that synthesizes the RQQCF using compressed images. The proposed approach considerably reduces the computational complexity and storage requirements while retaining the high recognition accuracy of the original RQQCF technique. The advantages of the proposed scheme are illustrated using sample results obtained from experiments on IR imagery. © 2007 Optical Society of America

OCIS codes: 100.0100, 100.2000, 100.5010, 100.2960.

1. Introduction

The field of automatic target recognition (ATR) has received considerable attention over the years. The process of detecting and classifying objects of interest embedded in background clutter is a challenging task, especially since clutter is often the dominant component of forward looking infrared (FLIR), synthetic aperture radar (SAR), and laser radar (LADAR) images. There are many approaches to ATR that have been reported in the literature. Some techniques are based on modeling target signatures, usually obtained after the segmentation of images to extract objects of interest [1–9]. Others involve feature extraction to implicitly recognize targets [10–12]. In addition, many techniques have been reported that use neural networks, statistical methods, or a combination thereof [12–21]. Among the methods that do not require segmentation, linear correlation filters have been both popular and successful [22,23]. These filters are inherently shift invariant and can be efficiently implemented either digitally or optically.

On the other hand, multiple linear filters are required to account for wide variations of the target(s). In addition, each of the filters is usually synthesized separately, leading to the computationally expensive and error prone task of searching multiple correlation planes independently. Recently, ATR using quadratic correlation filters (QCFs) has received considerable attention [24,25]. These filters operate directly on image data without requiring segmentation or feature extraction and retain the inherent shift invariance of linear correlation filters. In addition, they considerably simplify the postprocessing complexity required when using multiple linear correlation filters. The Rayleigh quotient QCF (RQQCF) technique, which was recently proposed, formulates the class separation metric as a Rayleigh quotient that is optimized by the QCF solution [26,27]. As a result, the means of the two classes are well separated while simultaneously ensuring that the variance of each class is small. In this paper, what we believe to be a novel transform domain approach is proposed for the RQQCF. The approach, called the transform domain RQQCF (TDRQQCF), results in a considerable reduction in computational requirements while retaining the high recognition accuracy of the spatial domain

0003-6935/07/214702-10$15.00/0 © 2007 Optical Society of America

4702 APPLIED OPTICS / Vol. 46, No. 21 / 20 July 2007


RQQCF. The remainder of the paper is organized as follows. Section 2 gives a brief description of the spatial domain RQQCF technique, Section 3 describes the proposed TDRQQCF technique, Section 4 contains the results of extensive simulations, and Section 5 concludes the paper. In the following sections, quantities in lowercase with an underscore are vectors, and quantities in uppercase are matrices. Quantities in the transform domain are distinguished by a t in the subscript.

2. RQQCF Technique

In the RQQCF technique, the QCF coefficient matrix T is assumed to take the form

T = \sum_{i=1}^{n} w_i w_i^T,    (1)

where the w_i, 1 ≤ i ≤ n, form an orthonormal basis set. The objective of the technique is to determine these basis functions such that the separation between the two classes, say X and Y, is maximized. The output of the QCF to an input vector u is given by

y = u^T T u.    (2)

The objective is to maximize the ratio

J(w) = \frac{E_1(y) - E_2(y)}{E_1(y) + E_2(y)} = \frac{\sum_{i=1}^{n} w_i^T (R_x - R_y) w_i}{\sum_{i=1}^{n} w_i^T (R_x + R_y) w_i},    (3)

where E_j(·) is the expectation operator over the jth class, and R_x and R_y are the correlation matrices for targets and clutter, respectively. Taking the derivative of Eq. (3) with respect to w_i, we get

(R_x + R_y)^{-1} (R_x - R_y) w_i = \lambda_i w_i.    (4)

Let

A = (R_x + R_y)^{-1} (R_x - R_y),    (5)

Fig. 1. Sample frames from (a) Video 1, (b) Video 2, (c) Video 3, and (d) Video 4.

Fig. 2. VIDEO 1. (a) DCT coefficients obtained by converting a 2D target chip into a 1D vector before applying the 1D DCT, (b) DCT coefficients obtained by first transforming the chip using the 2D DCT and then converting it to a 1D vector.

Table 1. Number of Frames and Number of Target and Clutter Chips, M, for Each Video

                                    VIDEO 1   VIDEO 2   VIDEO 3   VIDEO 4
Number of frames                        388       778       410       300
Number of target and
  clutter chips, M                      409       763       405       391


thus w_i is an eigenvector of A with eigenvalue λ_i. It should be noted that J(w) is in the form of a Rayleigh quotient, which is maximized by the dominant eigenvector of A.
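This relationship is easy to check numerically. The sketch below is not from the paper; the correlation matrices are synthetic and the variable names are our own. It confirms that the dominant eigenvector of A attains the largest value of the single-term Rayleigh quotient w^T(R_x − R_y)w / w^T(R_x + R_y)w, and that all eigenvalues of A lie in [−1, 1]:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8

# Synthetic positive-definite stand-ins for the class correlation matrices
Bx, By = rng.normal(size=(n, n)), rng.normal(size=(n, n))
Rx, Ry = Bx @ Bx.T + np.eye(n), By @ By.T + np.eye(n)

A = np.linalg.solve(Rx + Ry, Rx - Ry)   # Eq. (5)
vals, vecs = np.linalg.eig(A)
i = int(np.argmax(vals.real))           # index of the dominant eigenvalue
w = vecs[:, i].real

def J(v):
    """Single-term Rayleigh quotient from Eq. (3)."""
    return (v @ (Rx - Ry) @ v) / (v @ (Rx + Ry) @ v)
```

Because (R_x − R_y)w = λ(R_x + R_y)w for any eigenpair, J(w) equals the eigenvalue itself, so picking the largest eigenvalue of A is exactly the maximization described above.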

In practice, M target and M clutter training subimages, referred to as chips, are obtained from IR imagery. Each chip, having dimensions √n × √n, is converted into a 1D vector of dimensions n × 1 by

Fig. 3. VIDEO 1. (a) DCT coefficients obtained by converting a 2D clutter chip into a 1D vector before applying the 1D DCT, (b) DCT coefficients obtained by first transforming the chip using the 2D DCT and then converting to a 1D vector.

Table 2. VIDEO 1: Average Energy (%) in Different Transformed and Truncated Matrices of the Target and Clutter Sets

               8×8      9×9      10×10    11×11    12×12    13×13    14×14    15×15    16×16
Target chips   88.5799  91.8736  94.2695  95.8867  97.036   97.8537  98.5065  99.1296  100
Clutter chips  77.0727  79.5609  81.8025  83.8621  85.9592  87.9347  90.4363  93.6521  100

Table 3. VIDEO 2: Average Energy (%) in Different Transformed and Truncated Matrices of the Target and Clutter Sets

               8×8      9×9      10×10    11×11    12×12    13×13    14×14    15×15    16×16
Target chips   95.3762  96.409   97.35    97.9219  98.4602  98.8663  99.2888  99.5657  100
Clutter chips  87.2741  89.2236  90.841   92.2614  93.6029  94.8183  96.5067  98.578   100

Table 4. VIDEO 3: Average Energy (%) in Different Transformed and Truncated Matrices of the Target and Clutter Sets

               8×8      9×9      10×10    11×11    12×12    13×13    14×14    15×15    16×16
Target chips   93.86    95.4257  96.5763  97.443   98.1287  98.6852  99.1279  99.5433  100
Clutter chips  80.3775  83.1199  85.6899  88.1777  90.6858  93.0544  95.2862  97.7443  100

Table 5. VIDEO 4: Average Energy (%) in Different Transformed and Truncated Matrices of the Target and Clutter Sets

               8×8      9×9      10×10    11×11    12×12    13×13    14×14    15×15    16×16
Target chips   94.4088  96.195   97.6136  98.3129  98.8124  99.1503  99.4638  99.649   100
Clutter chips  87.7344  89.888   91.8235  93.5835  95.1314  96.409   97.5693  98.7895  100


concatenating its columns. Target and clutter training sets of size n × M each are obtained by placing the respective vectors in matrices. The n × n autocorrelation matrices of the target and clutter sets, R_x and R_y, are computed and used to obtain A according to Eq. (5). As a result, the eigenvalues of A vary from −1 to +1. The dominant eigenvalues for clutter, λ_ci, are close to or equal to −1, and those for targets, λ_ti, are close to or equal to +1. The RQQCF coefficients, w_ci and w_ti, are mapped to the corresponding eigenvalues. In the original paper, the RQQCF is correlated with an input scene to obtain a correlation surface from which the existence and the location of the target are deduced. An efficient method to perform the correlation is discussed in the original paper [26]. In this paper, though, to identify a data point as target or clutter, the sums of the absolute values of the k inner products of a data point with w_ti and w_ci, p_t and p_c, are calculated. If p_t > p_c, the data point is identified as a target; otherwise, it is identified as clutter.
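For concreteness, the synthesis and classification steps described above can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code; the function names are our own:

```python
import numpy as np

def rqqcf_train(X, Y, k=6):
    """Synthesize the RQQCF basis from training data.

    X, Y: n x M matrices whose columns are vectorized target and
    clutter chips.  Returns the k eigenvectors with eigenvalues
    closest to +1 (targets, w_ti) and closest to -1 (clutter, w_ci).
    """
    Rx = X @ X.T / X.shape[1]               # target autocorrelation matrix
    Ry = Y @ Y.T / Y.shape[1]               # clutter autocorrelation matrix
    A = np.linalg.solve(Rx + Ry, Rx - Ry)   # (Rx+Ry)^{-1}(Rx-Ry), Eq. (5)
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(vals.real)
    w_t = vecs[:, order[-k:]].real          # eigenvalues near +1: targets
    w_c = vecs[:, order[:k]].real           # eigenvalues near -1: clutter
    return w_t, w_c

def classify(u, w_t, w_c):
    """True if the vectorized chip u is labeled a target (p_t > p_c)."""
    p_t = np.abs(w_t.T @ u).sum()           # summed responses to target basis
    p_c = np.abs(w_c.T @ u).sum()           # summed responses to clutter basis
    return p_t > p_c
```

In the transform domain technique of Section 3, the same two functions apply unchanged; only the input vectors are replaced by their truncated DCT-domain representations.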

3. Proposed Transform Domain RQQCF (TDRQQCF)

As can be seen from the previous section, the RQQCF technique operates on spatial domain data. Furthermore, each 2D data chip in the spatial domain is converted into a 1D vector by the lexicographical ordering of the columns of the chip. This leads to two interrelated issues. First, the spatial correlation in the 2D chip is lost by converting it into a vector as described above. Second, the dimensionality of the system is increased considerably. One way to tackle both these issues simultaneously is to synthesize the RQQCF in the transform or frequency domain. Transforms capture the spatial correlation in images and decorrelate the pixels. Consequently, if the transforms are appropriately selected, they compact the energy in the image in

Fig. 4. Distribution of eigenvalues in (a) spatial domain RQQCF method, (b) TDRQQCF method for chips compressed to 8 × 8.

Fig. 5. VIDEO 1. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors (spatial domain).


relatively few coefficients. Thus, spatial domain data are transformed into an efficient and compact representation.

The discrete cosine transform (DCT) [28] is widely used. It has desirable properties as well as fast implementations. The DCT used in this work is defined for an input image A and output image B, each of size M × N, as follows:

B_{pq} = \alpha_p \alpha_q \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} A_{mn} \cos\frac{\pi (2m+1) p}{2M} \cos\frac{\pi (2n+1) q}{2N},
    0 ≤ p ≤ M − 1,  0 ≤ q ≤ N − 1,    (6)

where α_p = √(1/M) for p = 0 and α_p = √(2/M) for 1 ≤ p ≤ M − 1; similarly, α_q = √(1/N) for q = 0 and α_q = √(2/N) for 1 ≤ q ≤ N − 1. The TDRQQCF technique proceeds as follows:

(1) Each target chip, C_x, and clutter chip, C_y, is first transformed using the DCT to obtain C_xt and C_yt. It is seen that most of the energy in C_xt and C_yt is concentrated in the top left corner. In addition, the distributions of energy for targets and clutter differ from each other.

(2) Each C_xt and C_yt is truncated to an appropriate size and converted to a 1D vector by lexicographically ordering the columns. Thus, vectors of reduced dimensionality compared with the spatial domain case are obtained. In addition, these vectors are very efficient representations of the spatial domain chips.

(3) The autocorrelation matrices, R_xt and R_yt, are computed and used to obtain A_t according to Eq. (5). We note that since the dimensions of the target and clutter vectors are much smaller than in the spatial domain case, the dimensionality of the autocorrelation matrices, R_xt and R_yt, and therefore of A_t, is correspondingly reduced.

Fig. 6. VIDEO 1. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors derived from the truncated chips (8 × 8) in the DCT domain.

Fig. 7. VIDEO 2. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors (spatial domain).


(4) The eigenvalue decomposition (EVD) is performed on A_t to obtain the QCF coefficients. The QCF coefficients thus obtained are in the DCT domain.
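Steps (1)–(3) above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the orthonormal DCT matrix implements Eq. (6) in separable form, and the chips here are random stand-ins for real 16 × 16 IR chips:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix implementing Eq. (6) in separable form."""
    n = np.arange(N)
    D = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
    D[0] /= np.sqrt(2.0)                     # alpha_0 = sqrt(1/N), not sqrt(2/N)
    return D

def compress_chip(chip, k=8):
    """Steps (1)-(2): 2D DCT of a chip, keep the low-frequency k x k
    corner, then vectorize by lexicographically ordering the columns."""
    D = dct_matrix(chip.shape[0])
    Ct = D @ chip @ D.T                      # 2D DCT via separability
    return Ct[:k, :k].flatten(order="F")     # truncate and vectorize

# Step (3): stack compressed chips and form the reduced autocorrelation
# matrix (M and the random chips are illustrative stand-ins).
M, k = 50, 8
rng = np.random.default_rng(1)
chips = rng.normal(size=(M, 16, 16))
Xt = np.column_stack([compress_chip(c, k) for c in chips])   # 64 x M
Rxt = Xt @ Xt.T / M                          # 64 x 64 instead of 256 x 256
```

Step (4) is then the EVD of A_t = (R_xt + R_yt)^{-1}(R_xt − R_yt), exactly as in the spatial domain but on 64 × 64 rather than 256 × 256 matrices.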

In addition to reduced dimensionality, there is another advantage to the TDRQQCF. Often, in practice, in applications of techniques such as the RQQCF, one encounters low-rank matrices, which give rise to numerical problems. This is because the number of data points available for training is much smaller than the dimensionality of each data point. The TDRQQCF alleviates this problem by reducing the dimensionality of the data points.

4. Simulation Results

The TDRQQCF was tested on various IR video sequences provided by Lockheed Martin, Missile and Fire Control (LMMFC). Sample results are presented from four video sequences to illustrate the performance of the proposed technique. Figure 1 shows sample frames from the different videos.

Table 1 shows the number of frames in each video and the number of target and clutter chips, M, obtained for each video. Target chips are obtained from each frame of a video using the available ground truth data. For clutter, chips are picked from all areas of each frame of the video except the area(s) where the target(s) is/are located. While this results in a larger number of clutter chips than target chips, for our simulations, the number of clutter chips is chosen to be equal to the number of target chips for convenience. Note that the size of the autocorrelation matrices depends only on the dimension of the data points and not on the number of data points.

The size of each chip is 16 × 16, i.e., √n = 16 and n = 256. Figure 2 illustrates the advantage of transforming a target chip from VIDEO 1 using the 2D DCT before converting the transformed chip into a 1D vector, Fig. 2(b), versus converting the chip into a 1D vector first and then applying the 1D DCT, Fig. 2(a). It can be seen that first transforming the 2D chip results in better energy compaction, leading to efficient representation. Figure 3 shows the corresponding plots for a clutter point from VIDEO 1.

Fig. 8. VIDEO 2. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors derived from the truncated chips (8 × 8) in the DCT domain.

Fig. 9. VIDEO 3. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors (spatial domain).

Tables 2–5 show, for IR Videos 1–4, respectively, the average energy in subimages of different sizes, retaining the low spatial frequency region, of the transformed 16 × 16 chips. From these tables, 75%–90% of the energy is concentrated in 25% of the coefficients of the transformed chips. Also, the target energy is slightly more compressed in the transform domain. In addition, the energy distribution for target chips is different from that for clutter chips.
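The energy fractions in Tables 2–5 correspond to the following simple computation. This is a sketch; the synthetic chip below, with coefficients decaying away from the top left corner, merely mimics the compaction measured on real DCT-domain chips:

```python
import numpy as np

def energy_fraction(chip_dct, k):
    """Percent of a transformed chip's energy in its low-frequency
    k x k corner -- the quantity tabulated in Tables 2-5."""
    return 100.0 * (chip_dct[:k, :k] ** 2).sum() / (chip_dct ** 2).sum()

# A 16 x 16 stand-in whose coefficients decay away from the top left
# corner, mimicking the compaction observed on real DCT-domain chips.
p = np.arange(16)
chip_dct = 1.0 / np.outer(p + 1, p + 1)
```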

At this point, if no truncation is performed and the original RQQCF technique is applied, the same set of eigenvalues as in the case of the spatial domain RQQCF is obtained. For example, for Video 1, the 12 dominant eigenvalues (six positive and six negative) for both cases are listed below (k = 12).

λ_i (spatial domain): −0.9975, −0.9641, −0.9559, −0.9378, −0.9283, −0.9221, 0.9934, 0.9938, 0.9962, 0.9971, 0.9981, 0.9985.

λ_i (DCT domain): −0.9975, −0.9641, −0.9559, −0.9378, −0.9283, −0.9221, 0.9934, 0.9938, 0.9962, 0.9971, 0.9981, 0.9985.

Figure 4(a) shows the eigenvalue distribution for the spatial domain RQQCF technique, while Fig. 4(b)

Fig. 10. VIDEO 3. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors derived from the truncated chips (8 × 8) in the DCT domain.

Fig. 11. VIDEO 4. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors (spatial domain).


shows the eigenvalue distribution for the TDRQQCF technique when the chips are compressed from 16 × 16 to 8 × 8.

Sample results are presented for the case when the transformed target and clutter chips, C_xt and C_yt, respectively, are truncated to an 8 × 8 size. This means that √n = 8 and n = 64. The 12 dominant eigenvalues among the 64 are listed below (k = 12).

λ_i (DCT domain): −0.9930, −0.8648, −0.7825, −0.7472, −0.7472, −0.6427, 0.9642, 0.9722, 0.9746, 0.9878, 0.9943, 0.9952.

As explained in Section 2, to identify a data point as target or clutter, the sums of the absolute values of the k inner products of a data point with w_ti and w_ci, p_t and p_c, are calculated. If p_t > p_c, the data point is identified as a target. Otherwise, it is identified as clutter. We will refer to the absolute values of these inner products as the response of that particular data point. Figures 5–12 show the responses of the representative target and clutter vectors plotted against the index of the dominant eigenvectors, in both the spatial and the DCT domains. Eigenvectors 1–6 are dominant eigenvectors for clutter, while eigenvectors 7–12 are dominant eigenvectors for targets. Figures 5 and 6 correspond to VIDEO 1, Figs. 7 and 8 correspond to VIDEO 2, Figs. 9 and 10 correspond to VIDEO 3, and Figs. 11 and 12 correspond to VIDEO 4. To explain further, Fig. 5 shows, for VIDEO 1, the absolute value of the inner product of the representative target and clutter vectors with the eigenvectors corresponding to the original dominant eigenvalues (spatial domain), respectively, versus the index of the eigenvectors. Figure 6 shows, for VIDEO 1, the absolute value of the inner product of the same target and clutter vectors with the eigenvectors corresponding to the dominant eigenvalues obtained in the DCT domain, respectively, versus the index of the eigenvectors. This is repeated for the other three videos in Figs. 7–12.

Fig. 12. VIDEO 4. Response of (a) representative target vector and (b) representative clutter vector, versus the index of the dominant eigenvectors derived from the truncated chips (8 × 8) in the DCT domain.

Fig. 13. Misclassified target chip from VIDEO 4.

Fig. 14. Sample representative target chip from VIDEO 4.

Table 6. Recognition Accuracy of the Spatial Domain RQQCF and the TDRQQCF for All Four Videos

              Target               Clutter
          RQQCF    TDRQQCF     RQQCF    TDRQQCF
VIDEO 1  409/409   409/409    409/409   409/409
VIDEO 2  763/763   763/763    763/763   763/763
VIDEO 3  405/405   405/405    405/405   405/405
VIDEO 4  390/391   390/391    390/391   390/391

A close look at the plots reveals the following: (i) the magnitude of each of the responses (inner products) in the DCT domain is much higher than the corresponding magnitude in the spatial domain, and (ii) the magnitude of |p_t − p_c| is also much higher in the DCT domain than in the spatial domain. In other words, the separation between targets and clutter is also much higher in the DCT domain than in the spatial domain. This means that the requirements on the threshold used to decide whether a chip is a target or clutter can be relaxed considerably. Although the plots shown are for a few randomly chosen data points from the different videos, it was found that the TDRQQCF consistently produces much larger responses and target–clutter separation than the spatial domain RQQCF for all data points.

Table 6 summarizes the recognition accuracy of the spatial domain RQQCF and the TDRQQCF for all four videos. Each row in the table shows, for a particular video, the number of target and clutter chips recognized correctly by the RQQCF and the TDRQQCF.

It is seen that the TDRQQCF retains the excellent recognition accuracy of the spatial domain RQQCF. Also, both the RQQCF and TDRQQCF fail to recognize the same chip in VIDEO 4. This particular chip is shown in Fig. 13. The reason is that this chip looks more like a clutter chip than a target chip. For comparison, a sample representative target chip from the video is shown in Fig. 14.

The overall reduction in computational and storage requirements of the TDRQQCF over the RQQCF is obtained while retaining its recognition accuracy. The RQQCF involves the inversion and EVD of large matrices. The computational complexity of each of these operations is of the order O(n³), where n is the dimensionality of the autocorrelation matrices. On the other hand, by using the TDRQQCF, where compressed representations are used for targets and clutter, large savings are obtained. Table 7 compares the spatial domain RQQCF with the TDRQQCF in terms of storage and computational complexity.

The computational complexity of the 2^j × 2^j DCT is approximately 2^j(2^{j+1} − j − 2) multiplications [29–31]. From Table 7, it can easily be shown that the overall computational complexity of the TDRQQCF, including computing the DCT, and its storage requirements are still much smaller than those of the spatial domain RQQCF. In addition, for the TDRQQCF, the storage and computational savings increase as the chip size increases.
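The savings figures in Table 7 follow directly from n = 256 and k = 64; a quick arithmetic check (plain Python, values from the table):

```python
# Savings in Table 7, reproduced from n = 256 (spatial vector length)
# and k = 64 (truncated DCT-domain vector length); M cancels out of
# the chip-storage ratio.
n, k, M = 256, 64, 409

chip_storage_saving = 100.0 * (1 - (2 * M * k) / (2 * M * n))   # 75%
corr_storage_saving = 100.0 * (1 - (2 * k * k) / (2 * n * n))   # 93.75%
inversion_evd_saving = 100.0 * (1 - k ** 3 / n ** 3)            # ~98%
```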

5. Conclusion

What we believe to be a novel transform domain Rayleigh quotient quadratic correlation filter (TDRQQCF) is proposed. The improved performance of the TDRQQCF results from compressing the data in the transform domain. Consequently, this leads to a considerable reduction in computational complexity and storage requirements over the spatial domain RQQCF technique while retaining excellent recognition accuracy. It is worthwhile to note another advantage of the new technique, namely, the TDRQQCF acts as a lowpass filter that removes noise. Consequently, the separability between targets and clutter improves. In addition, the method overcomes the problems of the small training sets that are often encountered in practice. This is confirmed by extensive simulation results. Sample results are given.

References

1. C. F. Olson and D. P. Huttenlocher, “Automatic target recognition by matching oriented edge pixels,” IEEE Trans. Image Process. 6, 103–113 (1997).

2. C. E. Daniell, D. H. Kemsley, W. P. Lincoln, W. A. Tackett, and G. A. Baraghimian, “Artificial neural networks for automatic target recognition,” Opt. Eng. 31, 2521–2531 (1992).

3. J. A. O’Sullivan, M. D. DeVore, V. Kedia, and M. I. Miller, “SAR ATR performance using a conditionally Gaussian model,” IEEE Trans. Aerosp. Electron. Syst. 37, 91–108 (2001).

4. S. G. Sun, D. M. Kwak, W. B. Jang, and D. J. Kim, “Small target detection using center-surround difference with locally adaptive threshold,” in Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis, ISPA 2005, September 2005, pp. 402–407.

5. C. F. Olson, D. P. Huttenlocher, and D. M. Doria, “Recognition by matching with edge location and orientation,” in Proceedings of the ARPA Image Understanding Workshop, 1996, pp. 1167–1174.

6. F. Sadjadi, “Object recognition using coding schemes,” Opt. Eng. 31, 2580–2583 (1992).

7. J. G. Verly, R. L. Delanoy, and D. E. Dudgeon, “Model-based system for automatic target recognition from forward-looking laser-radar imagery,” Opt. Eng. 31, 2540–2552 (1992).

Table 7. Storage and Computational Complexity of the Spatial Domain RQQCF Versus That for the TDRQQCF^a

                                           RQQCF             TDRQQCF                  Savings Using TDRQQCF
Number of storage locations for chips      2 × M × √n × √n   2 × M × √k × √k, k < n   75%
Number of storage locations for
  autocorrelation matrices                 2 × n × n         2 × k × k                93.75%
Complexity of inversion^b                  O(n³)             O(k³)                    ~98%
Complexity of EVD^b                        O(n³)             O(k³)                    ~98%

^a From [29], for M = 409, n = 256, k = 64.
^b Number of multiplications.

8. B. Bhanu and J. Ahn, “A system for model-based recognition of articulated objects,” in Proceedings of the Fourteenth International Conference on Pattern Recognition (IEEE, 1998), Vol. 2, pp. 1812–1815.

9. S. Z. Der, Q. Zheng, R. Chellappa, B. Redman, and H. Mahmoud, “View based recognition of military vehicles in LADAR imagery using CAD model matching,” in Image Recognition and Classification: Algorithms, Systems and Applications, B. Javidi, ed. (Dekker, 2002), pp. 151–187.

10. J. Starch, R. Sharma, and S. Shaw, “A unified approach to feature extraction for model based ATR,” Proc. SPIE 2757, 294–305 (1997).

11. D. Casasent and R. Shenoy, “Feature space trajectory for distorted object classification and pose estimation in SAR,” Opt. Eng. 36, 2719–2728 (1997).

12. D. P. Kottke, J. Fwu, and K. Brown, “Hidden Markov modelling for automatic target recognition,” presented at the Conference Record of the Thirty-First Asilomar Conference on Signals, Systems, and Computers, 2–5 November 1997, Vol. 1, pp. 859–863.

13. S. A. Rizvi and N. M. Nasrabadi, “Automatic target recognition of cluttered FLIR imagery using multistage feature extraction and feature repair,” Proc. SPIE 5015, 1–10 (2003).

14. L. A. Chan, S. Z. Der, and N. M. Nasrabadi, “Neural based target detectors for multi-band infrared imagery,” in Image Recognition and Classification: Algorithms, Systems and Applications, B. Javidi, ed. (Dekker, 2002), pp. 1–36.

15. D. Torreiri, “A linear transform that simplifies and improves neural network classifiers,” in Proceedings of the International Conference on Neural Networks, 1996, Vol. 3, pp. 1738–1743.

16. J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” Ann. Stat. 29, 1189–1232 (2001).

17. H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik, “Support vector regression machines,” Adv. Neural Inf. Process. Syst. 9, 155–161 (1997).

18. H. C. Chiang, R. L. Moses, and W. W. Irving, “Performance estimation of model-based automatic target recognition using attributed scattering center features,” in Proceedings of the International Conference on Image Analysis and Processing (IEEE, 1999), pp. 303–308.

19. S. A. Rizvi and N. M. Nasrabadi, “Fusion techniques for automatic target recognition,” presented at the 32nd Applied Imagery Pattern Recognition Workshop (AIPR’03), 2003, pp. 27–32.

20. D. Casasent and Y. C. Wang, “Automatic target recognition using new support vector machine,” in Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN, 2005), Vol. 1, pp. 84–89.

21. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed. (Wiley-Interscience, 2000).

22. B. V. K. Vijaya Kumar, “Tutorial survey of composite filter designs for optical correlators,” Appl. Opt. 31, 4773–4801 (1992).

23. A. Mahalanobis, B. V. K. Vijaya Kumar, S. R. F. Sims, and J. Epperson, “Unconstrained correlation filters,” Appl. Opt. 33, 3751–3759 (1994).

24. X. Huo, M. Elad, A. G. Flesia, R. R. Muise, S. R. Stanfill, J. Friedman, B. Popescu, J. Chen, A. Mahalanobis, and D. L. Donoho, “Optimal reduced-rank quadratic classifiers using the Fukunaga–Koontz transform with applications to automated target recognition,” Proc. SPIE 5094, 59–72 (2003).

25. S. R. F. Sims and A. Mahalanobis, “Performance evaluation of quadratic correlation filters for target detection and discrimination in infrared imagery,” Opt. Eng. 43, 1705–1711 (2004).

26. A. Mahalanobis, R. R. Muise, and S. R. Stanfill, “Quadratic correlation filter design methodology for target detection and surveillance applications,” Appl. Opt. 43, 5198–5205 (2004).

27. R. Muise, A. Mahalanobis, R. Mohapatra, X. Li, D. Han, and W. Mikhael, “Constrained quadratic correlation filters for target detection,” Appl. Opt. 43, 304–314 (2004).

28. K. R. Rao and P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications (Academic, 1990).

29. E. Feig and S. Winograd, “On the multiplicative complexity of discrete cosine transforms,” IEEE Trans. Inf. Theory 38, 1387–1391 (1992).

30. H. R. Wu and Z. Man, “Comments on fast algorithms and implementation of 2D discrete cosine transform,” IEEE Trans. Circuits Syst. Video Technol. 8, 128–129 (1998).

31. C. Chen, B. Liu, and J. Yang, “Direct recursive structures for computing radix-r two-dimensional DCT/IDCT/DST/IDST,” IEEE Trans. Circuits Syst. 51, 2017–2030 (2004).
