Radar - Predator Cueing Experiment

Lawrence A. Bush, Steven Rak and Dennis Ehn

MIT Lincoln Laboratory

{larrybush, rak, dehn}@ll.mit.edu

Nevin McConnell

Apple Inc.

[email protected]

We explore using a radar based decision support algorithm to cue a Predator unmanned air vehicle video camera operator. We created an integrated sensing and decision support test-bed comprised of real-time interactive virtual world simulations, operator-in-the-loop experimentation, and a distributed data collection environment. We created and tested a convoy detection decision support algorithm. Our results provide strong evidence that accurate cueing can increase the effectiveness of unmanned air vehicle based reconnaissance on a military convoy search task.

I. Introduction

The Predator Unmanned Air Vehicle (UAV) is ideal for a wide variety of surveillance tasks, such as identifying activity of interest, because its video output is readily interpretable by human operators. However, due to its narrow field of view, it can only monitor a small area at a time. Consequently, the Predator platform alone is not appropriate for wide area search and monitoring. In contrast, moving target indication (MTI) radar can detect moving objects over a wide area. The data it provides is exploitable by automated decision support algorithms; however, its information content is limited. Therefore, an airborne MTI surveillance system can detect activity over a large geographic area, but it cannot identify this activity with certainty.

Through an operator-in-the-loop pilot study, we explored the complementary strengths of these sensing modalities. The capability being explored is the use of an MTI decision support algorithm to cue a Predator UAV video camera operator. The algorithm under consideration is convoy detection. The automated decision support algorithm provides a set of cues to the operator. A cue provides the location of the suspected activity of interest; specifically, a convoy. The cues are used by the operator to find and identify the activity of interest.

However, algorithms alone are not the answer. A semi-automated system does not necessarily provide the desired improvement in human performance. To do so, the algorithm must provide usable information to the UAV video camera operator. To address this issue, we have created an integrated sensing and decision support test-bed comprised of real-time interactive virtual world simulations, operator-in-the-loop experimentation, and a distributed data collection environment. Our results provide strong evidence that accurate cueing can increase the effectiveness of UAV based reconnaissance. This paper addresses the experimental framework as well as the tested concepts, algorithms, and results.

II. Problem

The problem that we are addressing is decision support for counterinsurgency. Counterinsurgency operations require persistent surveillance over a wide area, which can be accomplished using MTI data. However, we also need a sensor that can positively identify a potential threat. MTI data cannot provide positive identification; the Predator UAV video sensor, however, is ideal for the job, because it can get in close and the video data it produces is readily interpretable by human operators.

However, since the Predator UAV video sensor has such a narrow field of view, it cannot survey a large area efficiently. Therefore, we want to take advantage of the complementary strengths of these two sensing modalities by networking them together and using the MTI data to direct the attention of the UAV video camera operator.

1 of 19

American Institute of Aeronautics and Astronautics

AIAA Infotech@Aerospace Conference and AIAA Unmanned...Unlimited Conference, 6-9 April 2009, Seattle, Washington

AIAA 2009-1858

Copyright © 2009 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

Figure 1. Decision Support for Counterinsurgency. We are investigating how to support cueing of narrow field-of-view reconnaissance sensors by mining MTI surveillance data for suspicious activity, then providing a cue to assist the UAV video camera operator.

III. Approach

The cueing algorithm is the key to coordinating the MTI and Predator UAV video sensors. The cueing algorithm mines the MTI data for indications of suspicious activity (a convoy) and uses those indications as cues to direct the attention of the UAV video camera operator. For this experiment, we used a convoy detection algorithm which was developed using MTI radar data collected during the Silent Hammer Experiment by the Lincoln Multi-Mission ISR Test-bed (LiMIT).

Our objective is to enable a Predator UAV, with its identification capability, to efficiently survey a large area. We are investigating how best to do this. In our experiment, we measure how well the Predator can survey an area with the cueing tool as compared to without it.

A. Cueing Concept Overview

Figure 2 shows a diagram of the MTI-Video cueing concept. The diagram shows the information flow through a cueing system and maps directly to our simulated environment.

The first component of our environment is ground vehicle simulation. We simulate scenarios of convoys and confuser vehicles. Next, we simulate a moving target indication radar. MTI is a radar processing technique which can quickly survey a wide area and detect moving vehicles. Then, the MTI data is sent to the cueing algorithm. The cueing algorithm automatically analyzes the data and provides an indication of activity of interest. For example, if we are using a convoy detection algorithm, the cueing algorithm would indicate a suspected convoy. The red star in Figure 2 indicates a suspected convoy.

The indication, or cue, is then sent to the Predator UAV video camera operator. The operator selects the cue. The Predator video camera is then automatically slewed to the position of interest. The UAV video data is streamed back to the sensor operator's screen. The sensor operator then searches the vicinity for the activity of interest. The overall objective is to enable the Predator to survey a large area effectively, even though the video sensor has a narrow field of view. In our experiment, we measured operator performance with and without the cueing tool.

B. Simulation Environment

In order to realistically explore the above concept, we developed an integrated sensing and decision support test-bed called "Virtual Hammer".1 Virtual Hammer is an operator-in-the-loop simulation environment premised on the recreation of the Silent Hammer Experiment.^a The data from the Silent Hammer Experiment provides the baseline simulation parameters. Our environment includes simulated ground vehicle scenarios, MTI data, Predator UAV video data, and an operator interface.

^a Silent Hammer was a sea trial / live-fly experiment and joint exercise that took place during October 2004 on San Clemente Island.

Figure 2. Cueing concept information flow diagram: (1) Ground Vehicle Simulation, (2) MTI Data Collection, (3) Send MTI Data to Cueing Algorithm, (4) Detect suspected convoy, (5) Send Cue to the UAV Video Operator, (6) UAV video operator selects a cue and sends it to the Predator, (7) Predator UAV Video Collection, (8) Send video to Operator's Screen, (9) Operator searches and identifies the activity.

1. MTI Data

Moving Target Indication (MTI) is a radar mode which detects moving ground vehicles. It puts a dot on the ground for every vehicle that it detects. We call these detections "hits." The data shown in Figure 3 was collected by a Lincoln sensor called LiMIT. The LiMIT radar flies on a Boeing 707 called Paul Revere. Our simulation is based on this data product.

MTI can efficiently survey large areas and detect moving ground vehicles. However, it cannot identify these vehicles with certainty, because MTI only provides location and velocity information. Furthermore, MTI produces many false detections in some situations. For example, the data shown in Figure 3 has many false alarms in the coastal regions because the LiMIT radar detects breaking ocean waves. Consequently, MTI radar can effectively monitor general vehicle activity; however, we need an additional sensor in order to positively identify any suspicious activity.

2. Predator UAV Video Data

Our experimentation environment also includes simulated Predator UAV video. The Predator UAV, shown in Figure 4, has a video sensor attached to its underbelly.

Our simulated Predator video data product, shown in Figure 5 (a), is representative of video collected by a Predator UAV. It was generated from a synthetic database of San Clemente Island. Figure 5 (b) shows a real-world screen-shot of the same location, collected during the Silent Hammer Experiment by an optical sensor carried on a Pelican aircraft. We used the Pelican data for direct comparison with our simulated data product.

In the experiment scenario, a Predator UAV flies along a representative fixed flight path, shown in Figure 6. The flight path takes into consideration the Predator UAV speed, altitude, and field of regard in conjunction with the surveillance area.

Figure 3. MTI: moving target indication radar data

Figure 4. Predator platform and video sensor

Figure 5. (a) Simulated Predator UAV video; (b) real-world optical data

Figure 6. Predator UAV fixed flight path (altitude: 15,000 feet; field of regard: 45°)

3. Predator UAV Video Camera Operator Interface

Our test environment includes an operator interface to convey the decision support information and video stream to the Predator UAV operator. The operator controls the heading, pitch, and zoom of the UAV's video camera using a joystick. The decision support tool directs the operator where to search for the targets (convoys).

An actual Predator UAV sensor operator interface is shown in Figure 7. Our video camera operator interface has a situational awareness screen and a screen showing the video stream. Our simulated Predator UAV operator interface is shown in Figure 8. This is what the video sensor operator sees during the experiment. The situational awareness screen (top) shows the Predator track, the video sensor field of view, and the cue provided by our decision support tool. The lower screen shows the simulated Predator video stream. Our experiment compares the performance of UAV operators with and without the decision aid (cue).

Figure 7. An actual Predator UAV Sensor Operator Interface (right)

Figure 8. Our simulated UAV sensor operator interface includes a UAV video display (left) and an MTI cueing interface & situational awareness display (right).

Figure 9. Convoy Detection Algorithm Outline

C. Convoy Detection Algorithm

The decision support algorithm employed in our experiment is convoy detection. The objective of this technique is to find high priority targets. The original algorithm was developed using Silent Hammer data collected on San Clemente Island, off the coast of California. The algorithm was retrained for the cueing experiment using simulated data. The training data was simulated independently of the actual experiment data. In other words, the algorithm was trained on a different data set than that used in the actual experiment. This is a standard procedure, designed to evaluate algorithm performance on new, unseen data.

Convoy detection involves pre-filtering the data to remove false alarms and to probabilistically place the detections onto the road network. We then identify convoy candidates using a clustering technique and apply a motion model to associate these convoy candidates through time. An outline of the convoy detection algorithm is shown in Figure 9. The details of the convoy detection algorithm are given in Appendix A.

IV. Results

The overall operational capability we are testing is the effective use of Predator UAV based video to survey a wide area. The enabling technology for achieving this capability is an MTI based convoy detection algorithm. This algorithm provides cues to the Predator UAV video camera operator, indicating likely locations to search for convoys.

In order to compare our cueing system against a cue-less system, we conducted a small pilot study consisting of operator-in-the-loop experiments on three subjects. Each subject executed the convoy search and identify task described below. We measured the performance of each subject in terms of task completion time. Our data analysis, shown below, demonstrates that our system outperformed the cue-less system with both statistical and practical significance.

A. Task

Each subject was instructed to fly a 15-minute mission. During the mission, the subjects searched for up to 5 convoys. The actual ground vehicle simulations included exactly 3 convoys. Subjects sometimes identified the same convoy more than once; therefore, the subjects were instructed to search for up to 5 convoys in order to keep them motivated to find them all.

For the purpose of this experiment, a convoy is defined as a group of 3 or more vehicles, each within 150 meters of each other, traveling together. When a convoy is found, the operator reports the number of vehicles in the convoy and snaps a still photograph of the convoy, using the simulated camera interface.
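The convoy definition above can be sketched as a simple connectivity check. This is a hypothetical illustration only; the paper does not give its scoring code, and "each within 150 meters of each other" is interpreted here as pairwise chaining (each vehicle within 150 m of at least one other group member).

```python
import math

def is_convoy(positions, min_vehicles=3, max_gap_m=150.0):
    """Return True if the snapshot of (x, y) positions (in meters)
    contains a connected group of at least `min_vehicles` vehicles,
    where vehicles are linked when within `max_gap_m` of each other."""
    n = len(positions)
    if n < min_vehicles:
        return False
    # Union-find over all pairs closer than the gap threshold.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            if math.hypot(dx, dy) <= max_gap_m:
                parent[find(i)] = find(j)
    # Size of the largest connected group.
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) >= min_vehicles
```

For example, three vehicles spaced 100 m apart along a road qualify, while the same three vehicles spaced 500 m apart do not.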

The task was designed to be relevant to a military surveillance mission and to have a clear goal. For example, the task of identifying a convoy is fairly clear, based on our convoy definition. Other possible tasks, such as identifying enemy combatants, are much less clear and require much more situation-specific knowledge.

B. Experiments

We conducted the operator-in-the-loop experiments on 3 adult male subjects with no prior Predator experience. Each subject was given equal training with the simulated Predator video camera interface, using our training booklet and any requested guidance from our designated trainer. The subjects were also provided an instruction poster which briefly identifies the interface controls. Each subject was tested on twelve missions, each lasting 15 minutes. The missions were equally divided between 3 decision support tools, and the order was randomized.

C. Decision Support Tool - Independent Variable

The independent variable that we are comparing is the choice of decision support tool. We developed a convoy detection cueing algorithm (described above) for the convoy search and identification task. This algorithm provides a cue to the operator indicating the location of suspected convoys. Each subject used this tool on four missions. Each subject also executed four missions without the cueing tool. Finally, each subject executed four missions using a synthetic cue generated directly from the ground truth.

• Decision Support Tools:

– No-cue

– Convoy Detection Algorithm-cue

– Synthetic-cue

The convoy detection cueing algorithm is the tool which we are evaluating. We compared operator performance using this tool to their performance without the cue (no-cue). Testing the subjects under both conditions allows us to evaluate the utility of the convoy detection cueing algorithm.

We also tested the subjects using a synthetic cue, in order to gauge how well a cueing system could work if it had perfect knowledge of all vehicle locations. Cueing algorithms based on real MTI data are imperfect due to many factors, including MTI data fidelity and the amount of background traffic. The synthetic cue, based on the actual vehicle locations, was used in order to measure operator performance using the best possible (perfect) cue. In this way, we could test the usefulness of our cueing concept independent of the actual algorithm effectiveness. If convoy cueing is not useful at the highest level of accuracy, then it will not be useful at any level.

D. Performance Metric (Dependent Variable)

The dependent variable is the measure used to compare our three experimental setups. The dependent variable which we are measuring is task completion time: the time it takes to complete a convoy search and identification task. An automated computer script uses the ground truth to determine the task completion time. The results of each experiment run are output as a Report Card; an example Report Card is shown in Appendix B.

E. Statistics

Our experiment measured the time it takes a video camera operator of a simulated Predator UAV reconnaissance aircraft to locate and image a set of convoys. This section compares the task completion time under our three experimental setups:

• No-cue

• Convoy Detection Algorithm-cue

• Synthetic-cue

A graph of the data for the three experimental setups is shown in Figure 10. The data analysis of each experiment run was limited to 720 seconds; completion times of 720 seconds are recorded when an operator fails to find all three convoys in the allotted time. Figure 10 contains task completion time histograms for the three different experiment setups. The histogram data is displayed as colored vertical bars. As the histograms show, the no-cue completion times (Figure 10 (a)) are longer and much more spread out than the cueing completion times (Figures 10 (b) and 10 (c)).

We computed the mean and standard deviation of the task completion times of each setup (no-cue, algorithm-cue, and synthetic-cue). We then constructed and plotted a normal distribution for each data set using these parameters. This normal distribution is displayed as a black line. The black arrows indicate one standard deviation above and below the respective means. As the graphs show, the no-cue completion time distribution (Figure 10 (a)) is clearly higher than the cueing completion time distributions (Figures 10 (b) and 10 (c)). It is also worth noting that seven of the no-cue trials exceeded 500 seconds (Figure 10 (a)), while only two algorithm-cue trials took as long.

The data from Figure 10 is summarized for comparison in Figure 11, which shows completion time box plots for each setup. The blue box reflects the no-cue data, the red box reflects the algorithm-cue data, and the yellow box reflects the synthetic-cue data. The vertical line within each colored box depicts the median of the measured completion time data. Each colored box encloses the second and third quartiles of the measured completion time data. The black arrows reflect one standard deviation above and below the respective statistical means of the data sets. Using this comparison graph, it is clear that the operators performed much better with the cueing tools than without them.

(a) No-cue (9 minutes) (b) Algorithm cueing (4 1/2 minutes) (c) Synthetic cueing (3 1/2 minutes)

Figure 10. Average task completion time comparison

Figure 11. Box plot cue type comparison

To make this more precise, we performed a Student's T-test^b to determine if the observed improvement in average completion time was statistically significant. Table 1 shows the results of our T-test comparing the no-cue data to the algorithm-cue data. The null hypothesis is that the underlying means of the no-cue and algorithm-cue completion times are the same. Table 1 shows the very low probability (0.0091) obtained by our T-test; this number reflects the probability that the observed difference arose by chance. A result of 0.05 or lower reflects statistical significance, so our result demonstrates very strong statistical significance. In other words, the completion time means are so different that it is extremely unlikely that the difference arose from chance. The conclusion is that our cueing algorithm makes a statistically significant difference.
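The test described here, a T-test allowing unequal variances with Satterthwaite degrees of freedom, is commonly available as Welch's t-test. The sketch below shows how such a comparison can be run with SciPy; the two sample lists are illustrative placeholders chosen only to roughly match the reported group means, not the experiment's actual measurements.

```python
# Welch's (Behrens-Fisher) t-test on two groups of task completion
# times, in seconds. equal_var=False selects the unequal-variance
# test with Satterthwaite's degrees-of-freedom approximation.
from scipy import stats

# Illustrative placeholder data (NOT the paper's measurements).
no_cue        = [720, 610, 540, 700, 480, 720, 350, 410, 560, 620, 300, 350]
algorithm_cue = [200, 150, 720, 180, 260, 310, 140, 90, 400, 220, 170, 510]

t_stat, p_value = stats.ttest_ind(no_cue, algorithm_cue, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 rejects the null hypothesis that the two
# groups share the same underlying mean completion time.
```

With samples this far apart, the p-value falls well below the 0.05 threshold, mirroring the kind of result reported in Table 1.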

Statistical significance, in and of itself, is not important unless the results are of practical significance. Statistical significance shows that there is a difference; practical significance means that the difference is large enough to be important to operators in the field.

Table 2 shows the mean and standard deviation of the three trial groups. The mean task completion time for the no-cue setup is approximately twice as long as that for the algorithm-cue setup. Thus, using our cueing algorithm, we were able to double operator efficiency. Our results provide strong evidence that accurate cueing can increase the effectiveness of UAV based reconnaissance.

^b We used the Behrens-Fisher version of the Student's T test to allow for unequal variances, and Satterthwaite's approximation to determine the effective number of degrees of freedom.

No cue vs. algorithm cue

    Probability           0.0091
    T statistic           2.87
    Degrees of freedom    20.76

Table 1. T-test result: no cue versus algorithm cue

For completeness, we also compared our algorithm-cue to the synthetic-cue. The mean task completion time for the algorithm cue is somewhat longer than that for the synthetic cue. However, a T-test comparing these showed no statistically significant difference. In other words, the synthetic cue did not perform any better than the algorithm cue in the experiment environment. Therefore, our results indicate little room for improvement on this task. The maximum improvement which can be gained depends on how effectively the cueing system display conveys likely target locations, as well as on the environment under surveillance.

                          No-cue    Algorithm-cue    Synthetic-cue
    Mean                  530 sec   279 sec          214 sec
    Standard deviation    210 sec   207 sec          103 sec

Table 2. This table shows the mean and standard deviation values for the three trial groups. On average, the algorithm-assisted operators completed their tasks twice as fast as the no-cue operators.

V. Discussion

Our test environment and scenario were reviewed by an Air Force pilot with 7 1/2 years of experience flying F-15s and 2 1/2 years of experience flying Predator surveillance missions in Iraq. In his opinion, our simulation environment was a faithful replica of an actual Predator operator station, with the exception of the camera trim functionality. He also stated that the ground activity was realistic, with only minor idiosyncrasies. He stated that the convoy task was very appropriate and of sufficient difficulty. He noted that actual search tasks take much longer to complete. For example, in our scenario, an operator will typically search for and find 3 convoys in 15 minutes; in a real-world task, an operator will typically search for one convoy for three hours and find nothing. With that said, his opinion was that a real-world scenario would provide extremely sparse data and be prohibitively time consuming to execute. Therefore, a shorter task in a target-rich environment like ours is much more appropriate for testing a decision support tool.

VI. Conclusion

We developed a real-time cueing and operator assistance tool and demonstrated clear improvement in operator performance. We conducted a pilot study consisting of operator-in-the-loop experiments comparing our cueing system against a cue-less system. Our data analysis demonstrates that our system is significantly better than the cue-less system in terms of both statistical and practical significance. The statistical tests showed unambiguous results with very high significance. The practical significance is also very high: our cueing system cut task completion time in half. Our results provide strong evidence that accurate cueing can increase the effectiveness of UAV based video reconnaissance.

Acknowledgments

The authors would like to thank James McGrew (Air Force pilot) for reviewing our interface and experiment setup. The experiment discussed was made possible by the work of several researchers, engineers, and programmers, specifically: Jeff Allen, Yican Cao, Gary Condon, Elizabeth Johnson, William Ledder, Paul Metzger, Paula Pomianowski, Timothy Schreiner, Tod Shannon, Keith Sisterson, Nagabushan Sivananjaiah, Rajesh Viswanathan, and Andrew Wang.

References

1. Paul Metzger, Lawrence Bush, Peter Mastromarino, Stephen Rak, and Todd Shannon. Virtual Hammer. Lincoln Laboratory Journal on Integrated Sensing and Decision Support, 2007.

2. Paula Pomianowski, Richard Delanoy, Jonathan Kurz, and Gary Condon. Silent Hammer. Lincoln Laboratory Journal on Integrated Sensing and Decision Support, 2007.

This work was sponsored by the U.S. Government under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.

Appendix A : Convoy Detection Algorithm

Figure 12. Convoy Detection Algorithm Outline

The decision support algorithm employed in our experiment is convoy detection. The objective of this technique is to find high priority targets. The original algorithm was developed using Silent Hammer data collected on San Clemente Island, which is off the coast of California. The algorithm was retrained for the cueing experiment using simulated data. The training data was simulated independently of the actual experiment data. In other words, the algorithm was trained on a different data set than that used in the actual experiment. This is a standard procedure, designed to evaluate algorithm performance on new, unseen data.

Convoy detection involves pre-filtering the data to remove false alarms and to probabilistically place the detections onto the road network. We then identify convoy candidates using a clustering technique and apply a motion model to associate these convoy candidates through time. An outline of the convoy detection algorithm is shown in Figure 12.

Pre-filtering

The first filtering technique, false alarm filtering, is shown in Figure 13. The Silent Hammer MTI data set contains many false alarms, including breaking ocean wave hits. These wave hits tend to confuse the convoy detection process. Therefore, we developed a technique for filtering out wave hits.

The data shown is color-coded by type. Specifically, the blue dots are wave hits and the yellow dots are vehicles on the road. The green and red dots are other land and sea detections. Our objective is to suppress wave hits and keep road hits. Using the labeled road and wave data, we trained a pattern recognition engine to discriminate between road hits and wave hits.

In addition to position information, MTI data contains line-of-sight velocity, signal-to-noise ratio, and root-mean-squared cross-range error information. Using these features, we trained a Support Vector Machine (SVM) pattern recognition engine to discriminate between road hits and wave hits. This worked very well: cross-validation error on the training set was 9%. We then tested the classifier on a held-out data set, which resulted in 6% error.
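A discriminator of this kind can be sketched with scikit-learn's SVC. The feature set (line-of-sight velocity, signal-to-noise ratio, RMS cross-range error) follows the text, but the synthetic hit distributions, their parameters, and the resulting accuracy are invented for illustration and say nothing about the real LiMIT data.

```python
# Sketch of a road-vs-wave hit discriminator using an SVM.
# All data below is synthetic and illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def synth_hits(n, vel, snr, err, label):
    """Draw n synthetic MTI hits around assumed feature means."""
    X = np.column_stack([
        rng.normal(vel, 3.0, n),   # line-of-sight velocity (m/s)
        rng.normal(snr, 2.0, n),   # signal-to-noise ratio (dB)
        rng.normal(err, 10.0, n),  # RMS cross-range error (m)
    ])
    return X, np.full(n, label)

X_road, y_road = synth_hits(200, vel=15, snr=18, err=30, label=1)
X_wave, y_wave = synth_hits(200, vel=3,  snr=10, err=80, label=0)
X = np.vstack([X_road, X_wave])
y = np.concatenate([y_road, y_wave])

# RBF-kernel SVM, scored by 5-fold cross-validation.
clf = SVC(kernel="rbf", gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validation accuracy: {scores.mean():.2f}")
```

In practice one would also hold out a separate test set, as the paper does, to confirm that the cross-validation estimate carries over to unseen data.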

We then apply a second filtering technique called PDF (Probability Density Function) road filtering. This is a probabilistically correct method for placing the detections in the most likely location on the road. It involves constructing the PDF of an individual detection from the range and cross-range error, then combining that PDF with a prior PDF over the road network.

Specifically, we constructed a prior PDF over the road network. This prior is represented by a high-resolution grid map of the area and assumes that all vehicles will be located on the road network. Our road network was constructed using historical ground-truth GPS data; consequently, it represents a faithful model of where vehicles actually travel.

We then collected MTI data. MTI data has a large cross-range error; to represent this location error, we reconstruct the error ellipse of each detection. This error ellipse is also represented by a high-resolution grid map of the area. We then multiply the prior PDF by our detection PDF. The grid cell with the highest posterior probability represents the most likely location of the target.
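The grid computation above can be sketched in a few lines of NumPy. This is a toy example, not the experiment's grid resolution or error model: the road is a single grid row, and the detection's error ellipse is an axis-aligned Gaussian with a deliberately large cross-range standard deviation.

```python
import numpy as np

N = 100
xs, ys = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")

# Prior PDF: uniform over road cells (here, a road along row 40),
# zero everywhere off-road.
prior = np.zeros((N, N))
prior[40, :] = 1.0
prior /= prior.sum()

# Detection PDF: anisotropic Gaussian centered on the raw MTI detection,
# with cross-range error much larger than range error, as the text describes.
det_x, det_y = 43.0, 60.0            # raw detection, slightly off the road
sig_range, sig_cross = 2.0, 10.0
detection = np.exp(-0.5 * (((xs - det_x) / sig_range) ** 2
                           + ((ys - det_y) / sig_cross) ** 2))

# Posterior = prior * likelihood; the argmax cell is the most likely
# vehicle location, which snaps the detection onto the road.
posterior = prior * detection
best = np.unravel_index(posterior.argmax(), posterior.shape)
```

Here the off-road detection at (43, 60) is pulled onto the road row at (40, 60), illustrating how the multiplication of the two grids resolves the cross-range ambiguity.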

Convoy Candidate Identification

Once we have filtered the data, we then identify convoy candidates, shown in Figure 15. We start with the filtered MTI data shown in frame 1. We then remove outlying detections. These are detections that are too


American Institute of Aeronautics and Astronautics

Figure 13. False Alarm Filtering: LiMIT sensor data, collected at the Silent Hammer experiment on San Clemente Island, California, was hand-labeled by type (land, sea, wave, road vehicle). Using the hand-labeled data, we trained a support vector machine pattern recognition engine to recognize road and wave hits.

Figure 14. PDF Road Filtering: We combine the vehicle location probability density function (range and cross-range error) with the prior road map to place road vehicles in the most likely road location. Our probabilistically correct method improves vehicle location estimation.


far away from any other detection, such that they are very unlikely to be part of a convoy. We remove them so that they do not confuse the clustering algorithm. The result of this step is shown in frame 2.

Next, we apply a generic (K-means) clustering algorithm to the data. The results of this step are shown in frame 3. This generic clustering algorithm does not do a very good job of finding convoy candidates. This is where the real work comes in: we developed an iterative clustering technique which subdivides, recombines, and removes certain clusters. We also remove so-called outliers from clusters based on the Mahalanobis distance. The results of this iterative technique are shown in frame 4, which is our convoy candidate.
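The two ingredients named above, a generic K-means pass followed by Mahalanobis-distance outlier removal within each cluster, can be sketched as follows. This omits the full iterative subdivide/recombine logic; the point layout, cluster count, and distance threshold are all illustrative, not the experiment's parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic scene: a convoy strung along a road, plus a few stray hits
# far away (the kind of detections that confuse a generic clusterer).
convoy = np.column_stack([np.linspace(0, 10, 30),
                          np.linspace(0, 5, 30) + rng.normal(0, 0.1, 30)])
strays = rng.uniform(20, 30, size=(3, 2))
pts = np.vstack([convoy, strays])

# Generic K-means pass, as in frame 3 of Figure 15.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts)

def mahalanobis_keep(cluster_pts, thresh=3.0):
    """Keep points within `thresh` Mahalanobis units of the cluster mean."""
    mu = cluster_pts.mean(axis=0)
    cov = np.cov(cluster_pts, rowvar=False) + 1e-6 * np.eye(2)
    inv = np.linalg.inv(cov)
    diff = cluster_pts - mu
    d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv, diff))
    return cluster_pts[d < thresh]

# Refinement pass: prune per-cluster outliers by Mahalanobis distance.
refined = [mahalanobis_keep(pts[labels == k]) for k in range(2)]
sizes = sorted(len(c) for c in refined)
```

Using the Mahalanobis distance (rather than Euclidean) respects the elongated shape of a convoy cluster: detections strung along the road direction are not penalized, while detections off to the side are.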

Figure 15. Convoy Candidate Identification

Motion Model

Now that we have our convoy candidates, we can apply a motion model to associate these candidates through time, from scan to scan of the MTI data. To construct our motion model, we first fit a line to a cluster using least squares regression. We then use the line-of-sight velocity of each individual detection in a given cluster to compute the overall line-of-sight velocity for that cluster. We then project this overall line-of-sight velocity onto the line. This provides us with our heading and heading velocity.
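The motion-model construction just described can be sketched numerically. This is a toy geometry, not the experiment's sensor layout: the cluster lies exactly on a line of slope 0.5, the line-of-sight direction is taken along the x-axis, and the per-detection radial speeds are made identical for clarity.

```python
import numpy as np

# Cluster of detections strung along the road y = 0.5 x, each carrying a
# measured line-of-sight (radial) velocity toward a sensor on the +x axis.
x = np.linspace(0.0, 10.0, 8)
pts = np.column_stack([x, 0.5 * x])
los_dir = np.array([1.0, 0.0])        # unit vector toward the sensor
los_vel = np.full(len(pts), 9.0)      # measured radial speeds (m/s)

# Step 1: least-squares line fit (slope m, intercept b of y = m x + b).
m, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
heading = np.array([1.0, m]) / np.hypot(1.0, m)   # unit vector along the line

# Step 2: average the per-detection line-of-sight velocities into one
# cluster velocity vector, then project it onto the fitted line.
v_los = los_vel.mean() * los_dir
speed = float(v_los @ heading)        # heading velocity along the road
```

The fitted heading recovers the road direction, and the dot product carries out the projection of the cluster's line-of-sight velocity onto that line, yielding the scalar heading velocity used for prediction.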

Now that we have our motion model, we can use it to associate the clusters through time, from scan to scan of the MTI data. For each MTI scan, we cluster and apply our motion model to associate the clusters from scan to scan. If a given cluster can be associated through a certain number of scans of MTI data, we determine that cluster to be a convoy.
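A minimal sketch of this scan-to-scan association follows: predict each candidate's centroid forward with its estimated velocity, match to the nearest centroid in the next scan within a gate, and declare a convoy once enough consecutive scans associate. The scan interval, gate radius, and scan-count threshold here are all illustrative values, not those of the experiment.

```python
import numpy as np

DT = 10.0        # seconds between MTI scans (illustrative)
GATE = 5.0       # association gate radius, meters (illustrative)
MIN_SCANS = 3    # scans required before declaring a convoy (illustrative)

def associate(track, scans):
    """Count consecutive scans a candidate's centroid can be followed."""
    pos, vel = np.array(track["pos"]), np.array(track["vel"])
    hits = 1
    for centroids in scans:
        pred = pos + vel * DT                     # motion-model prediction
        d = np.linalg.norm(np.array(centroids) - pred, axis=1)
        if d.min() > GATE:                        # no centroid in the gate
            break
        pos = np.array(centroids)[d.argmin()]     # step onto the match
        hits += 1
    return hits

# A candidate moving at (1, 0.5) m/s, observed in three subsequent scans;
# the second scan also contains an unrelated distant cluster.
track = {"pos": (0.0, 0.0), "vel": (1.0, 0.5)}
scans = [[(10.2, 5.1)], [(20.1, 9.8), (90.0, 90.0)], [(29.7, 15.3)]]
hits = associate(track, scans)
is_convoy = hits >= MIN_SCANS
```

The gate rejects coincidental matches such as the distant cluster in the second scan, while consistent motion lets the candidate accumulate enough associations to be declared a convoy, which becomes the cue.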

This is essentially our cue. The cue is subsequently used to direct the attention of the UAV video operator who is searching for convoys. A summary of the convoy detection process which we have just covered is shown in Figure 17.


Figure 16. Motion model

Figure 17. Convoy detection algorithm summary


Appendix B: Sample Experiment Report Card

The following is an example report card generated from a single cueing-experiment run. A report card shows the results of the run. The algorithm-based cue was used during the run shown below.

The report card includes the time at which each convoy was found, verified by the EO image taken by the operator. Each correct find was computed using the actual camera field of view and simulated vehicle locations. This information is included in the report card with each EO image; it is conveyed using a map overlaid with the camera field of view and convoy vehicle locations. The confuser vehicles are not shown on the map display. Using this information, we determine which convoy (if any) was found.
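The scoring step, deciding whether a convoy counts as found, reduces to testing whether vehicle positions fall inside the camera's ground field-of-view footprint. The sketch below uses a convex-polygon containment test; the footprint quadrilateral and vehicle positions are synthetic, and the actual experiment's scoring geometry may differ.

```python
def in_convex_fov(point, polygon):
    """True if `point` lies inside the convex `polygon` (CCW vertex order)."""
    px, py = point
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # For a CCW polygon, the point must be on or left of every edge.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# Camera footprint projected onto the ground (a quadrilateral, since an
# oblique camera sees a trapezoidal patch), plus simulated vehicle positions.
fov = [(0, 0), (100, 0), (120, 80), (-20, 80)]
convoy = [(10, 20), (20, 24), (30, 28)]
confuser = (300, 300)

# A convoy is scored as found if its vehicles lie within the footprint.
found = all(in_convex_fov(v, fov) for v in convoy)
```

A real scoring rule might instead require only some fraction of a convoy's vehicles in view, but the containment test itself is the core of the computation.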

Operator Report Card

Trial: 73

Operator: Subject 3

Scenario: 32

Cue Type: algorithmic cue

Convoy 1: [16, 17, 18, 19, 20]

Convoy 2: [2, 3, 4, 5, 6]

Convoy 3: [7, 8, 9, 10, 11, 12, 13, 14, 15]

First Convoy Time: 13 seconds

Convoys found [3]

Second Convoy Time: 101 seconds

Convoys found [2, 3]

Third Convoy Time: 157 seconds

Convoys found [1, 2, 3]

Completion Time: 157 seconds

Date: 2006-08-18 10:45:28.00


