
Vision-Based Navigation for Airfield Surface Operations

Eric W. Frew* and Tristan Gerritsen†
University of Colorado, Boulder, CO, 80309

Stephen Pledgie‡, Chris Brinton§, and Shivang Patel**
Mosaic ATM, 801 Sycolin Road, Leesburg, VA, 20175

Bonnie Schwartz††
Air Force Research Laboratory, Wright Patterson Air Force Base, OH

This paper presents a vision-based navigation solution for unmanned aircraft operations on airfield surfaces in GPS-denied environments. The Unmanned Aircraft System Ground Operations Management System (UGOMS) described here combines measurements from a computer vision system and inertial sensors with an airport layout database to provide accurate real-time position determination on the airfield surface. UGOMS provides both absolute position of the aircraft as well as relative position to airport surface elements such as runway hold lines and taxiway edges. The key technical components of UGOMS are computer vision algorithms that classify image regions and Markov localization using particle filters. An overview of the overall UGOMS architecture is presented as well as details of the computer vision and Markov localization algorithms. Preliminary results are reported to highlight the performance capabilities of the system.

I. Introduction

Today's aircraft operations on the airport surface, including taxiing, airfield navigation, avoidance of runway incursions, and obstacle avoidance, all require significant human involvement and decision-making. Whereas

even personal general aviation aircraft are often equipped with avionics and auto-pilots that allow fully automated, coupled approaches, the navigation and control of the aircraft on the ground remains a highly manual operation. For the operation of Unmanned Aircraft Systems (UAS), such as the GlobalHawk, significant effort is required to survey each airfield at which the UAS will operate, and to specify detailed taxi path waypoints, which the UAS tracks using GPS guidance.1 A number of disadvantages exist in this approach, including the significant effort required to survey airfields, the manual human oversight required to monitor the operation of the UAS on the airfield and to avoid obstacles and other aircraft, and the reliance on GPS.

This paper describes the UAS Ground Operations Management System (UGOMS), which provides accurate, real-time position determination on the airfield through the fusion of multiple sensor inputs already available on the aircraft (specifically, measurements from a computer vision system and inertial navigation sensors), without reliance on external navigation systems and without detailed a priori surveying of the airfield. Further, UGOMS will support higher-level automated decision-making during airport surface operations to identify airport markings and signs, receive Air Traffic Control (ATC) instructions, and detect obstacles and other aircraft to ensure the safe and efficient operation of the aircraft on the airfield.

* Assistant Professor, Aerospace Engineering Sciences, 429 UCB, [email protected], AIAA Member
† Research Assistant, Aerospace Engineering Sciences, 429 UCB, [email protected], Student Member
‡ Director, Advanced Autonomy & Applied Cybernetics, [email protected], AIAA Member
§ President & Principal Analyst, [email protected], AIAA Member
** Software Engineer, [email protected], AIAA Member
†† Computer Engineer, [email protected]


AIAA Guidance, Navigation and Control Conference and Exhibit, 18 - 21 August 2008, Honolulu, Hawaii

AIAA 2008-7320

Copyright © 2008 by Eric W. Frew. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.


As shown in Figure 1, the UGOMS system will provide information to the UAS and to the remote pilot regarding the position of the Unmanned Aircraft (UA) on the airfield, both in terms of accurate absolute position (latitude and longitude) as well as detailed relative measurements to the edge of the taxiway, runway hold line, or other airport surface elements. A key driving requirement for the system presented here is the ability to operate without the Global Positioning System (GPS). In order to satisfy this requirement, our approach assumes the availability of an airfield layout database that is generated through the use of aerial photography.

The UGOMS system will significantly improve the utility of UAS by reducing the workload of the remote

Unmanned Aircraft pilot and by easing integration into the National Airspace System (NAS). Airfield movement today consists of highly manual processes that require significant attention from the human pilot. By allowing the UA to have a greater degree of autonomy – including performing some simple tasks completely autonomously, and providing proactive decision support and situational awareness in other situations – remote pilots can be tasked to handle multiple aircraft at once.2 The navigation component of UGOMS enables a higher level Intelligent Control layer to taxi multiple aircraft in formation, with a single remote pilot controlling the lead vehicle. The UGOMS concept will also significantly enhance the ability of unmanned aircraft to be integrated into routine operations in the NAS, and to provide 'Same Base, Same Time, Same Tempo' operations. UGOMS will provide see-and-avoid capabilities for unmanned aircraft when operating on the airport surface. Further, UGOMS will allow the overall UAS to utilize the same information that is used by human pilots today to plan and execute the portion of the mission on the airport surface. For example, using UGOMS, the airport database could be generated from aerial photography – or from standard airport diagram charts that are used by human pilots (as long as they are accurate in scale).

A vision-based navigation solution is motivated by the requirement for operation in GPS-denied environments. The navigation component of UGOMS uses only passive sensors that do not rely on external signals (other than natural light). Inertial sensors provide measurements of vehicle motion but suffer from inherent dead-reckoning drift errors. Even with high-performance inertial measurement units (IMUs) this error will eventually become significant. Furthermore, inertial data cannot be used to identify runway surface elements. Thus, a computer vision system provides complementary data about the external environment. This data can be used to aid the INS navigation solution (i.e. help correct for the dead-reckoning drift) as well as detect and identify runway features and other obstacles, improving safety and relative navigation performance.

Navigation on an airfield surface is similar to the problems faced by unmanned ground vehicles (e.g. those posed by the DARPA Grand Challenges3) while in a slightly more structured environment. Thus, solutions developed for the Grand Challenges (and more generally unmanned ground vehicles) could be applied to the problem considered here. Unfortunately, several design constraints limit the applicability of those solutions. Foremost is the limited

Figure 1. The UGOMS system provides autonomous navigation and situational awareness information to the remote pilot. (Callouts: "With UGOMS, the UA can navigate without GPS reference"; "UGOMS data is integrated into the remote pilot station.")


payload capacity of most UA compared to the ground vehicles used in the Grand Challenges. The entire payload capacity of vehicle systems that competed in the Grand Challenges was devoted to the sensing, perception, and automation of the vehicle. In contrast, UGOMS is only one component of an aircraft that is generally optimized for some set of mission requirements, with ground operations as a relatively small portion of the entire operation. The Grand Challenges vehicles carried suites of sensors such as stereo cameras and laser range finders.4,5 The size, weight, and power requirements of this number of sensors make them prohibitive for UGOMS.

Due to the low cost, low power, and information-rich nature of video systems, vision-based navigation solutions have become increasingly popular for unmanned vehicles in general6 and unmanned aircraft in particular.7-11 A common approach uses vision measurements to aid an inertial solution6,7 by performing an optimal Kalman filter-like update. The main advantage of such an approach is that the full non-linear motion calculation does not have to be performed. A second important advantage of such an approach is (optimal) estimation of the accelerometer and rate gyro drift in the IMU. Calculation of these terms improves dead reckoning performance, which would be most useful when image measurements are difficult to obtain (e.g. when the camera is pointed directly at the sun or otherwise obstructed).

One popular category of vision-based algorithms follows the simultaneous localization and mapping (SLAM) paradigm.7-11 In these methods, the positions of the static objects represented by the image features are included in the system state in addition to the camera pose. This state augmentation is performed in order to maintain the optimality of the estimation process by including all cross-correlations in the filter. SLAM formulations are particularly popular when the location of objects is also critical, e.g. for collision avoidance. The main drawback of these approaches is the need to maintain states for objects, leading to a large computational burden on the system as more and more objects are viewed. Alternative approaches exist that only estimate the pose of the camera in order to ease computational cost and achieve real-time performance. These approaches express the measurement model as a set of geometric constraints imposed on the features in a sequence of two or more images.6 By evaluating the constraint equation over the set of all features, a pseudo-measurement is created that serves as the innovation of a Kalman filter-like update step.

Access to a map of the environment greatly simplifies navigation since a full SLAM algorithm is not needed. However, registration between sensor measurements and the map becomes a dominant factor in the performance of vision-based navigation. The airfield layout database provided to the UGOMS navigation layer does not include sufficient detail to enable visual matching techniques.12 Instead, computer vision measurements of identifiable runway surface elements (e.g. taxiway boundaries that can be determined in both the layout and from imagery) are transformed into body-fixed bearing and range measurements based on known camera geometries. These measurements are then considered similarly to sonar measurements often used by indoor mobile robots.13,14 The UGOMS navigation architecture uses Monte Carlo localization (also called Markov localization) to determine the pose (position and orientation) of the vehicle on a known airfield map.15,16, 23 The inputs to the Markov Localization process include estimates of vehicle pose obtained from an inertial navigation system and measured distances to well-defined edges on the map.

The Monte Carlo localization method uses a particle filter to fuse inertial and computer vision measurements while mitigating nonlinearities and other effects that are not easily handled by Kalman Filter-based algorithms.15,16 Traditional Kalman Filter-based techniques represent the probability density function of the vehicle pose estimate by the mean and covariance matrix only. For Gaussian random variables these two terms fully represent the probability density function, and for linear systems the function stays Gaussian as it evolves. Unfortunately, vehicle dynamics and computer vision measurements violate the linear system and Gaussian noise assumptions. The Extended Kalman Filter (EKF) can be applied in some cases, but it often performs poorly even in simple conditions and cannot support bimodal probability distributions. On the other hand, particle filters create an estimate of the entire probability distribution function by maintaining a finite set of particles that represent the probability density function. By tracking the probability of poses of the vehicle, the filter is more robust than a traditional Kalman filter when


handling nonlinearities. The particle filter is also able to recover from large disturbance errors (referred to as the kidnapped robot problem), for example from incorrect matching within the computer vision system. While a Kalman Filter-based algorithm would diverge due to large linearization errors, the particle filter is able to assimilate data that contradicts the previous estimate and return to the correct estimate.15, 16, 23

The remainder of the paper describes the vision-based navigation components of the UGOMS architecture. Section II provides an overview of the UGOMS navigation architecture. The computer vision algorithms used to detect and classify surface area elements are described in Section III. The particle filter algorithms used to fuse inertial and vision measurements are presented in Section IV. Finally, conclusions are given in Section V.

II. The UAS Ground Operations Management System (UGOMS)

The UAS Ground Operations Management System is designed to provide the UAS and human operator with robust active perception of the airfield environment, facilitating autonomous guidance and navigation during ground-based operations. The main requirements of the UGOMS system are:

• Precise Determination of Airfield Position: The most fundamental technical objective of this effort is to design a system that can provide autonomous, accurate and reliable position data for the UA without requiring external navigation signals or systems. Key criteria for this objective include the accuracy and reliability of positioning, as well as the complexity of integration of required hardware components.

• Identification and Resolution of Potential Conflicts: Because the UA will operate with a greater level of autonomy on the airport surface, it is essential that potential conflicts be detected and resolved without relying on the immediate response of the remote operator. Therefore the system must be able to identify airfield markings and signs, as well as other aircraft, vehicles and obstacles.

• Airfield Navigation Guidance: To provide enhanced utility and efficiency of operation of multiple UAs on the airport surface, another technical objective is to provide a fully autonomous navigation and control function for movement of the UA on the airfield. The system should be able to support path planning at both the strategic level (i.e., parking spot to runway) and tactical level (i.e., deviating off centerline to avoid obstacles, while remaining on the taxiway). This component of UGOMS is not discussed here.

To date, UGOMS has focused on daylight operations for which electro-optical sensing and machine vision technology can provide taxiway surface detection, taxiway line and marker detection, marker recognition, looming obstacle detection, and video mensuration of UAS ground distance to taxiway edges. Information extracted from the vision system feeds airfield localization and path planning subsystems. Just as with conflict detection and warning systems for piloted operations, UGOMS must recognize all situations that present a significant conflict potential, while minimizing the occurrence of false alerts. In the longer term, UGOMS will provide instructions to the UAS control system(s) to execute appropriate conflict resolution maneuvers. Through further development, UGOMS capabilities will be extended to low-light conditions encountered during severe weather and nighttime operations. Delivering robust performance across all operating conditions will require UGOMS to expand its sensor suite by incorporating infrared thermal imaging and/or active range sensing.

The following assumptions are made in regard to the operational context of the UGOMS system: 1. We assume that there is a remote pilot, who is 'on the loop,' which means that the remote pilot

maintains sufficient awareness of the operational state of the UA that he or she can provide further mission direction when necessary.

2. The approach used for staffing remote pilot positions may include a single pilot having responsibility for multiple vehicles, or, more generally, the number of UAs may exceed the number of remote pilots who have responsibility for operating them.

3. The mission objective for UGOMS is to support airport surface navigation when GPS may not be present. Thus we will design assuming GPS is not available, but UGOMS will use GPS if it is available.


4. We assume that accurate GPS survey of the airfield is not available. Thus, even if GPS is available and used in UGOMS, we assume that UGOMS will not have adequate coordinates of the taxiway edges to navigate using GPS guidance alone.

One of UGOMS’ principal requirements is operation during denial of GPS and jamming of external navigation systems. Further, minimal to zero airfield preparation is available to support UGOMS, preventing use of emplaced RF transponders or, for instance, high-intensity infrared / visible LEDs, to aid the UAS in determining the location of surface boundaries and stop lines. Even if GPS is available and used in UGOMS, we assume that UGOMS will not have adequate coordinates of the taxiway edges to navigate using GPS guidance alone. In responding to this challenge, UGOMS has been constructed as a self-contained system requiring only an airfield map and statistical information regarding taxiway appearance, both of which can be obtained either from suitable overhead assets or through UAS overflight prior to landing.

As shown in Fig. 2, UGOMS leverages inertial measurement information, optical sensing, and GPS information when available. The machine vision system processes imagery collected by a strap-down nose-mounted electro-optical sensor to extract local information that is relevant to UAS airfield localization and surface guidance. Optically derived ego-motion estimates may be employed by the navigation filter to correct for IMU drift and bias; however, this is not currently done in the loosely coupled architecture presented in this paper. The taxiway mensuration process, described in more detail in Section III, employs Bayesian classification, morphological operations, and spatial memory of surface / material appearance statistics to identify taxiway regions, e.g., pavement, edge / center lines, and non-pavement, and to estimate linear ground distance between the UAS and features (edges) of interest. Such information provides a source of measurements for Markov localization and error signals for airfield guidance. Marker recognition and obstacle detection subsystems alert the UGOMS guidance system, providing information both on what has been recognized / detected and where it is located in the local scene.

Aircraft navigation on the airport surface is provided by loosely coupled fusion of an inertial navigation system

(INS) and a particle filter performing Markov localization. Short term position updates are provided at 50-200 Hz by integration of IMU sensor data. The output of the INS is then fed into the Sensor Fusion and Navigation Filter which

Figure 2. UGOMS integrates directly into current or future Unmanned Aircraft Systems.


uses a particle filter to provide long-term position updates at 0.5 – 2.0 Hz. In addition to the INS measurements, the particle filter uses body-fixed bearing and range data of runway surface elements derived through computer vision processing of the video sequence from the UA camera. The landmark identification component of the system relies on the airport database, which will be generated from existing databases or through the support of automation tools using aerial photography.

The UGOMS airfield guidance system receives UAS airfield localization estimates from the navigation filter and uses this information in conjunction with the UAS’s taxiway-relative position to plan and guide along taxi trajectories that are compliant with surface management policies and clearances from the air traffic control tower. Recognized markers, such as hold short lines, and obstacle detection events induce temporary interrupts in planned trajectories, enabling the UAS and human operator to tend to the initiating event prior to releasing the UAS back to its planned airfield trajectory. The UGOMS guidance system issues commands to the UAS’s steering, throttle, and braking control systems that may, at any time, be overridden by the operator-to-UAS command link.

III. Computer Vision for Taxiway Mensuration

In this section we present details on the computer vision approach used to obtain relative position measurements between the UAS and taxiway edges. We refer to this process as taxiway mensuration. Sets of such measurements are employed during the airfield localization process to update particle weights. The set of local heading angles for which range is estimated can be dynamically configured as a function of both perceived (estimated) and anticipated taxiway structure obtained via spatial query into the airfield database.

The basic process for taxiway analysis is summarized in Fig. 3. A standard Bayesian pixel classification process17,18 is employed to label portions of the scene as taxiway, non-taxiway, and marker / lines. Probability density functions (PDFs) for these classes are retrieved from the airfield database as a function of the UAS’s airfield position. Once Bayesian pixel classification is complete, a series of morphological operations is applied to the “class image” in an effort to correct misclassifications using simple knowledge regarding the structure of the environment proximal to the UAS. The corrected class image may be used to update pixel class PDFs stored in the airfield database. The UAS’s distance from taxiway edges along selected relative heading angles is computed from the class image by searching along image plane trajectories that correspond to linear trajectories along the ground plane. It is important to note that our initial formulation ignores lens effects, which may impart nontrivial image distortion as the field of view is configured to enable a single strap-down camera to observe sufficient airfield surface in close proximity to the UAS.

The proximity of the camera to the ground plane introduces a number of interesting challenges as surface regions closer to the camera experience greater blur (smoothing) during UAS motion while those farther away suffer primarily from the effects of limited resolution. Under most circumstances, the computer vision system may concentrate its efforts on portions of the image space that correspond to the local ground plane directly in front of the UAS. However, if UGOMS is employed during takeoff and landing procedures, then the computer vision system must be adjusted in a manner that provides collision threat and runway deviation detections with sufficient lead time for the UAS to select and execute reactive maneuvers consistent with its kinematic and dynamic equations of motion.


A. Bayesian Classification

A number of colorspaces can be used for modeling the color histogram of the taxiway surface, including RGB, normalized RGB, HSI, YCrCb, etc.; Ref. 17 provides a useful comparison of different colorspaces that can be employed. In our approach, we use the normalized RGB colorspace, which has the advantage of being invariant to changes in orientation relative to the light source. The normalized RGB values (r, g, b) for a pixel can be calculated from the input values (R, G, B) using the following equations:

$$ r = \frac{R}{R+G+B}, \qquad g = \frac{G}{R+G+B}, \qquad b = \frac{B}{R+G+B} \qquad (1) $$

To classify an image and segment the taxiway, a Bayesian likelihood approach is used similar to the classifier described in Ref. 17. Given a pixel of normalized RGB color c, the probability that the pixel belongs to the taxiway is given by Bayes rule:

$$ P(\mathrm{taxiway}\,|\,c) = \frac{P(c\,|\,\mathrm{taxiway})\,P(\mathrm{taxiway})}{P(c\,|\,\mathrm{taxiway})\,P(\mathrm{taxiway}) + P(c\,|\,\overline{\mathrm{taxiway}})\,P(\overline{\mathrm{taxiway}})} \qquad (2) $$

where the probability P(c | taxiway) of the color c given that the pixel belongs to the taxiway and the probability P(c | not taxiway) of the color c given that the pixel does not belong to the taxiway are computed using histograms of taxiway and non-taxiway pixels, respectively. The marginal probabilities P(taxiway) and P(not taxiway) can be estimated from the overall probabilities of taxiway and non-taxiway surface contained in a training set. Application of UGOMS technology to either a known airfield or an airfield region previously visited by the UAS can enable one to model both the histograms and marginal probabilities as a function of airfield location. In this sense, the UAS's airfield position and heading angle is used to define a query into the airfield database, which returns the most relevant histograms and marginal probabilities using a distance metric. Taking this a step further, the marginal probabilities may also be used to propagate pixel classification results from one cycle to the next, enabling one to employ a predictor-corrector scheme akin to that found in Kalman filtering.

Figure 3. UGOMS estimates its range to taxiway edges along a set of relative heading angles, constructing a pseudo range map to support the airfield localization process. As shown above, Bayesian pixel classification employs appearance probability density functions that are spatially recalled from the airfield database using the UGOMS navigation solution. Errors in the appearance-based pixel classifications are corrected using morphological operations prior to estimating distances between the UAS and taxiway edges along selected relative heading angles.


UGOMS currently employs a sequence of binary pixel classifications to label (segment) an image. In this approach, comparisons are made between Bayesian probabilities given by Eq. (2). In particular, we look at the following ratio of probabilities

$$ \frac{P(\mathrm{taxiway}\,|\,c)}{P(\overline{\mathrm{taxiway}}\,|\,c)} = \frac{P(c\,|\,\mathrm{taxiway})\,P(\mathrm{taxiway})}{P(c\,|\,\overline{\mathrm{taxiway}})\,P(\overline{\mathrm{taxiway}})} \qquad (3) $$

and ask whether this ratio exceeds a minimum threshold for declaring the respective pixel a member of the taxiway class. The form of Eq. (3) allows us to pose this decision in terms of the standard likelihood ratio, i.e.,

$$ \frac{P(c\,|\,\mathrm{taxiway})}{P(c\,|\,\overline{\mathrm{taxiway}})} > \theta \qquad (4) $$

Selecting the threshold as θ = 1, we obtain a maximum a posteriori (MAP) estimate of the class to which a pixel of normalized RGB color c belongs. When the likelihood ratio in Eq. (4) is greater than one, the pixel is classified as taxiway; for values less than or equal to one it is classified as non-taxiway. This classification process may also be used to classify taxiway pixels as either pavement or non-pavement and, continuing further, to classify non-pavement pixels as either line / marker (painted) or unknown.

To construct the probability distributions P(c | taxiway) and P(c | not taxiway) from the normalized RGB image, the taxiway is manually extracted and a 3-dimensional histogram of the airfield surface is composed. This process constitutes a learning phase during which the statistical characteristics of materials are measured and stored for future reference. Histograms are normalized to create probability distributions, i.e.,

$$ P(c\,|\,\mathrm{taxiway}) = \frac{H_{\mathrm{taxiway}}[c]}{Norm} \qquad (5) $$

where H_taxiway[c] is the histogram bin count for color vector c that is known to be of class taxiway. Norm is a normalization coefficient most commonly selected as the total number of binned color vectors. P(c | not taxiway) is calculated in analogous fashion using known non-taxiway pixels. Figure 4 illustrates typical classification results obtained using the procedure detailed above. In the figure, white denotes pavement class pixels and black denotes non-pavement class pixels.
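As a concrete illustration of the chain in Eqs. (1)-(5), the following Python sketch builds normalized-RGB histograms and applies the ratio test. It is a minimal sketch under stated assumptions, not the UGOMS implementation: the histogram resolution, the prior handling, and the function names are choices made here for illustration.

```python
import numpy as np

def normalize_rgb(img):
    """Eq. (1): per-pixel normalized (r, g, b), invariant to light-source orientation."""
    img = img.astype(np.float64)
    total = img.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0                      # guard against all-black pixels
    return img / total

def color_histogram(pixels, bins=32):
    """Eq. (5): 3-D histogram of normalized colors, scaled to a probability distribution."""
    hist, edges = np.histogramdd(pixels, bins=bins, range=[(0.0, 1.0)] * 3)
    return hist / max(hist.sum(), 1.0), edges

def classify_taxiway(img, hist_tax, hist_non, edges, prior_tax=0.5, theta=1.0):
    """Label pixels taxiway / non-taxiway with the posterior ratio of Eq. (3);
    theta = 1 corresponds to the MAP decision discussed in the text."""
    rgb = normalize_rgb(img).reshape(-1, 3)
    idx = tuple(np.clip(np.digitize(rgb[:, k], edges[k]) - 1, 0, hist_tax.shape[k] - 1)
                for k in range(3))
    p_c_tax = hist_tax[idx]                      # P(c | taxiway)
    p_c_non = hist_non[idx]                      # P(c | not taxiway)
    ratio = (p_c_tax * prior_tax) / (p_c_non * (1.0 - prior_tax) + 1e-12)
    return (ratio > theta).reshape(img.shape[:2])
```

In use, hist_tax would be built from manually labeled taxiway pixels and hist_non from the remaining training pixels, with the priors taken from their relative frequencies as described above.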


B. Morphological Refinement

From Fig. 4 it can be seen that a number of pixel fragments (small clusters) may be misclassified using appearance information alone. Also, larger “holes” may arise due to the presence of objects on the taxiway or, as in Fig. 4, from intentional classification of the image into pavement / non-pavement classes. While median filtering can be quite effective in removing outliers, its effectiveness rapidly degrades as the definition of an outlier loses meaning when many such outliers (misclassifications) begin to form clusters. Further, application of a median filter to the entire image will introduce small changes to taxiway edge locations, which are precisely what UGOMS seeks to measure with maximum accuracy.

Morphological operations20 can be applied to the binary "class image" as a means of imposing structural constraints on the space of feasible class images. To this end, we make the assumption that the image region closest to the UAS, which is of principal interest when generating the angle-range map, is continuous, i.e., fully connected through same-class pixel adjacency, and homogeneous in the sense that no sub-regions of class non-taxiway should exist, i.e., a contour of taxiway class pixels should not enclose a cluster of non-taxiway pixels. These rules are applied only to a local neighborhood about the UAS because distant regions of the airfield contain significant complexity with regards to the spatial distribution of pixel classes.

Figure 4. Image segmentation results obtained with simple Bayesian classification followed by morphological refinement, which imposes knowledge constraints on the classification solution space. (Panels: Original Airfield Image; Multi-Component Class Image; Single-Component Class Image; Final Refined Class Image.)


We begin by applying a connected components algorithm to the class image19, which further reinforces our proximal continuity rule by retaining only the largest, closest connected (contiguous) region of taxiway pixels that has at least one pixel in common with the bottom row of the image and whose distance from the UAS is measured according to the number of bottom-row pixels between the respective component and the center bottom-row pixel. The output from this step is a binary image featuring only one connected component with a number of potential "holes" due to earlier misclassifications. Figure 4 illustrates the significance of the connected component pre-processing.
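The bottom-row component selection can be sketched with SciPy's connected-component labeling. The 8-connectivity structuring element and the tie-breaking by bottom-row distance alone are assumptions of this sketch rather than details given in the text.

```python
import numpy as np
from scipy import ndimage

def keep_bottom_component(class_image):
    """Keep only the connected taxiway region touching the bottom image row,
    choosing the component whose bottom-row pixels lie closest to the row center."""
    labels, _ = ndimage.label(class_image, structure=np.ones((3, 3), dtype=int))
    bottom = labels[-1, :]
    candidates = np.unique(bottom[bottom > 0])
    if candidates.size == 0:                     # no taxiway pixels touch the bottom row
        return np.zeros_like(class_image, dtype=bool)
    center = class_image.shape[1] // 2
    dist = [np.abs(np.where(bottom == lab)[0] - center).min() for lab in candidates]
    return labels == candidates[int(np.argmin(dist))]
```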

Iterative morphology operations are subsequently applied to identify and fill all gaps that are contained within the taxiway component. In particular, we apply the following refinement process:

$$ X_k = \left( X_{k-1} \oplus B \right) \cap \overline{A}_{\mathrm{taxiway}}, \qquad k = 1, 2, 3, \ldots \qquad (6) $$

where $\overline{A}_{\mathrm{taxiway}}$ is the inverted class image of the airfield, i.e., pixel value = 1 if not taxiway, $\cap$ is the intersection operator implying logical AND between corresponding elements of the image matrix, $\oplus$ is the dilation operator, $X_0$ is the seed image that is initialized as a null image of the same dimension as the class image with a 1-pixel thick border (value = 1), and B is the 4-connectivity structuring element given by

$$ B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix} \qquad (7) $$

The morphology operation is applied repeatedly until $X_k = X_{k-1}$, at which time $X_k$ is inverted to obtain the "filled" taxiway region. This procedure can be thought of as one that "floods" inward from the periphery through all non-taxiway class pixel regions; those non-taxiway clusters not reached by this constrained flow are reassigned as taxiway class pixels. As shown in Fig. 4, the final result yields a continuous homogeneous taxiway region whose edges remain sharp.
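A short sketch of the flood-and-invert refinement of Eqs. (6) and (7), written here with scipy.ndimage rather than the authors' implementation; only the structuring element and the stopping rule come from the text.

```python
import numpy as np
from scipy import ndimage

def fill_taxiway_holes(class_image):
    """Iterate X_k = (X_{k-1} dilated by B) AND (not taxiway) from a 1-pixel border
    seed until X_k stops changing (Eq. 6), then invert to obtain the filled region."""
    B = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)            # 4-connectivity element of Eq. (7)
    not_taxiway = ~class_image.astype(bool)          # inverted class image: 1 if not taxiway
    X = np.zeros_like(not_taxiway)
    X[0, :] = X[-1, :] = X[:, 0] = X[:, -1] = True   # seed: border pixels set to 1
    while True:
        X_next = ndimage.binary_dilation(X, structure=B) & not_taxiway
        if np.array_equal(X_next, X):
            break
        X = X_next
    # Non-taxiway clusters never reached by the flood from the periphery are interior
    # holes; inverting X reassigns them to the taxiway class.
    return ~X
```

Applied to a class image with an interior cluster of misclassified pixels, the returned mask marks that cluster as taxiway while leaving non-taxiway regions connected to the image border untouched.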

C. Taxiway Edge Mensuration

The final refined class image is analyzed to extract range and bearing estimates between the UAS and taxiway edges. One may search along image plane trajectories that correspond to ground plane lines drawn from the UAS outward at a relative heading angle. The point at which the pixel class transitions from taxiway to non-taxiway is selected as the edge location, and the ground distance to that point is then computed assuming a flat earth model. To accomplish this, a 3-D ray from the focal point through the edge point on the image plane is intersected with the ground plane, and the distance to that point of intersection is computed.

In the current UGOMS system, we assume a single "forward facing" strap-down camera that may be depressed or elevated about its local y-axis (Fig. 5) by an angle φ. Given an image plane point $(x^p_i, y^p_i)$, we compute the coordinates $(x^p_g, y^p_g, z^p_g)$ of the corresponding ground point p and then obtain the range and bearing to that ground location. A camera whose coordinate system origin is elevated above ground level by an amount h, with no offsets in the global x or global y directions, views the global point p with camera frame coordinates $(x^p_c, y^p_c, z^p_c)$:

$$ x^p_c = x^p_g, \qquad y^p_c = \left(h - z^p_g\right)\cos\phi - y^p_g \sin\phi, \qquad z^p_c = y^p_g \cos\phi + \left(h - z^p_g\right)\sin\phi \qquad (8) $$

The image plane coordinates are found using a simple camera model:


$$ x^p_i = \frac{f}{m_x}\,\frac{x^p_c}{z^p_c} + s\,\frac{y^p_c}{z^p_c} + x_o, \qquad y^p_i = \frac{f}{m_y}\,\frac{y^p_c}{z^p_c} + y_o \qquad (9) $$

where f is the focal length, (m_x, m_y) is the physical pixel size, s is the skew factor, and (x_o, y_o) is the principal point, all of which can be found through camera calibration. Eqs. (8) and (9), together with the flat-earth constraint $z^p_g = 0$, can be used to uniquely determine the corresponding global coordinates through solution of a simple linear system of equations:

$$ \begin{bmatrix} \dfrac{f}{m_x} & -\left[\left(x^p_i - x_o\right) c_\phi + s\, s_\phi\right] \\ 0 & \left(y^p_i - y_o\right) c_\phi + \dfrac{f}{m_y}\, s_\phi \end{bmatrix} \begin{bmatrix} x^p_g \\ y^p_g \end{bmatrix} = h \begin{bmatrix} \left(x^p_i - x_o\right) s_\phi - s\, c_\phi \\ \dfrac{f}{m_y}\, c_\phi - \left(y^p_i - y_o\right) s_\phi \end{bmatrix} \qquad (10) $$

where $s_\phi$ and $c_\phi$ are sin(φ) and cos(φ), respectively. There can be times for which the global coordinates of the ground point are computed as behind the camera, such as when a viewed point does not reside on the ground plane and when camera orientation error grows sufficiently high. This case can be specifically tested for on a pixel-wise basis and handled as an exception or simply discarded from analysis. From $(x^p_g, y^p_g)$ computed in Eq. (10), range and bearing are generated:

$$ R\!\left(x^p_g, y^p_g\right) = \sqrt{\left(x^p_g\right)^2 + \left(y^p_g\right)^2}, \qquad B\!\left(x^p_g, y^p_g\right) = \arctan\!\left(\frac{x^p_g}{y^p_g}\right) \qquad (11) $$

Bearing angle is defined relative to the forward facing y-axis of a right hand coordinate system whose x-y plane is parallel to the surface and origin coincident with that of the image plane. One can either search along specific radials on the image plane to determine taxiway boundary pixels and then apply the above technique to obtain range and bearing measurements, or, alternatively, use a randomized sampling approach that selects vertical columns of the image for analysis and then statistically searches upward (near to far in global coordinates) to identify taxiway boundary pixels that are subsequently analyzed as above. In practice, computational simplicity is essential, often making randomized column-wise boundary search the method of choice.
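The mensuration step of Eqs. (8)-(11) can be sketched by back-projecting a boundary pixel and intersecting the ray with the ground plane. Because only parts of Eq. (8) are recoverable from the source, the sketch below fixes one self-consistent convention (camera z as the optical axis, ground y forward and z up, positive φ pitching the camera down); the camera height, pitch, and principal-point values are illustrative, not UGOMS parameters.

```python
import numpy as np

def pixel_to_range_bearing(xi, yi, f=12.5e-3, mx=4.65e-6, my=4.65e-6, s=0.0,
                           xo=320.0, yo=240.0, h=2.0, phi=np.radians(10.0)):
    """Project an image-plane boundary pixel to the flat ground plane and return
    (range, bearing) as in Eq. (11), or None if the ray never reaches the ground."""
    # Invert the pinhole model of Eq. (9) with z_c = 1 to get a camera-frame ray.
    y_c = (yi - yo) * my / f
    x_c = ((xi - xo) - s * y_c) * mx / f
    d_cam = np.array([x_c, y_c, 1.0])
    # Columns of R are the camera x-, y-, z-axes expressed in the ground frame
    # (x lateral, y forward, z up) for a camera pitched down by phi.
    c, sn = np.cos(phi), np.sin(phi)
    R = np.array([[1.0,  0.0, 0.0],
                  [0.0, -sn,  c  ],
                  [0.0, -c,  -sn ]])
    d_g = R @ d_cam
    if d_g[2] >= 0.0:                      # ray does not descend toward the surface
        return None
    origin = np.array([0.0, 0.0, h])       # focal point a height h above the ground
    t = -origin[2] / d_g[2]
    x_g, y_g, _ = origin + t * d_g
    rng = np.hypot(x_g, y_g)               # range, Eq. (11)
    brg = np.arctan2(x_g, y_g)             # bearing relative to the forward y-axis
    return rng, brg
```

Searching up an image column and calling this function at the first taxiway-to-non-taxiway transition yields one visual-sonar range/bearing pair.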

Figure 5. Simple camera model. (The figure shows the focal point, focal length f, pitch angle φ, camera axes, ground axes, the image point $(x^p_i, y^p_i)$, and the ground point $(x^p_g, y^p_g, z^p_g)$.)


IV. Airfield Navigation

Airfield navigation is achieved through a loosely-coupled closed-loop architecture that combines inertial measurements with data provided by the computer vision system described in the previous section (Fig. 6). The key component of this architecture is a Markov localization algorithm, based on a particle filter implementation, which provides global position determination on the airfield surface referenced to the layout database and relative position to runway surface elements such as taxiway edges and hold short lines. The algorithm is loosely coupled in the sense that the output of the inertial navigation system is compared against the output of the Markov localization process and the difference is used as a measurement in a Kalman Filter that estimates error states, including rate gyro and accelerometer biases.21 In the navigation architecture presented here, the inputs into the Markov localization algorithm are the displacement vector from the inertial navigation solution (including position and orientation), the airfield layout database, and computer vision measurements transformed into "visual sonar" data.13,14

The navigation architecture follows the standard loosely-coupled closed-loop approach to GPS/INS integration21

with the output of the Markov localization fulfilling the role of the GPS position measurements. The INS solution is generated from IMU measurements of acceleration and angular rates through standard mechanization equations. (Details are omitted here and can be found in Ref. 21.) Perturbation of the INS mechanization equations is used to derive a linear set of differential equations that approximately describe how navigation errors propagate over time. These equations are used in a Kalman filter to estimate the navigation errors. UGOMS uses the difference between the position update from the Markov localization system and the INS, as well as a non-holonomic velocity constraint, to reduce the accumulation of navigation error in the Kalman update step. It must be noted that in the standard integration the GPS and INS outputs are independent, while the Markov localization uses the INS output as its action model and is therefore dependent on it. Proper integration of the INS and Markov localization outputs in the Kalman filter requires a measure of the correlation between the navigation errors and the particle filter output. The system presented here assumes that this correlation is small and that it can be ignored.
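A minimal sketch of the loosely coupled correction described above, assuming a small error-state vector and treating the Markov-localization-minus-INS position difference as the measurement; the state ordering, covariance values, and omission of the non-holonomic constraint are simplifications made here, not details from the paper.

```python
import numpy as np

def error_state_update(pos_ins, P, pos_pf, R_pf):
    """One Kalman measurement update for an INS error state
    dx = [dx_pos, dy_pos, dvx, dvy, dpsi, b_gyro, b_ax, b_ay] (illustrative layout).
    Measurement: z = pos_pf - pos_ins, modeled as z = H dx + noise with covariance R_pf."""
    n = P.shape[0]
    H = np.zeros((2, n))
    H[:, 0:2] = np.eye(2)                   # only the position-error states are observed
    z = np.asarray(pos_pf) - np.asarray(pos_ins)
    S = H @ P @ H.T + R_pf                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    dx = K @ z                              # estimated navigation errors (incl. biases)
    P_new = (np.eye(n) - K @ H) @ P
    corrected_pos = np.asarray(pos_ins) + dx[0:2]   # closed-loop position correction
    return corrected_pos, dx, P_new

# Example with an 8-state error filter and a 5 m PF measurement sigma (illustrative numbers).
P0 = np.diag([4.0, 4.0, 0.25, 0.25, 0.01, 1e-4, 1e-3, 1e-3])
pos, dx, P1 = error_state_update([100.0, -50.0], P0, [102.5, -48.0], np.eye(2) * 25.0)
```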

The basic particle filter algorithm is depicted in Figure 7. The filter works by representing the probability distribution of the aircraft state vector $\mathbf{x}_k = [x_k, y_k, \psi_k]^T$ as a set of weighted particles that combine to give a discrete probability density function. Let $S_k = \{\mathbf{x}^i_k, w^i_k\}_{i=1}^{N}$ represent a set of N particles, where $\mathbf{x}^i_k$ is the state vector associated with the i-th particle and $w^i_k$ is its scalar weight with $\sum_i w^i_k = 1$. Like most recursive estimation algorithms, the particle filter is run in two main steps. First, the action model is used to propagate the particle set forward in time by drawing a new set of particles from the distribution $\mathbf{x}^i_{k+1} \sim p(\mathbf{x}_{k+1}\,|\,\mathbf{x}^i_k, \mathbf{a}_{k+1})$, where $\mathbf{a}_{k+1}$ is the system input or action. Next, the sensor model is used to update the weights of each particle, $w^i_{k+1} = p(\mathbf{z}_{k+1}\,|\,\mathbf{x}^i_{k+1})$, based on the current observation or measurement $\mathbf{z}_{k+1}$. Finally, particles and their weights are resampled and normalized based on different criteria.

Figure 6. Loosely coupled navigation architecture. (Block diagram: the IMU drives the INS mechanization at 50-200 Hz; the vision system and airfield database feed Markov localization at 0.5-2 Hz; a Kalman filter running at 50-200 Hz, aided by a no-slip constraint, estimates the rate gyro and accelerometer biases and closes the loop on the INS errors.)


Detailed discussion of the implementation of a particle filter may be found in Ref. 15 and Ref. 16.
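The two-step cycle just described can be written as a compact skeleton. Here, `action_model` and `sensor_model` are placeholders for the models detailed in the next two subsections; the code is an illustrative sketch rather than the UGOMS filter.

```python
import numpy as np

def particle_filter_step(particles, weights, dx_ins, z_meas, action_model, sensor_model, rng):
    """One SIR cycle (Fig. 7): propagate with the action model, reweight with the
    sensor model (Eq. 18), then resample with replacement using the weights."""
    particles = np.array([action_model(p, dx_ins, rng) for p in particles])   # predict
    weights = np.array([sensor_model(z_meas, p) for p in particles])          # weight
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)          # resample
    particles, weights = particles[idx], weights[idx]
    return particles, weights / weights.sum()
```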

A. Action Model

The output of an inertial navigation system is used to define the action model of the particle filter. Thus, taking $\delta\mathbf{x}^{INS}_{k+1}$ to be the displacement vector calculated by the INS, we assume

$$ \mathbf{x}_{k+1} = \mathbf{x}_k + \delta\mathbf{x}^{INS}_{k+1} + \boldsymbol{\nu}_{k+1}\!\left(\delta\mathbf{x}^{INS}_{k+1}, \mathbf{x}_k\right) \qquad (12) $$

where $\boldsymbol{\nu}_{k+1}(\delta\mathbf{x}^{INS}_{k+1}, \mathbf{x}_k)$ is the dead-reckoning error from the INS, which can be dependent on the specific action(s) of the aircraft. In many particle filter implementations the integration of the inertial measurements is included in the action model in order to calculate accurate representations of this noise term. Here, we assume a separate INS processes all inertial measurements and that its (action-dependent) error distribution $p(\boldsymbol{\nu}_{k+1}\,|\,\delta\mathbf{x}^{INS}_{k+1}, \mathbf{x}_k)$ can be calculated in advance. For each particle in the initial set $S_k$ this action model is used to generate a new particle sample such that

$$ \tilde{\mathbf{x}}^i_{k+1} = \mathbf{x}^i_k + \delta\mathbf{x}^{INS}_{k+1} + \tilde{\boldsymbol{\nu}}^i_{k+1} \qquad (13) $$

where $\tilde{\boldsymbol{\nu}}^i_{k+1} \sim p(\boldsymbol{\nu}_{k+1}\,|\,\delta\mathbf{x}^{INS}_{k+1}, \mathbf{x}^i_k)$ is drawn from the process error distribution. Adding this term has the effect of "spreading out" the new particle set $\tilde{S}_{k+1} = \{\tilde{\mathbf{x}}^i_{k+1}, w^i_k\}_{i=1}^{N}$ compared to the old one $S_k$.
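A sketch of the action model of Eqs. (12) and (13). The zero-mean Gaussian with covariance Q standing in for the action-dependent dead-reckoning error distribution is an assumption of this sketch.

```python
import numpy as np

def action_model(particle, dx_ins, rng, Q=np.diag([0.25, 0.25, 0.001])):
    """Propagate one particle [x, y, psi] with the INS displacement dx_ins (Eq. 13),
    adding a draw from the process error distribution to spread the particle set."""
    nu = rng.multivariate_normal(np.zeros(3), Q)       # nu ~ p(nu | dx, x), assumed Gaussian
    new = np.asarray(particle, dtype=float) + np.asarray(dx_ins, dtype=float) + nu
    new[2] = (new[2] + np.pi) % (2.0 * np.pi) - np.pi  # wrap heading to [-pi, pi)
    return new
```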

B. Sensor Model

The computer vision system (Section III) identifies runway surface elements and transforms them into "visual sonar" measurements (Fig. 8) in body-fixed polar coordinates, i.e. $\mathbf{z}_k = \{\mathbf{z}^1_k, \ldots, \mathbf{z}^m_k\}$ where $\mathbf{z}^j_k = \{r^j_k, \theta^j_k\}$. The number of measurements m and the specific bearing angles vary with each sample time. Computer vision is inherently a bearing sensor, with range to objects determined by known geometric properties. In our case we assume all camera parameters are known and that objects are located on the ground, allowing range to be calculated as the intersection between the ground plane and a 3-D ray drawn from the object position in the image frame.

Figure 7. Basic Sampling Importance Resampling (SIR) particle filter algorithm.


For the purposes of developing a probabilistic model of the sensor, we assume bearing measurements are exact

and that the range measurement is corrupted by zero mean noise with a standard deviation that is a function of the range to the object. In other words, we assume

$$ \mathbf{z}^j_k = \mathbf{p}^j_k + \boldsymbol{\eta}^j_k \qquad (14) $$

where $\mathbf{p}^j_k$ is the actual obstacle position and $\boldsymbol{\eta}^j_k$ is sensor noise with range variance $\sigma_r^2$. Range-bearing measurements possess uncertainty due largely to imager quantization effects and pixel

classification errors. The distribution of this uncertainty is nonuniform over the image plane and varies as a function of the relative viewing geometry between a surface patch and the pixel it illuminates. As commonly encountered in imaging applications, ranging uncertainty varies as a polynomial function of range and is in fact a quartic polynomial for this application.

The initial model we have constructed utilizes several simplifying assumptions: 1.) Pixels are assigned to a class (taxiway, non-taxiway) if more than 50% of the incident light can be attributed to that class, 2.) On the image plane, range varies principally as a function of pixel row number and bearing varies principally as a function of column number. This can be easily verified by evaluating Eqs. (10) and (11) across the image plane. From this assumption, we examine how range uncertainty arises from quantization over image rows and how bearing uncertainty arises from quantization effects over image columns.

On the image plane, the threshold for pixel classification induces uncertainty about the actual location of the taxiway boundary. Given a pixel (j, k) identified as a boundary pixel, one cannot ascertain whether the light illuminating pixel (j, k) minimally exceeds the classification threshold or completely fills this pixel and minimally misses the classification threshold for the adjacent pixel, as shown in Fig. 9. As such, there exists a one pixel sized uncertainty region about the identified boundary pixel, as shown in Fig. 9. Because we search up image columns to detect boundaries and range varies principally within columns, we consider an uncertainty region of one pixel for

Figure 8. Image measurements of runway surface elements are converted into visual sonar measurements of body-fixed bearing and range.

Figure 9. Two pixels of uncertainty in bearing (row) and 1 pixel of uncertainty in range (column).


range. Across the image plane, Eq. (10) and Eq. (11) are used to project the pixel-wise uncertainty to the ground plane, and the differences in ranges to projected points define the 2-sigma value for a Gaussian distribution centered on the mean range, as illustrated in Figure 10. For the current endeavor, a nonlinear least squares routine was used to fit a 4th order polynomial in range to the range variance values along the zero-degree bearing. For a 640x1024 pixel image plane with 4.65 micrometer pixels, zero skew, and a 12.5 mm focal length, the polynomial takes the form

$$ \sigma_r^2(R) = c_4 R^4 + c_3 R^3 + c_2 R^2 + c_1 R + c_0 \qquad (15) $$

with the coefficients $c_i$ determined by the least squares fit.
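The quartic fit described above can be reproduced in outline as follows: project one-pixel row steps to the ground along the zero-degree bearing, treat each step as a 2-sigma interval, and fit a 4th-order polynomial. The camera height, depression angle, and principal-point row used here are illustrative assumptions, so the resulting coefficients are not those of Eq. (15).

```python
import numpy as np

F, PIX = 12.5e-3, 4.65e-6            # focal length and pixel pitch quoted in the text
H_CAM, PHI = 2.0, np.radians(10.0)   # camera height and depression angle: assumed values
V0, ROWS = 320.0, 1024               # principal-point row and image height: assumed values

def ground_range(row):
    """Flat-earth range along the zero-degree bearing for a given image row."""
    alpha = np.arctan((row - V0) * PIX / F)    # ray angle below the optical axis
    return H_CAM / np.tan(PHI + alpha)

rows = np.arange(int(V0) + 1, ROWS - 1, dtype=float)   # rows viewing the ground ahead
r = ground_range(rows)
step = ground_range(rows) - ground_range(rows + 1)     # one-pixel quantization on the ground
variance = (step / 2.0) ** 2                           # one-pixel step treated as 2 sigma
coeffs = np.polyfit(r, variance, 4)                    # sigma_r^2(R) as a quartic, cf. Eq. (15)
```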

Calculation of the probability $p(\mathbf{z}_k\,|\,\mathbf{x}^i_k)$ of the measurement coming from a given particle is broken down into several steps. First, the individual range-bearing pairs $\mathbf{z}^j_k$ are used to generate pseudorange measurements $\hat{r}^{i,j}_k$ by ray tracing within the airfield layout database along the given bearing angles $\theta^j_k$ from each state $\mathbf{x}^i_k$. Next, a Gaussian probability distribution $p(r\,|\,\mathbf{x}^i_k)$ for the range is created with mean $\hat{r}^{i,j}_k$ and standard deviation $\sigma_r(\hat{r}^{i,j}_k)$. The value $p(\mathbf{z}^j_k\,|\,\mathbf{x}^i_k)$ of this probability density function evaluated at the measured range is then calculated using a normal distribution probability density function (Fig. 11):

$$ p(\mathbf{z}^j_k\,|\,\mathbf{x}^i_k) = p(r = r^j_k\,|\,\mathbf{x}^i_k) = \frac{1}{\sqrt{2\pi}\,\sigma_r(\hat{r}^{i,j}_k)} \exp\!\left[ -\frac{\left(r^j_k - \hat{r}^{i,j}_k\right)^2}{2\,\sigma_r^2(\hat{r}^{i,j}_k)} \right] \qquad (16) $$

Figure 10. Pixel uncertainty on surface projection.

Figure 11. Particle weighting for the sensor model, showing the particle (blue) and robot (black) and their mapped and sensed distances, respectively.


Next, the probability of all the measurements given the particle is calculated as the product of the individual terms

$$ \pi^i_k \equiv p(\mathbf{z}_k\,|\,\mathbf{x}^i_k) = \prod_{j=1}^{m} p(\mathbf{z}^j_k\,|\,\mathbf{x}^i_k) \qquad (17) $$

Finally, the weights are renormalized to give final values

$$ w^i_k = \frac{\pi^i_k}{\sum_{i=1}^{N} \pi^i_k} \qquad (18) $$

and the new particle set $P_{k+1} = \{\tilde{\mathbf{x}}^i_{k+1}, w^i_{k+1}\}_{i=1}^{N}$ is formed.
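The weighting chain of Eqs. (16)-(18) is sketched below. `ray_trace_range`, which queries the airfield layout for the pseudorange along a bearing, is left abstract here, and `range_sigma` is a stand-in for the fitted polynomial of Eq. (15) with placeholder coefficients.

```python
import numpy as np

def range_sigma(r, coeffs=(1.0e-8, 0.0, 1.0e-4, 0.0, 0.05)):
    """Standard deviation sigma_r(r) from a quartic variance model (placeholder values)."""
    return np.sqrt(max(np.polyval(coeffs, r), 1e-9))

def particle_weight(particle, measurements, airfield_map, ray_trace_range):
    """Un-normalized weight pi_k^i of Eq. (17): product of Gaussian range likelihoods
    (Eq. 16) over the visual-sonar pairs z_k^j = (r_k^j, theta_k^j)."""
    w = 1.0
    for r_meas, bearing in measurements:
        r_hat = ray_trace_range(particle, bearing, airfield_map)   # pseudorange r_hat_k^{i,j}
        sigma = range_sigma(r_hat)
        w *= np.exp(-0.5 * ((r_meas - r_hat) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return w

def normalize_weights(raw):
    """Eq. (18): rescale the weights so they sum to one."""
    raw = np.asarray(raw, dtype=float)
    return raw / raw.sum()
```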

C. Sampling Importance Resampling

Resampling is performed to remove particles from the tail of the probability distribution and "condense" the particle set. The algorithm is described in detail in Fig. 7. N particles are drawn randomly from $P_{k+1}$ using their weights as probability values. Multiple copies of the same particle are allowed; thus higher-weighted particles are more likely to be selected repeatedly than lesser-weighted particles. The weights from this new group of particles are normalized again to yield the new particle set $S_{k+1} = \{\mathbf{x}^i_{k+1}, w^i_{k+1}\}_{i=1}^{N}$, which is the input to the next iteration of the filter.
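A literal sketch of the resampling step just described: N particles are drawn with replacement using the weights as probabilities, and the weights of the drawn copies are renormalized.

```python
import numpy as np

def sir_resample(particles, weights, rng):
    """Sampling Importance Resampling: draw N particles with replacement (copies allowed)
    using the weights as probabilities, then renormalize the surviving weights."""
    weights = np.asarray(weights, dtype=float)
    idx = rng.choice(len(weights), size=len(weights), p=weights / weights.sum())
    new_particles = np.asarray(particles)[idx]
    new_weights = weights[idx] / weights[idx].sum()
    return new_particles, new_weights
```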

V. Simulations

In order to simulate the performance of the navigation filter, an airfield layout database was constructed by hand based on aerial imagery of the Leesburg Executive Airport in Leesburg, VA. The simple map consists only of the main taxiways and runway. The parking lots, hangars, and other structures and surfaces off the main taxiway and runway are not modeled. This provides a simple map that also allows for inclusion of several degenerate movements that will test the performance of the filter. Figure 12 shows the original aerial image of the Leesburg airport and the resulting airfield layout which is stored as a set of polygons denoted by an ordered list of their vertices.

For the simulations, visual sonar data is replicated using a simple ray-tracing algorithm22 to determine the distance to the nearest feature in the layout database along a selected bearing line. The bearing lines are chosen randomly with a maximum of five used at any given iteration of the filter. The simulated sensor data includes measurement noise which is added according to the sensor model given above.
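The simulated visual-sonar data can be produced with a simple 2-D ray/segment intersection against the polygon layout, as sketched below; representing the database as a list of vertex lists and the bearing convention used here are assumptions of this sketch.

```python
import numpy as np

def ray_segment_distance(origin, direction, p1, p2):
    """Distance along a 2-D ray to the segment (p1, p2), or None if there is no hit."""
    d = np.asarray(direction, dtype=float)
    e = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    denom = d[0] * e[1] - d[1] * e[0]
    if abs(denom) < 1e-12:                               # ray parallel to the segment
        return None
    diff = np.asarray(p1, dtype=float) - np.asarray(origin, dtype=float)
    t = (diff[0] * e[1] - diff[1] * e[0]) / denom        # distance along the ray
    u = (diff[0] * d[1] - diff[1] * d[0]) / denom        # parameter along the segment
    return t if (t >= 0.0 and 0.0 <= u <= 1.0) else None

def visual_sonar_range(pose, bearing, polygons):
    """Simulated range from pose = (x, y, psi) along a body-fixed bearing to the
    nearest polygon edge in the airfield layout database."""
    x, y, psi = pose
    direction = (np.cos(psi + bearing), np.sin(psi + bearing))
    best = None
    for poly in polygons:                                # each polygon: ordered vertex list
        for i in range(len(poly)):
            r = ray_segment_distance((x, y), direction, poly[i], poly[(i + 1) % len(poly)])
            if r is not None and (best is None or r < best):
                best = r
    return best
```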

In accordance with the system architecture, the inputs to the particle filter’s action model are the INS state change between vision observations. The aircraft’s accelerations and turn rate are derived directly from its trajectory (at 100 Hz) with the noise and bias terms of the IMU added to create the measurements that are fed into the INS mechanization equations. In particular, the simulations used accelerometer noise of 0.0984 ft/s2 standard deviation and 0.0322 ft/s2/s bias along with gyro noise of 8.7266e-4 rad standard deviation and 4.8485e-6 rad/s bias. The output of the INS mechanization and Kalman Filter in the navigation architecture give the state change and error covariance, respectively, which are fed into the particle filter (at 1 Hz). The complete aircraft trajectory and the action variables (state changes between vision updates) are calculated in advance so that changes within the particle filter algorithm can be observed without needing to take into account how the data has changed.


Figure 12. Leesburg Executive Airport in Leesburg, VA (left), selected model area (center), and final Leesburg airfield layout database as implemented in simulations (right).

Figure 13. a.) Aircraft path overlaid on the airfield database and b.) norm of the position error versus time for the full Leesburg simulation.



In order to fairly assess the filter's performance, the simulation platform was kept constant to assure accurate comparison of the computational effort necessary for running with different numbers of particles. The test bed was a Dell Dimension D620 ATG running Gentoo Linux 2.6.20 with the Fluxbox window manager to interact with MATLAB. In all of these MATLAB simulations, the tests were conducted with the exact same computer configuration with no extraneous programs running. An eight minute taxi scenario was simulated consisting of an initialization, a run down a taxiway, two stops, and acceleration into takeoff. Figure 13a shows a plot of the entire taxi scenario overlaid on the airfield database. Figure 13b shows the norm of the position error versus time for dead reckoning (i.e., using the INS mechanization equations only) and for the particle filter. Figure 14 highlights various times within the simulation, showing how the simulations were initialized (Fig. 14a), a non-converged solution when in a long corridor (Fig. 14b), a converged solution (Fig. 14c), and the particle distribution toward the end of the run after the turn at the bottom of the taxiway (Fig. 14d).

Figure 14. Simulation showing a.) initialization, b.) divergence, c.) convergence, and d.) after a turn. (Each panel plots y position (ft) versus x position (ft).)


Table 1 shows the time required to run the simulation for different numbers of particles. As expected, increasing the number of particles also increases the computation time necessary to run through a full length simulation. Since there is only one loop within the algorithm, the amount of time grows linearly with the number of particles. Note, simulation code was not optimized and run-time performance will be significantly different when the filter is deployed on field hardware, so the computational time should only be used to compare performance and not to draw conclusions on expected actual performance.

Table 1. Particles and time required to run the full Leesburg simulation.

    Particles    Time to run (s)
    100          373
    200          500
    500          862
    1000         1464

Another major performance metric is the error in the system. In order to display a measure of the total error for a simulation run, the root-mean-square error (RMSE) was used. In order to assess the filter performance, 100 simulations were run with the different numbers of particles given in Table 1. Figure 15 shows all 100 runs plus the average estimate versus time for the different numbers of particles. Figure 16a shows the mean error performance of the filter over 100 runs plotted against time for different numbers of particles. Figure 16b shows the RMSE for 100 runs against the number of particles used in the system. The plot clearly shows asymptotic behavior of the number of particles required to maintain localization.

J".51)+KU?+N)#53/#+$0+K[[+15%#+E"/:+(?D+U[R+;?D+K[[R+6?D+P[[R+(%*+*?D+KR[[[+8(1/"63)#?+9:)+*(1G+#$3"*+3"%)+1)81)#)%/#+/:)+(-)1(.)+/:)+K[[+15%#?+

[Figure 15, panels a. through d.: Error (ft) versus Time (s) for each of the 100 runs at each particle count.]


From Fig. 16a, the performance with only 50 or 100 particles is noticeably less converged than the performance with 200, 500, or 1000 particles. This is also reflected in the RMSE plot in Fig. 16b. From these two plots, the asymptotic performance of the filter is apparent. Above a certain threshold, 200 particles in this case, additional particles do not greatly increase performance and only increase the computation time. For this simulation, running only 200 particles would be sufficient.
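One simple way to exploit this asymptotic behavior is to pick the smallest particle count whose average RMSE is within a small tolerance of the best value observed, as sketched below. The tolerance is an assumed tuning value, and rmse_by_n is a mapping from particle count to average RMSE such as the one produced by the Monte Carlo sketch above.

```python
def choose_particle_count(rmse_by_n, tolerance_ft=1.0):
    """Return the smallest particle count whose average RMSE is within
    tolerance_ft of the best (asymptotic) RMSE observed; tolerance_ft is an
    assumed illustrative value."""
    best = min(rmse_by_n.values())
    return min(n for n, e in rmse_by_n.items() if e <= best + tolerance_ft)

# e.g. choose_particle_count(monte_carlo_rmse(run_filter, truth_xy, counts))
```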

To assess the overall performance of the filter, a run with 1000 particles was performed and the error was computed in the x-direction, the y-direction, and as the error norm. Figure 17a shows the result of this simulation. These data represent only one run of the simulation and are meant to highlight the performance of the filter in certain scenarios. In general, the filter performs better than the INS alone in that its RMSE is much lower. Furthermore, for this run the particle filter error is always less than the INS dead-reckoning error.

The filter is not perfect and loses localization of the robot at certain times in the simulation. This is due to the degeneracy of the map. In the map, there are long “hallways” where the robot can see the side walls but cannot see far enough to a wall in front of it. This means that the robot will have good localization information in one direction only (perpendicular to the wall). In these instances, the particles will tend to diverge from the robot’s true location. This momentary loss can be quickly reversed when the robot approaches a wall perpendicular to the long hallway walls. Despite the fact that the particle estimate has drifted more than twenty feet from the truth, the filter is able to relocalize to within a few feet of truth. This can be easily seen in the plots at approximately 200, 290, and 370 seconds where the filter estimate error drops to zero.
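The corridor degeneracy can be seen in a toy measurement model: if the taxiway axis is aligned with x and the only visible features are the parallel side edges, the likelihood depends on a particle's cross-track coordinate alone, so weighting never penalizes drift along the corridor. The sketch below is a minimal illustration with assumed geometry and noise values, not the UGOMS sensor model.

```python
import numpy as np

def corridor_likelihood(particle_xy, measured_offset_ft, sigma_ft=2.0):
    """Toy sensor likelihood inside a long corridor aligned with the x-axis:
    only the cross-track (y) coordinate is observed, so particles spread
    along x remain equally likely. sigma_ft is an assumed noise value."""
    innovation = measured_offset_ft - particle_xy[1]   # predicted offset is y
    return np.exp(-0.5 * (innovation / sigma_ft) ** 2)
```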

J".51)+ KV?+ (?D+2-)1(.)+$0+ /:)+ )11$1+ (/+ )(6:+ /"<)+ #/)8+$-)1+ K[[+ #"<53(/"$%#]+;D+NB4S+-#?+%5<;)1+$0+8(1/"63)#+0$1+/:)+0533+W))#;51.+2"18$1/+#"<53(/"$%#?+

[Figure 16a: Error (ft) versus Time (s) for 50, 100, 200, 500, and 1000 particles; Figure 16b: RMSE versus number of particles.]


While the performance seems satisfactory, it is well known that particle filter performance can be improved in some cases with the addition of more noise on the propagation of the particles.23 Here, the navigation filter error covariance is doubled before it is input into the particle filter's action (propagation) model. Figure 17b shows how the performance differs with the higher noise. In this case the filter remains convergent, but the performance is worse than the baseline case (Fig. 17a) and can be worse than the dead reckoning solution at times. Doing the opposite and lowering the action model noise will not always result in a lower RMSE, and too little propagation noise may result in divergence because the smaller particle spread between steps will not allow the filter to converge correctly.
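The noise-inflation experiment amounts to scaling the process-noise covariance before sampling the perturbations added during particle propagation, as in the sketch below. The inflation factor, state ordering, and INS increment interface are placeholder assumptions for illustration.

```python
import numpy as np

def propagate_particles(particles, delta_ins, P_nav, inflation=2.0):
    """Propagate each particle with the INS-derived state increment plus
    sampled process noise. inflation=2.0 mimics doubling the navigation
    filter error covariance; inflation=1.0 is the nominal case."""
    n, d = particles.shape
    noise = np.random.multivariate_normal(np.zeros(d), inflation * P_nav, size=n)
    return particles + delta_ins + noise
```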

Similarly, for the sensor model, the noise modeling can affect the performance of the filter23 as seen in Figure 18. In the case of the sensor model, too little noise allows the filter to converge too tightly around the estimate and essentially collapses the probability density function toward a point rather than preserving well-defined peaks and tails that distinguish regions of higher and lower likelihood. The data in Fig. 18 compare the nominal case to one where the particle filter assumes the sensor noise is higher than its nominal value. Unlike the case with the action model noise, increasing the sensor noise term improves overall filter performance.
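In the sensor model, this trade-off is controlled by the standard deviation assumed in the measurement likelihood. The Gaussian range likelihood below is a generic stand-in for a range-type sensor model rather than the UGOMS implementation; sigma_ft is a tuning assumption, with small values producing the over-tight, point-like posterior described above and larger values preserving usable tails.

```python
import numpy as np

def range_likelihood(z_measured_ft, z_predicted_ft, sigma_ft):
    """Gaussian likelihood of one range measurement given a particle's
    predicted range. A small sigma_ft concentrates all weight on particles
    that match the measurement almost exactly; a larger sigma_ft yields
    broader peaks and tails."""
    resid = (z_measured_ft - z_predicted_ft) / sigma_ft
    return np.exp(-0.5 * resid ** 2) / (sigma_ft * np.sqrt(2.0 * np.pi))
```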

J".51)+KY?++O$<8(1"#$%+$0+/:)+)00)6/+$0+3$E+#)%#$1+<$*)3+%$"#)+C3)0/D+(%*+:".:+#)%#$1+<$*)3+%$"#)+C1".:/D+

[Figure 18 panels: PF estimate and INS estimate errors in the x-direction, the y-direction, and the error norm, plotted as Error (ft) versus Time (s).]

J".51)+ KX?+ + 9:)+ )11$1+ $0+ 0533+ W))#;51.+ #"<53(/"$%+ (.("%#/+ /"<)+ #:$E"%.+ 0$1+ (?D+%$<"%(3+(%*+;?D+/E"6)+/:)+(6/"$%+<$*)3+%$"#)?+

[Figure 17, panels a. and b.: PF estimate and INS estimate errors in the x-direction, the y-direction, and the error norm, plotted as Error (ft) versus Time (s).]


VI. Conclusion

This paper presented a vision-based navigation algorithm for unmanned aircraft operation on the airfield surface. The driving requirement for the navigation architecture is operation in GPS-denied environments, motivating a passive vision-based solution. A loosely-coupled, closed-loop architecture was described that combines inertial measurements for short term navigation with computer vision measurements of runway surface elements for long term position determination referenced to an airfield layout database.

The computer vision system classifies pixels for use in the taxiway mensuration process. Prior statistics of the taxiway color histograms are used to classify taxiway/non-taxiway pixels. Morphological operations then correct misclassified pixel regions and smooth the “class image”. Finally, known camera geometry transforms taxiway edge pixels from the image plane into the body-fixed ground plane. When pixel histogram data is not available in advance, the classification algorithm can develop sufficient statistics online.
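A minimal sketch of such a classification stage is shown below using OpenCV, with histogram back-projection standing in for the prior color-statistics classifier; the color space, threshold, and kernel size are assumed parameters, and the sketch illustrates the idea rather than reproducing the UGOMS implementation.

```python
import cv2
import numpy as np

def classify_taxiway_pixels(bgr_image, taxiway_hist):
    """Score pixels against prior taxiway color statistics (here a
    hue-saturation histogram back-projection), threshold into a binary class
    image, and clean it with morphological opening and closing."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Likelihood of each pixel under the prior taxiway color histogram
    score = cv2.calcBackProject([hsv], [0, 1], taxiway_hist,
                                [0, 180, 0, 256], scale=1)
    _, class_image = cv2.threshold(score, 50, 255, cv2.THRESH_BINARY)
    # Morphological operations remove small misclassified regions and smooth edges
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    class_image = cv2.morphologyEx(class_image, cv2.MORPH_OPEN, kernel)
    class_image = cv2.morphologyEx(class_image, cv2.MORPH_CLOSE, kernel)
    return class_image
```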

The navigation architecture uses a particle filter to fuse the inertial and computer vision measurements. Since the airfield layout database provides an aerial view of the airfield surface with too little visual information to support direct image registration, the computer vision data is converted into a "visual sonar" measurement that provides body-fixed bearing and range measurements to identifiable features. This measurement model allows the navigation architecture to use techniques developed for indoor navigation of mobile robots using sonar or laser ranging.
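Conceptually, the conversion to "visual sonar" measurements reduces to expressing each detected edge point, already projected into the body-fixed ground plane, in polar form. The sketch below assumes an x-forward, y-right body frame with coordinates in feet.

```python
import numpy as np

def visual_sonar_measurements(edge_points_body):
    """Convert taxiway-edge points in the body-fixed ground plane (assumed
    x forward, y right, in ft) into the bearing/range pairs used as 'visual
    sonar' measurements by the particle filter."""
    x, y = edge_points_body[:, 0], edge_points_body[:, 1]
    ranges = np.hypot(x, y)
    bearings = np.arctan2(y, x)   # bearing relative to the body x-axis
    return np.column_stack((bearings, ranges))
```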

Simulation results highlight the capabilities of the navigation architecture. Dead reckoning (i.e., IMU only) navigation always eventually diverges and is therefore not appropriate for taxiway applications where the unmanned aircraft could be held still for significant amounts of time, for example at a hold short line while other aircraft take off. Inclusion of the particle filter provides navigation accuracies on the order of 6.0 meters. The impact of the number of particles on filter performance was investigated through Monte Carlo simulation. Performance, as measured by the average root mean square error, improved as the number of particles increased. However, the improvement was minor beyond 200 particles, suggesting a good balance between performance and computational effort can be found. The computational effort scales linearly with the number of particles, and the absolute computational time depends heavily on the software implementation of the filter algorithms, so absolute run times were not analyzed further here.

Future work will develop longer, more complex models and example scenarios to test the navigation architecture. Likewise, all algorithms will be converted to field code and tested on actual hardware. The main concern in implementing the navigation algorithm will be the computational effort required to process large numbers of particles, specifically to perform the ray tracing needed as part of the sensor model.

References

1. G. Loegering and S. Harris, "Landing Dispersion Results – Global Hawk Auto-land System," AIAA 1st UAV Conference, Portsmouth, Virginia, May 20-23, 2002, AIAA-2002-3457.

2. Office of the Secretary of Defense, Unmanned Aircraft Systems Roadmap 2005-2030, 2005.

3. The DARPA Urban Challenge, http://www.darpa.mil/grandchallenge/, last visited 12/31/2007.

4. S. Thrun, et al., "Stanley: The robot that won the DARPA Grand Challenge," Journal of Field Robotics, Vol. 23, No. 9, September 2006, pp. 661-692.

5. I. Miller, E. Garcia, and M. Campbell, "To drive is human: Robotic driving for the DARPA Grand Challenge," Computer, Vol. 39, No. 12, December 2006, pp. 52-56.

6. A. I. Mourikis and S. I. Roumeliotis, "A Multi-State Constrained Kalman Filter for Vision-aided Inertial Navigation," Proc. 2007 IEEE International Conference on Robotics and Automation (ICRA'07), Rome, Italy, April 10-14, 2007, pp. 3565-3572.

7. J. W. Langelaan, "State estimation for autonomous flight in cluttered environments," AIAA Journal of Guidance, Control, and Dynamics, Vol. 30, No. 5, September-October 2007.

8. J. H. Kim and S. Sukkarieh, "Airborne simultaneous localisation and map building," IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003.


9. A. S. Watkins, J. J. Kehoe, and R. Lind, "SLAM for flight through urban environments using dimensionality reduction," Collection of Technical Papers - AIAA Guidance, Navigation, and Control Conference 2006, Vol. 8, Keystone, CO, 2006, pp. 5018-5029.

10. G. F. Ivey and E. Johnson, "Investigation of methods for simultaneous localization and mapping using vision sensors," Collection of Technical Papers - AIAA Guidance, Navigation, and Control Conference 2006, Vol. 6, Keystone, CO, 2006, pp. 3977-3990.

11. M. Bryson and S. Sukkarieh, "Building a robust implementation of bearing-only inertial SLAM for a UAV," Journal of Field Robotics, Vol. 24, No. 1-2, 2007, pp. 113-143.

12. P. Newman, D. Cole, and K. Ho, "Outdoor SLAM using visual appearance and laser ranging," Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 2006, pp. 1180-1187.

13. J.-H. Choi and S.-J. Oh, "Grid-based visual SLAM in complex environment," 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 2563-2569.

14. S. Lenser and M. Veloso, "Visual sonar: fast obstacle avoidance using monocular vision," Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 1, October 2003, pp. 27-31.

15. D. Fox, W. Burgard, and S. Thrun, "Markov Localization for Mobile Robots in Dynamic Environments," Journal of Artificial Intelligence Research, Vol. 11, 1999, pp. 391-427.

16. S. Thrun, D. Fox, W. Burgard, and F. Dellaert, "Robust Monte Carlo localization for mobile robots," Artificial Intelligence, Vol. 128, 2001, pp. 99-141.

17. V. Vezhnevets, V. Sazonov, and A. Andreeva, "A Survey on Pixel-Based Skin Color Detection Techniques," Proc. Graphicon-2003, Moscow, Russia, 2003, pp. 85-92.

18. B. Zarit, B. Super, and F. Quek, "Comparison of Five Color Models in Skin Pixel Classification," Proc. International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 1999, pp. 58-63.

19. M. Grandis, "Ordinary and Directed Combinatorial Homotopy, Applied to Image Analysis and Concurrency," Homology, Homotopy and Applications, Vol. 5, No. 2, 2003, pp. 211-231.

20. R. Chora, "Image coding by morphological skeleton transformation," Computer Analysis of Images and Patterns, Springer Berlin/Heidelberg, Vol. 719, 1993, pp. 216-222.

21. J. Farrell and M. Barth, The Global Positioning System & Inertial Navigation, McGraw-Hill Professional, New York, NY, 1999.

22. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, Third Edition, Cambridge University Press, New York, NY, 2007.

23. S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, Cambridge, MA, 2005.

