8/6/2019 Image Aided Inertial Nav
1/114
Tightly-Coupled Image-Aided Inertial Navigation System via a Kalman Filter

THESIS
Michael G. Giebner, Captain, USAF

AFIT/GE/ENG/03-10

DEPARTMENT OF THE AIR FORCE
AIR UNIVERSITY
AIR FORCE INSTITUTE OF TECHNOLOGY
Wright-Patterson Air Force Base, Ohio

APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the United States Government.
AFIT/GE/ENG/03-10

Tightly-Coupled Image-Aided Inertial Navigation System via a Kalman Filter

THESIS

Presented to the Faculty of the Department of Electrical and Computer Engineering
Graduate School of Engineering and Management
Air Force Institute of Technology
Air University
In Partial Fulfillment of the Requirements for the Degree of
Master of Science in Electrical Engineering

Michael G. Giebner, B.S.
Captain, USAF

March, 2003

Approved for public release; distribution unlimited
Acknowledgements

I would like to acknowledge those who have helped me in the completion of this thesis. Thanks go to Major John Raquet for his expertise, guidance, and encouragement in helping make this thesis possible. Without long hours and his incredible expertise in debugging Matlab code, I would still be attempting to defend this thesis. Thanks to Capt John Erickson for his donation of a Matlab-generated Kalman filter. I would also like to thank the other faculty members of the AFIT Electrical Engineering department, Dr. Peter Maybeck, Lt Col Mikel Miller, and Dr. Meir Pachter, for their expert academic instruction and their patience when dealing with students.

There are not enough words in the world to express the gratitude I feel toward my AFIT Peeps. I was lucky enough to become friends with a great group of folks who made the AFIT experience not only bearable, but even downright fun at times. You guys rock: Heather Crooks, my Aussie pal Matt "Paps" Papaphotis, Greg "Gregarious Hibbiddy Dibbiddy" Hoffman, Dave "Crash", "Bitter Dave" Garay, Christopher Hamilton and Kate, Christine Ellering, Scott and Bonnie Bergren, Mike and Gina Harvey, Ray "Buzz" and Theresa Toth, James Haney and Beth Hanley, and anyone else I managed to leave out (sorry about that).

Thanks go to the boys of USAF Test Pilot School class 02A. There is no way I would have made it through the 2½-year AFIT/TPS program without your help during the year at Edwards. You guys are the best. A special thanks goes to the PEEPING TALON flight test team from 02A: Maj Bill "Ajax" Peris, Maj Jean Bilger (the coolest WSO in France), Maj Charles "Dr E Evil" Havasy, Capt Stephen "Phurter" Frank, Capt Ron "Batta" Schwing, and Capt Clifton "Mojo" Janney.

Michael G. Giebner
Table of Contents

Acknowledgements
List of Figures
List of Tables
List of Abbreviations
List of Symbols
Abstract

1. Introduction
   1.1 Background
       1.1.1 Inertial Navigation Systems
       1.1.2 GPS Systems
       1.1.3 Integrated INS/GPS Systems
       1.1.4 Current Trend in Navigation Systems
       1.1.5 How GPS Dependence Affects Aircraft
       1.1.6 How GPS Dependence Affects Munitions
   1.2 Problem to be Solved
       1.2.1 Do Visual Measurements Enhance Navigation Accuracy?
   1.3 Scope
   1.4 Literature Review
       1.4.1 Navigation Using Imaging Techniques
   1.5 Methodology
   1.6 Assumptions
   1.7 Materials and Equipment
       1.7.1 Air Force Flight Test Center
       1.7.2 Matlab
       1.7.3 T-38A Aircraft
       1.7.4 GAINR
       1.7.5 Ashtech GPS Receiver
       1.7.6 Camera
       1.7.7 Camera Lenses
       1.7.8 Video Tapes
       1.7.9 Time Code Generator/Inserter
       1.7.10 Video Monitor
   1.8 Summary

2. Background Theory
   2.1 Overview
   2.2 Kalman Filter Basics
       2.2.1 Kalman Filter
       2.2.2 Kalman Filter Equations
       2.2.3 Extended Kalman Filter
       2.2.4 State Vector
       2.2.5 State Covariance Matrix
       2.2.6 Dynamics Driving Noise Covariance Matrix
       2.2.7 State Transition Matrix
       2.2.8 Measurement Matrices
       2.2.9 Measurement Noise
       2.2.10 Kalman Filter Cycle
   2.3 Arc Length Equation
       2.3.1 Meridian Radius of Curvature
       2.3.2 Transverse Radius of Curvature
   2.4 Geometry
       2.4.1 Derivative of Inverse Tangent Function
   2.5 Navigation Orbit Geometry Determination
       2.5.1 Assumptions
   2.6 Summary

3. Methodology
   3.1 Overview
   3.2 Visual Measurements
   3.3 Visual Measurement Generation
   3.4 Visual Measurement Error Estimation
       3.4.1 Generating the H(x) Matrix
   3.5 Visual Measurement Generation using a Camera
       3.5.1 Required Camera Specifications
       3.5.2 Generate Target Latitude/Longitude
       3.5.3 Generate Measurements from an Image
       3.5.4 Generate Measurement Estimates
       3.5.5 Camera-to-Body DCM Generation
       3.5.6 Visual Measurement R Matrix Generation
       3.5.7 Camera Calibration Procedures
   3.6 Estimation of Attitude Error States
   3.7 Summary

4. Results and Analysis
   4.1 Sortie Overview
   4.2 General Filter Comments
   4.3 Unaided and Baro Filter Cases
   4.4 Different Cases
   4.5 Perfect Measurement Case
   4.6 Nominal Case
   4.7 Individual Axis Position Comparison
   4.8 Position State Error Comparison
   4.9 Velocity Comparison
   4.10 Velocity State Error Estimate Comparison
   4.11 Visual Residuals
   4.12 R Sensitivity Analysis
   4.13 Error Correlation with Angular Measurements
   4.14 Summary

5. Conclusions and Recommendations
   5.1 Conclusions
   5.2 Recommendations

Appendix A. Flight Test Methodology
   A.1 Overview
   A.2 Sortie Breakdown
   A.3 Prior to Flight
   A.4 Pre-flight and Taxi
   A.5 Takeoff
   A.6 Navigation Portion of Sortie
   A.7 Lessons Learned
       A.7.1 Ground Test
       A.7.2 Station Keeping During Circling Navigation Maneuver
       A.7.3 Things Do Not Always Work The Way They Are Supposed To Work
       A.7.4 Selected Target(s) Being Visible To Camera
   A.8 Summary

Bibliography
Vita
List of Figures

1.1. PEEPING TALON T-38
2.1. Kalman Filter Cycle
3.1. Sign Convention
3.2. Distances
3.3. Angles
3.4. Camera Calibration Target
3.5. Azimuth Error Surface
4.1. Ground Track
4.2. Altitude for Entire Sortie
4.3. Ramp Area and Navigation Orbit
4.4. GPS Filter Navigation Orbit
4.5. Navigation Orbit for Unaided Filter
4.6. 3D Position Error for Unaided Filter (top) and Baro Filter (bottom)
4.7. Case 1 3D Position Error for Baro/Vis Filter
4.8. Case 1 3D Position Error for Vis Filter
4.9. Case 2 3D Position Error for Baro Filter
4.10. Case 2 3D Position Error for Baro/Vis Filter
4.11. Case 2 3D Position Error for Vis Filter
4.12. North Position Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom) (Dashed Lines Indicate 1-σ Filter-Compensated Covariance)
4.13. East Position Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom) (Dashed Lines Indicate 1-σ Filter-Compensated Covariance)
4.14. Down Position Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom) (Dashed Lines Indicate 1-σ Filter-Compensated Covariance)
4.15. State 1 (East) Position Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom)
4.16. State 2 (North) Position Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom)
4.17. North Velocity Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom) (Dashed Lines Indicate 1-σ Filter-Compensated Covariance)
4.18. East Velocity Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom) (Dashed Lines Indicate 1-σ Filter-Compensated Covariance)
4.19. Down Velocity Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom) (Dashed Lines Indicate 1-σ Filter-Compensated Covariance)
4.20. State 4 (North) Velocity Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom)
4.21. State 5 (East) Velocity Error for Unaided Filter (top), Baro Filter (middle), and Baro/Vis Filter (bottom)
4.22. Visual Residuals for the Baro/Vis Filter
4.23. 3D Position Error for Cases 1 through 4
4.24. 3D Position Error for All Cases
4.25. Case 2 Cost vs Benefit for Baro/Vis Filter
4.26. North Position Error and Angular Measurements
4.27. East Position Error and Angular Measurements
4.28. North and East Position Error
List of Tables

3.1. NED Frame Elevation and Azimuth Sense
3.2. Camera Frame Elevation and Azimuth Sense
4.1. Filter Configurations
4.2. Measurement Cases
4.3. Additional Measurement Cases
A.1. Modular Aircraft Components
List of Abbreviations

AFIT: Air Force Institute of Technology
TPS: USAF Test Pilot School
PVAT: Position, Velocity, Attitude, and Time
FLIR: Forward-Looking Infrared
HUD: Heads-Up Display
LORAN: Long-Range Navigation
RADAR: Radio Detection and Ranging
TACAN: Tactical Air Navigation
PVA: Position, Velocity and Attitude
HF: High-Frequency
ATC: Air Traffic Control
AWACS: Airborne Warning and Control System
JDAM: Joint Direct Attack Munition
JASSM: Joint Air-to-Surface Standoff Missile
JSOW: Joint Stand-Off Weapon
AFFTC: Air Force Flight Test Center
EKF: Extended Kalman Filter
Rn: Meridian Radius of Curvature
Re: Transverse Radius of Curvature
NED: North-East-Down
DTED: Digital Terrain Elevation Data
DCM: Direction Cosine Matrix
SI: Special Instrumentation
RCA: Radio Corporation of America
List of Symbols

x: State Vector
P: State Covariance Matrix
Qd: Dynamics Driving Noise Matrix
Φ: State Transition Matrix
h(x): Nonlinear Measurement Matrix
H: Linearized Measurement Matrix
R: Measurement Noise Matrix
AFIT/GE/ENG/03-10

Abstract
Inertial navigation systems and GPS systems have revolutionized the world of navigation. Inertial systems are incapable of being jammed and are the backbone of most navigation systems. GPS is highly accurate over long periods of time, and it is an excellent aid to inertial navigation systems. However, as a military force we must be prepared to deal with the denial of the GPS signal. This thesis seeks to determine, via simulation, whether it is viable to aid an INS with visual measurements. Visual measurements represent a source of data that is essentially incapable of being jammed, and as such they could be highly valuable for improving navigation accuracy in a military environment.

The simulated visual measurements are two angles formed from the aircraft with respect to a target on the ground. Only one target is incorporated into this research. Five different measurement combinations were incorporated into a Kalman filter and compared to each other over a six-minute circular navigation orbit. The measurement combinations with respect to the navigation orbit were: 1) GPS signals throughout the orbit, 2) no measurements during the orbit, 3) simulated barometric measurements during the orbit, 4) simulated barometric and visual measurements during the orbit, and 5) visual measurements only during the orbit.

The visual measurements were shown to significantly improve navigation accuracy during a GPS outage, decreasing the total 3-dimensional error after six minutes without GPS from 350 m to 50 m (with visual measurements). The barometric/visual measurement formulation was the most accurate non-GPS combination tested, with the pure visual measurement formulation being the next best visual measurement configuration. The results obtained indicate visual measurements can effectively aid an INS. Ideas for follow-on work are also presented.
Tightly-Coupled Image-Aided Inertial Navigation System via a Kalman Filter

1. Introduction

This thesis is the culmination of a course of study at the Air Force Institute of Technology (AFIT) in navigation theory, along with a year at USAF Test Pilot School (TPS). The academic portion dealt with the incorporation of visual measurements into a navigation Kalman filter. This work was performed at AFIT. The actual flight test took place within the TPS curriculum. The flight test (project PEEPING TALON [2]) supported the AFIT research and also a limited evaluation of a camera system on the T-38 aircraft (the aircraft was specifically modified for this test). Certain difficulties led to the use of simulated visual measurements rather than measurements from the collected images.

1.1 Background

This research is in the area of navigation, which is the art and science of moving from one place to another. The type and quality of system(s) used determines the accuracy with which the navigation occurs. Many different systems have been used for navigation. Two of these systems are considered mainstays of modern navigation: the inertial navigation system (INS) [18] and the Global Positioning System (GPS) [18:371]. This research investigates the incorporation of a new type of visual measurement with the aforementioned two navigation systems. The INS and GPS systems are described in limited detail in Sections 1.1.1 and 1.1.2, while the visual measurements and their effects are discussed in much greater detail, as they are the thrust of this research.
1.1.1 Inertial Navigation Systems. Inertial systems sense specific force and rotation. These force and rotation measurements are combined with knowledge about the Earth's rotation and gravitational field and can be used to determine vehicular movement.

The most important components in an INS are accelerometers, gyroscopes and a computer. The accelerometers and gyroscopes are sensors which provide specific force and angular velocity measurements, respectively. These measurements are used by the computer to generate vehicle position, velocity, attitude and time (PVAT) in a chosen coordinate reference frame by using the necessary navigation equations for that frame. The computer is used for required computations and to track vehicle movement. Inertial systems have become vital aircraft systems tied to the efficient mission performance of civil and military aircraft. In many cases, aircraft would be unable to perform their missions at all without an INS. These inertial navigation systems are very effective and offer many advantages.

One of the main benefits of inertial systems is their passive nature. They emit no detectable signature and are incapable of being jammed. However, inertial systems require several key items to be effective. The first item is accurate position information during alignment. Any position error during alignment will grow over time (and there is always a position error between that entered into the INS and the actual position). This rate of growth is associated with the quality of the gyroscopes. A large alignment error with high-quality gyroscopes might actually be better than a small alignment error with low-quality gyroscopes. These systems also require a lengthy alignment time. If both of these requirements are not met, even the most accurate INS can be, or become, worthless. The quality of the accelerometers and gyroscopes is another source of error. Even the highest quality sensors are prone to some error. All of these errors integrate, and an INS will drift away from its actual position over time. The INS drift problem brought about the need to update the INS during long flights. The updates provided by these outside, additional measurements are used to improve INS accuracy by bounding, or damping, the error growth inherent to the INS. However, some of these measurements are cumbersome to obtain or not always available. Some of these types of update measurements are [6:293-294]:

1. Altitude sensors
   (a) Barometric Altimeter
   (b) Radar Altimeter
2. Forward-Looking Infrared (FLIR)
3. GPS
4. Heads-Up Display (HUD)
5. Long-Range Navigation (LORAN)
6. Radio Detection and Ranging (RADAR)
7. Star Trackers
8. Tactical Air Navigation (TACAN)
9. Overflying a known geographical point

Historically, new types of measurements have been integrated into the INS solution as they have become available. One of the best measurements for aiding an INS was provided by GPS.
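The error growth described above can be illustrated with a short numerical sketch (not from the thesis; the bias value is illustrative). A constant accelerometer bias, integrated once into velocity and again into position, produces an error that grows quadratically with time, which is why an unaided INS must be updated on long flights:

```python
def ins_position_error(bias_mps2, duration_s, dt=0.01):
    """Double-integrate a constant accelerometer bias to show how a
    small sensor error grows into a large position error over time."""
    vel_err = 0.0
    pos_err = 0.0
    for _ in range(int(duration_s / dt)):
        vel_err += bias_mps2 * dt   # first integration: velocity error
        pos_err += vel_err * dt     # second integration: position error
    return pos_err

# An illustrative 1-milli-g (~0.0098 m/s^2) accelerometer bias, left
# uncorrected for six minutes, integrates into roughly
# 0.5 * b * t^2 = 0.5 * 0.0098 * 360^2 = 635 m of position error:
drift = ins_position_error(0.0098, 360.0)
```

The quadratic growth is the key point: doubling the unaided flight time quadruples the position error from a fixed bias.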
1.1.2 GPS Systems. The Global Positioning System is a system of satellites in medium Earth orbit that transmits signals to receivers on the Earth's surface, flying in the Earth's atmosphere, or even in space. These receivers are capable of determining their own position if at least four satellites are visible to that particular receiver. GPS systems do not require any alignment time (other than that required to lock onto the satellite signals) since there are no internal mechanical components. GPS is highly accurate, does not drift over time, and is almost continuously available for many typical applications. Therefore, GPS is a very promising type of measurement for integration with an INS. This is mainly due to the error characteristics of GPS being beneficial in limiting INS weaknesses (most notably the INS errors growing, or drifting, over time).
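The four-satellite requirement mentioned above comes from solving for four unknowns: three position coordinates plus the receiver clock bias. A minimal sketch of that solution, using iterative linearized least squares on synthetic pseudoranges, is shown below (all names and values are hypothetical illustrations, not the thesis implementation):

```python
import numpy as np

def gps_fix(sat_pos, pseudoranges, iters=10):
    """Solve receiver position and clock bias (in metres) from four or
    more pseudoranges by iterative linearized least squares."""
    x = np.zeros(4)  # [x, y, z, clock bias]
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = ranges + x[3]                 # predicted pseudoranges
        H = np.hstack([(x[:3] - sat_pos) / ranges[:, None],  # unit LOS rows
                       np.ones((len(sat_pos), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x

# Synthetic check: four satellites, a known receiver location and clock bias.
sats = np.array([[20200e3, 0, 0], [0, 20200e3, 0],
                 [0, 0, 20200e3], [12e6, 12e6, 12e6]], float)
truth = np.array([1e5, 2e5, -5e4])
clk = 30.0  # metres of receiver clock bias
pr = np.linalg.norm(sats - truth, axis=1) + clk
est = gps_fix(sats, pr)
```

With fewer than four satellites the linear system is underdetermined, which is exactly why the tightly-coupled integration described next is attractive: it can still use one or two satellites.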
1.1.3 Integrated INS/GPS Systems. It was seen above that INS systems have strengths and weaknesses, as do GPS systems. INS systems are very accurate over short periods of time (i.e., they are sensitive to high frequency dynamics), and their error characteristics grow slowly over time. The INS is capable of generating vehicle body orientation (which is very important in aircraft and submarines for which outside visual references may be unavailable). GPS systems are not highly accurate over short periods of time, but they are over long periods of time. They are not capable of generating vehicle body orientation (unless multiple antennas are used along with some additional processing). Luckily, INS weaknesses are compensated for by GPS strengths, and GPS weaknesses are compensated for by INS strengths. As a result, these systems are often integrated together to take advantage of these strengths and reduce their individual weaknesses [6:342]. This is typically accomplished via a Kalman filter (see Chapter 2 for a discussion on Kalman filters). There are different ways to integrate INS and GPS [13], but only the method used in this research will be described in this thesis. The method used is a tightly-coupled integration. The GPS measurement is a pseudorange measurement from each visible satellite. The INS generates its own estimate of this same pseudorange measurement. These two measurements are differenced to obtain a measurement, which is then the input into the error-state Kalman filter. This method of integration offers two advantages. First, even a single satellite can be used to update position, velocity and attitude (PVA), rather than the four required for the GPS itself to determine its own position (such a GPS-alone solution is required in an alternative integration method known as loose-coupling). Second,
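The per-satellite differencing step at the heart of the tightly-coupled approach can be sketched as follows. This is a minimal illustration under assumed names, not the thesis filter: it forms the measured-minus-predicted pseudorange for one satellite and the corresponding line-of-sight measurement row for the position error states:

```python
import numpy as np

def pseudorange_residual(sat_pos, measured_pr, ins_pos, clock_bias):
    """Form the tightly-coupled measurement for one satellite: the
    difference between the measured pseudorange and the pseudorange the
    INS predicts from its own position estimate. Also return the
    linearized measurement row (line-of-sight unit vector plus a clock
    term), as would feed an error-state Kalman filter update."""
    los = ins_pos - sat_pos
    rng = np.linalg.norm(los)
    predicted_pr = rng + clock_bias
    z = measured_pr - predicted_pr       # scalar input to the filter
    h_row = np.append(los / rng, 1.0)    # d(pr)/d(position), d(pr)/d(clock)
    return z, h_row

# Illustrative numbers: the INS position is wrong, so the predicted and
# measured pseudoranges disagree by a few hundred metres.
sat = np.array([20200e3, 0.0, 0.0])
ins = np.array([1000.0, 2000.0, 3000.0])
z, h = pseudorange_residual(sat, measured_pr=20199500.0,
                            ins_pos=ins, clock_bias=0.0)
```

One such scalar residual per visible satellite is enough to update the filter, which is why even a single satellite provides useful information in this formulation.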
can negatively impact two important areas. First, lack of radar coverage over the world's oceans causes aircraft deconfliction to be implemented by assigning aircraft to fly pre-determined routes at specific altitudes. This is performed via voice or data communication over high-frequency (HF) radio waves. There is no way to determine if an airplane is where it is supposed to be, other than its own on-board systems. An aircraft could easily be misplaced on the jet route as it unknowingly reports a faulty location to air traffic control (ATC). The aircraft could even be completely off the jet route. The second area impacted by GPS signal loss is coastal penetration. Aircraft coasting in to another country are required to pass over certain areas so they can be controlled and de-conflicted with other air traffic. An aircraft using only an INS could easily coast in miles from the intended location.
1.1.5.2 How GPS Loss Affects Tactical Airlift. Tactical airlifters don't often travel the same distances strategic airlifters do, but they suffer from their own particular navigation problems. Aircraft like the C-130 and other special operations aircraft are often required to transit areas in a clandestine manner at night, while offering minimal emissions to the outside world. This difficulty is sometimes compounded by having to perform this mission over featureless expanses of water or desert, while retaining the need to arrive at a particular location without error. INS-only systems cannot typically perform this type of mission if the flying time approaches even one hour.
1.1.5.3 How GPS Loss Affects Tactical Fighters. Tactical fighters are often required to take off at night, fly over enemy territory without being detected, attack targets and return home without being shot down. Aircraft of this type often fly under high g-forces, in extreme attitudes, and these flight conditions can change very rapidly. Such an environment can make it even more difficult for an inertial system to operate accurately. PVAT is extremely important to these types of aircraft, as they are often required to pass this information to munitions before they can be released from the host aircraft. Errors in the aircraft's navigation solution naturally translate directly to a weapon.
1.1.5.4 How GPS Loss Affects Datalink Systems. Datalink is becoming much more predominant, and this area offers some very unique challenges. Aircraft are becoming capable of transmitting and receiving very useful information while in-flight. This information comes in many forms:

1. Wingman location
2. Radar data, to include radar lock and weapon fly-out information
3. Target location
4. Downed pilot location
5. Ground threats
6. Friendly or neutral ground personnel

Some of this information requires very accurate position data; otherwise it is not only useless, it can lead aircrew into potentially dangerous situations or geographical areas. One particular area of concern is the geographic correlation of data. This is an extremely complex task, and it must often occur between dissimilar platforms. Accurately passing coordinates between aircraft is one basis for correlating two separate sources of data into one. This correlation can increase the accuracy of the information known about the target aircraft (position, velocity, heading, radar parameters, etc.). A certain amount of navigation error could cause the previously mentioned correlation not to occur. For example, a hostile contact reported by an airborne warning and control system (AWACS) aircraft to a fighter aircraft via datalink might not properly resolve with the fighter aircraft's own radar detection of the same hostile contact if one of the aircraft has navigation errors. This would cause both contacts to appear on the fighter aircraft radar scope. This can cause a great deal of confusion, and cause the fighter aircraft to waste time, energy, and even weapons on a false target. This false target could even potentially drive tactical decisions. There are many other similar issues related to the datalink area. It is safe to say that navigation errors cause significant problems for datalink systems.
1.1.6 How GPS Dependence Affects Munitions. Many of the new weapons being developed for the US Air Force (USAF) are also heavily GPS-dependent. These weapons fill a much-needed niche that cannot be filled by the highly accurate laser guided bombs (LGB) already in the inventory. LGBs are not all-weather weapons. The target must be visible to the laser designator, and must be visible to the weapon at a pre-determined time and distance from the target. This time and distance is largely dependent on the type of weapon and the geometry of the attack. The new GPS weapons are all-weather weapons that are not restricted by cloud cover, high humidity, or other problems associated with LGBs. Many of these new systems are mated with low-cost inertial systems. Therefore, these systems require either a short operating duration to keep the INS errors from drifting, or they must be integrated with GPS. The problem occurs when the flight duration is long and/or the GPS signal is lost. In such cases, the inertial errors grow rapidly until the PVA solution is no longer usable. Some of these new munitions are the Joint Direct Attack Munition (JDAM) [12], Joint Air-to-Surface Standoff Missile (JASSM) [10] and Joint Stand-Off Weapon (JSOW) [11]. Some of these weapons are essentially point weapons, in that they need to hit very close to the target to be effective. A loss of GPS during the weapon fall time could cause the weapon to be completely ineffective by missing the target. This miss distance need not be very large for a point weapon to lose effectiveness. Even some of the area munitions could potentially hit far enough away from their intended impact point to be ineffective. A new type of visual measurement might help alleviate some of the problems associated with GPS signal loss by keeping the INS from drifting during a long fall time. Visual measurements could also be used with a target recognition system to refine the desired impact point as the weapon approaches a target.
1.2 Problem to be Solved

1.2.1 Do Visual Measurements Enhance Navigation Accuracy? It has been shown that INS/GPS systems are vulnerable to loss of the GPS signal (especially those systems outfitted with low-cost inertial equipment). Many of these systems are equipped with, or could be equipped with, visual sensors of some type. This visual equipment could be used to provide measurements to the navigation Kalman filter to update the navigation solution. These measurements could increase accuracy at all times (with and without GPS being available), but could prove invaluable during times of GPS system outages. A system of this type could also be used to determine target coordinates passively. This research seeks to determine the answers to these questions.

Specifically, simulated visual measurements will be incorporated into the navigation Kalman filter, with and without GPS, and compared to a truth source to determine if the inclusion of the visual measurements increases navigation system performance. These simulated measurements will be generated during a portion of the sortie when the aircraft is orbiting around a particular target area. This is so the same target area will be available to generate measurements that do not disappear from view.
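The two simulated angles from the aircraft to the ground target can be sketched as an azimuth/elevation pair in a local North-East-Down frame. This is an illustrative computation under assumed conventions (azimuth from north toward east, elevation negative looking down), not the thesis measurement model:

```python
import numpy as np

def target_angles_ned(aircraft_ned, target_ned):
    """Compute azimuth and elevation (radians) from the aircraft to a
    ground target, both positions expressed in a local North-East-Down
    frame. Azimuth is measured from north, positive toward east;
    elevation is negative when looking down at the target."""
    north, east, down = target_ned - aircraft_ned  # line-of-sight vector
    azimuth = np.arctan2(east, north)
    elevation = np.arctan2(-down, np.hypot(north, east))
    return azimuth, elevation

# Aircraft 1000 m above the ground, target 1000 m north and 1000 m east:
az, el = target_angles_ned(np.array([0.0, 0.0, -1000.0]),
                           np.array([1000.0, 1000.0, 0.0]))
# az is 45 degrees (pi/4); el is about -35.3 degrees (looking down).
```

During a circular orbit about the target, these two angles sweep smoothly, which is what lets a filter extract position information from them continuously.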
1.3 Scope

This research is a navigation thesis and not a signal processing one. Therefore, no effort was made to obtain visual measurements via digital techniques. There are methods available, but they can be difficult and time consuming to implement. Some of these methods will be briefly reviewed in the Literature Review section (Section 1.4). Actual visual measurements were extracted even though simulated data was used in the analysis. A simple hand/eye technique was used to capture this data. This data is available, and it will be incorporated into the filter at a later time.
The data was post-processed due to the increased burden imposed by real-time processing.
1.4 Literature Review

There is an abundance of research regarding navigation, motion estimation using visual techniques, and a combination of the two topics. Much of the latter research is limited to ground navigation or image identification and is fundamentally different than that proposed in this work. Also, much of the research is focused on the optical area only, and does not incorporate inertial systems, GPS or Kalman filter theory. The term navigation, as it is used in many of these articles, refers to the ability to move around in some autonomous fashion, but does not refer to the type of navigation used here, i.e., generating PVAT via an aided-INS.
1.4.1 Navigation Using Imaging Techniques. Current theory breaks motion estimation into two parts [9, 15]: determining inter-frame image motion, and using image motion to estimate camera motion. Inter-frame image motion is typically determined by either the optic flow constraint equation (Equation (1.1)) or some type of correspondence (matching) technique. Some estimate of motion from frame to frame is generated via the first step, and this information is used in the second step to back out the motion of the camera. Whether Equation (1.1) or a matching technique is used, the end process generates measurements that can be incorporated into a Kalman filter. This research will implement a simplistic matching technique.
1.4.1.1 Optic Flow Constraint Equation Techniques. The first type of inter-frame motion estimation technique relies upon solving the optic flow constraint equation, Equation (1.1) [19]:
−E_t = ∇E · v    (1.1)

where:
E is intensity
∇E is the gradient of the intensity
v is object velocity

Optic flow is the apparent motion of brightness patterns in an image [1, 19]. This method generates a motion vector for each element in the image. Some of the methods reviewed produced vectors that were wildly inaccurate and unusable due to camera shock, vibration and inherent inadequacies in the optic flow estimation technique. The second type of inter-frame motion estimation techniques are correspondence techniques.
1.4.1.2 Correspondence Based Techniques. This technique is a matching type of solution, and it also uses apparent motion of image brightness patterns. However, motion vectors are not generated for each image element. Objects are identified in one image and located again in the preceding images. The displacement between objects over time is used to estimate the inter-frame velocity, whether it be translational and/or rotational velocity.
1.5 Methodology

The original idea was to use a simple technique on a sequence of images to generate angular information for incorporation into the Kalman filter. The accuracy increase due to these measurements was to be determined experimentally using data obtained during the flight test phase of project PEEPING TALON at TPS. This would be accomplished by comparing system accuracy with visual measurements
Figure 1.1 PEEPING TALON T-38

1.7.3 T-38A Aircraft. A T-38A aircraft, tail number 65-10325, was modified to support the PEEPING TALON test program. The test system included navigation, video, and recording components. It consisted of a GPS-Aided Inertial Navigation Reference system (GAINR), an Ashtech GPS receiver, two model 71A video cameras, a liquid crystal display (LCD) cockpit monitor, and a digital recording system.
1.7.4 GAINR. The GAINR system consisted of an Embedded GPS/INS (EGI), a Pre-Flight Panel (PFP), a Cockpit Control Panel (CCP), and an Intelligent Flash-memory Solid State Recorder (IFSSR). The GAINR recorded raw Inertial
Measurement Unit (IMU) data (Δv and Δθ) at a 256 Hz rate and a blended EGI position solution at a 64 Hz rate to an internal data card.
1.7.5 Ashtech GPS Receiver. The Ashtech Z-Surveyor was a 12-channel, dual-frequency GPS receiver with a built-in battery and removable PC memory card. This receiver was capable of providing raw GPS signal data on the internal data card for post-processing. These data were stored at a 2 Hz rate. A second Ashtech GPS receiver was used to collect stationary data for differential GPS data purposes.
1.7.6 Camera. Both the forward and side cameras were Model 71A Ball Aerospace cameras. They were ruggedized, all-light-level, high-performance, Gated Intensified Charge Coupled Device monochrome cameras. These cameras used a 3rd generation intensifier coupled to a 768 (H) x 484 (V) pixel interline charge coupled device image sensor. The analog video output contained 525 lines at a field rate of 60 Hz, a frame rate of 30 Hz, and with 2:1 positive interlace. The cameras operated from 28 volts direct current with a power consumption of less than 6 watts. Camera dimensions were 5 x 2.5 x 2 inches (L x H x W), and each weighed 1.1 pounds.
1.7.7 Camera Lenses. The collection of lenses used had three different fields-of-view. These were standard type video camera lenses, in that they were not optimized for night video even though the PEEPING TALON test collected video under low-light conditions using low-light cameras. The airborne video recorder was a Sony Digital Video Cassette Recorder, model GV-D300.
1.7.8 Video Tapes. The tapes used in the airborne video recorder were Panasonic mini-DV ME80/120 cassettes. Recording was performed in the SP record mode as this was the highest image resolution mode.
1.7.9 Time Code Generator/Inserter. The GPS time code generator, Datum model 9390-3000, received the GPS signal from the externally mounted GPS
antenna and converted it to a digital format for the video time code inserter. The video time code inserter, Datum model 9559-835, received GPS data from the GPS time code generator and the video signal from the selected camera. The video time code inserter combined these two data streams to provide a single output, which was a video picture with GPS time in the bottom right hand corner of the video image. This enabled the test team to determine the precise GPS time when each frame of video was recorded.
1.7.10 Video Monitor. The video monitor for the front seat was a 5-in. Transvideo International Rainbow II LCD Monitor. The monitor was mounted on the front cockpit glare shield, and was rotated right approximately 40 degrees (in a clockwise direction). The video selector switch located in the front cockpit controlled which camera was displayed on the monitor and recorded on the video recorder.
1.8 Summary

The preceding was a broad overview of the work contained in the thesis effort and the flight test to collect the data. Basic navigation and aiding concepts were presented along with the problem at hand and the strategy for solving the problem. Where this thesis fits into the overall AFIT and TPS education programs was discussed, as well as a short literature review, project assumptions, and material/equipment intended for use in solving the problem. The groundwork is now in place to begin with project specifics, beginning with Chapter 2, Background Theory. The specific design of the visual measurements is presented in Chapter 3. Chapter 4 covers the results obtained during the research as well as an analysis of those results. Chapter 5 wraps up the research and lays out follow-on work that could be accomplished. Appendix A covers the actual flight test that occurred at USAF Test Pilot School.
2. Background Theory

2.1 Overview
This chapter covers theory that is referenced in Chapter 3. Kalman filter basics are covered briefly, along with geometry relating to the shape of the Earth, the arc length equation, other necessary geometry equations, and a description of the technique used to generate the flight profile for the navigation orbit. The notation used throughout this work that pertains to the Kalman filter is taken from the three-course Stochastic Estimation and Control sequence at AFIT, which uses the texts written by Dr Peter Maybeck. Vectors are bolded lower-case while matrices are bolded upper-case [6, 7].
2.2 Kalman Filter Basics

2.2.1 Kalman Filter. A Kalman filter takes a dynamic model of the system, measurements, deterministic inputs, and adds in statistical descriptions of the uncertainties in each of these and blends them together with real-world data to generate the best possible estimate of certain variables of interest. Linear Kalman filter equations are used and presented below with a brief explanation. This research encounters some nonlinearities in the measurements, and the method of dealing with this problem is also presented. Refer to [6, 7, 17] for a more lengthy derivation of Kalman filters.
2.2.2 Kalman Filter Equations. The system model in this research is a linear, time-invariant, discrete-time system in the form of Equation (2.1), where x(t_i) is an (n x 1) state process vector, Φ(t_i, t_{i-1}) is an (n x n) system transition matrix, G_d(t_{i-1}) is an (n x s) noise input matrix, and w_d(t_{i-1}) is an (s x 1) white Gaussian noise vector [3:2-2]:

x(t_i) = Φ(t_i, t_{i-1}) x(t_{i-1}) + G_d(t_{i-1}) w_d(t_{i-1})    (2.1)
The discrete-time dynamics driving noise vector w_d(t_{i-1}) is assumed to be white, Gaussian, with a mean and covariance described by [3:2-3]:

E[w_d(t_i)] = 0    (2.2)

E[w_d(t_i) w_d^T(t_j)] = Q_d for t_i = t_j, and 0 for t_i ≠ t_j    (2.3)
Nonlinear measurements are available and are modelled by Equation (2.4), where z(t_i) is an (m x 1) measurement vector, h[x(t_i), t_i] is an (m x 1) nonlinear measurement model vector, and v(t_i) is an (m x 1) white Gaussian measurement noise vector [3:2-6]:

z(t_i) = h[x(t_i), t_i] + v(t_i)    (2.4)

The discrete-time noise vector v(t_i) is assumed to be white, Gaussian, with a mean and covariance described by [3:2-6]:

E[v(t_i)] = 0    (2.5)

E[v(t_i) v^T(t_j)] = R(t_i) for t_i = t_j, and 0 for t_i ≠ t_j    (2.6)
Kalman filter equations can be broken up into three major categories: initial condition information, filter propagation, and filter update. Equations for each of these categories are presented below.
2.2.2.1 Kalman Filter Initial Condition Equations. Equation (2.7) provides the best information available about each of the states at the beginning of filter operation [6:217], while Equation (2.8) describes the initial covariance matrix,
which provides the best information available about the uncertainty in each of the states at the beginning of filter operation [6:217]:

x̂(t_0) = E{x(t_0)} = x̂_0    (2.7)

P(t_0) = E{[x(t_0) − x̂_0][x(t_0) − x̂_0]^T} = P_0    (2.8)

2.2.2.2 Kalman Filter Propagation Equations. Equation (2.9) is used to propagate the states forward in time from the initial conditions or the last measurement update, whichever is the case. This continues until a measurement is available for incorporation into the Kalman filter [3:2-4]:

x̂(t_i^-) = Φ(t_i, t_{i-1}) x̂(t_{i-1}^+)    (2.9)
Equation (2.10) is used to propagate the state covariance forward in time from the initial conditions or the last measurement update, whichever is the case. This continues until a measurement is available for incorporation into the Kalman filter [3:2-4]:

P(t_i^-) = Φ(t_i, t_{i-1}) P(t_{i-1}^+) Φ^T(t_i, t_{i-1}) + G_d(t_{i-1}) Q_d(t_{i-1}) G_d^T(t_{i-1})    (2.10)
2.2.2.3 Kalman Filter Update Equations. The Kalman filter gain equation (Equation (2.11)) is used to weight a measurement for incorporation into the Kalman filter. The relative uncertainty between the optimal estimates of what the states should be and the actual measurements determines their respective weighting [6:259]:

K(t_i) = P(t_i^-) H^T[t_i; x̂(t_i^-)] [H[t_i; x̂(t_i^-)] P(t_i^-) H^T[t_i; x̂(t_i^-)] + R(t_i)]^(-1)    (2.11)

where the H[t_i; x̂(t_i^-)] matrix is generated by [3:2-10]:

H[t_i; x̂(t_i^-)] = ∂h[x, t_i]/∂x evaluated at x = x̂(t_i^-)    (2.12)

The state update Equation (2.13) is used to update each of the states whenever a measurement is available [3:2-10], while Equation (2.14) is used to update the state covariance matrix each time a measurement is available [3:2-10]:

x̂(t_i^+) = x̂(t_i^-) + K(t_i)[z_i − h[x̂(t_i^-), t_i]]    (2.13)

P(t_i^+) = P(t_i^-) − K(t_i) H[t_i; x̂(t_i^-)] P(t_i^-)    (2.14)
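The propagate/update cycle of Equations (2.9) through (2.14) can be sketched in a few lines. The following is a minimal illustration, not the thesis filter: the two-state model, the scalar range measurement, and all noise values are invented purely for demonstration.

```python
import numpy as np

def ekf_propagate(x_hat, P, Phi, Gd, Qd):
    """Equations (2.9)-(2.10): propagate the state estimate and covariance."""
    x_minus = Phi @ x_hat                                  # (2.9)
    P_minus = Phi @ P @ Phi.T + Gd @ Qd @ Gd.T             # (2.10)
    return x_minus, P_minus

def ekf_update(x_minus, P_minus, z, h, H, R):
    """Equations (2.11)-(2.14): gain, state update, covariance update."""
    Hm = H(x_minus)                                        # (2.12), linearized about x_hat(t_i^-)
    K = P_minus @ Hm.T @ np.linalg.inv(Hm @ P_minus @ Hm.T + R)   # (2.11)
    x_plus = x_minus + K @ (z - h(x_minus))                # (2.13)
    P_plus = P_minus - K @ Hm @ P_minus                    # (2.14)
    return x_plus, P_plus

# Illustrative 2-state example with a scalar nonlinear (range) measurement.
Phi = np.array([[1.0, 0.1], [0.0, 1.0]])
Gd = np.eye(2)
Qd = 0.01 * np.eye(2)
h = lambda x: np.array([np.hypot(x[0], x[1])])
H = lambda x: np.array([[x[0] / np.hypot(x[0], x[1]), x[1] / np.hypot(x[0], x[1])]])
R = np.array([[0.04]])

x_hat, P = np.array([3.0, 4.0]), np.eye(2)
x_minus, P_minus = ekf_propagate(x_hat, P, Phi, Gd, Qd)
z = np.array([5.6])
x_plus, P_plus = ekf_update(x_minus, P_minus, z, h, H, R)
```

After the update, the predicted measurement moves toward the observed value and the covariance shrinks, which is the qualitative behavior relied on throughout this research.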
2.2.3 Extended Kalman Filter. An Extended Kalman Filter (EKF) incorporates nonlinearities in the system model, the measurements, or both, and is an extension of the aforementioned Kalman filter (hence the name). The basic idea is to linearize the nonlinear measurements, or dynamics models, about a redeclared nominal trajectory or point at certain intervals so linear Kalman filter concepts can be used. This is valid if the nonlinearity is not too great and/or if linearization errors remain small [7:41].
As stated previously, Kalman filters work best when using raw measurements as opposed to measurements that are pre-processed. Pre-processing measurements
creates time-correlation in the measurement noises, and this must be resolved. One way to deal with this problem is to incorporate measurements with enough time lapse between epochs that the assorted measurement noises are no longer time-correlated. Another method is to model the time-correlated noise with a linear system driven by white Gaussian noise [6:183]. Otherwise, one of the basic Kalman filter assumptions is violated, i.e., a linear system driven by white Gaussian noise [6:8].
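The second approach — modeling the time-correlated noise as a linear system driven by white Gaussian noise — is commonly realized as a first-order Gauss-Markov process appended to the state vector. A minimal discrete-time sketch follows; the correlation time and strength are illustrative only, not values from this research.

```python
import math
import random

def gauss_markov(tau, sigma, dt, n_steps, seed=0):
    """First-order Gauss-Markov process: x[k+1] = exp(-dt/tau)*x[k] + w[k],
    with w[k] white Gaussian noise sized so the steady-state variance of
    the process is sigma**2."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)              # scalar state transition
    q = sigma**2 * (1.0 - phi**2)          # driving-noise variance
    x, out = 0.0, []
    for _ in range(n_steps):
        x = phi * x + rng.gauss(0.0, math.sqrt(q))
        out.append(x)
    return out

# Correlated "measurement noise" with a 30 s correlation time, sampled at 1 Hz:
samples = gauss_markov(tau=30.0, sigma=2.0, dt=1.0, n_steps=10000)
```

Augmenting the filter state with such a process keeps the white-Gaussian-driving-noise assumption intact while letting the filter estimate the correlated noise itself.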
2.2.4 State Vector. A Kalman filter is used to estimate variables of interest. These variables are contained in the (n x 1) dimension state vector (x̂), where the caret symbol is called a "hat" and indicates the variable under the symbol is an estimate and not the actual value. Therefore, the state vector x̂ is an estimate of the actual state vector x. The n in the (n x 1) referenced above is the number of states being estimated. The state vector may be the actual variables of interest (a direct, or total state filter). These actual variables might be position and velocity of a vehicle. For an error state (or indirect) filter, the errors in the actual variables of interest are estimated. These might be the error in position and velocity of a vehicle [6:294]. Navigation Kalman filters typically use an error state implementation due to the high dynamic rates possible in aviation systems. These high dynamic rates would require a very high sample rate for a direct state filter. However, the errors in this type of system grow slowly, and this allows an error state implementation and a much lower sample rate [6:291-297].
Figure 2.1 Kalman Filter Cycle

frequently than that). If measurement incorporation once per minute produces acceptable navigation performance, then a much faster update rate may be abandoned to reduce unnecessary computational loading.
Figure 2.1 shows a typical Kalman filter cycle [6:207]. There are two different time epochs shown (t_{i-1} is an arbitrary epoch and t_i is one epoch later). The state is represented at different times by the filled-in circles. The superscripts - and + represent the time just before, and just after, a measurement update, respectively. The circle at t_{i-1}^+ represents the state just after an update is performed at time t_{i-1}. The state is then propagated via the system dynamic equations to time t_i, and is represented by t_i^- indicating the state immediately prior to update incorporation. As before, the state at t_i^+ represents the state just after an update is performed at time t_i. This represents only one cycle; in reality this process is repeated many times, but the underlying concept remains unchanged.
2.3 Arc Length Equation

The arc length equation is used repeatedly during this research. This equation states that the length of an arc (on a circle) can be found by multiplying the radius by the angle (in radians). The radius used here is either the transverse radius of curvature (R_e) or the meridian radius of curvature (R_n) [16:305-306]:

arc length = r θ    (2.17)

where:
r = radius
θ = angle (in radians)
2.3.1 Meridian Radius of Curvature. The meridian radius of curvature (R_n) is used to determine the approximate radius of the Earth at the given latitude for a line of constant longitude [18:54]:

R_n = R(1 − e²) / [1 − e² sin²(L)]^(3/2)    (2.18)

where:
R = Earth's radius at the equator
e = Earth's ellipticity
L = aircraft latitude
2.3.2 Transverse Radius of Curvature. The transverse radius of curvature (R_e) is used to determine an equivalent Earth radius at a given line of constant latitude [18:54]:

R_e = R cos(L) / [1 − e² sin²(L)]^(1/2)    (2.19)

where:
R = Earth's radius at the equator
e = Earth's ellipticity
L = aircraft latitude
The distance generated by using R_e in conjunction with the arc length equation does not change when a change occurs in latitude. However, the conversion from distance to longitudinal angle (degrees or radians) does change when a change occurs in latitude. This is due to lines of longitude converging together as they move away from the equator. This explains the cos(L) term in the numerator of Equation (2.19).
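Equations (2.17) through (2.19) can be collected into a short helper. This sketch assumes WGS-84 values for the equatorial radius and eccentricity, which may differ from the constants used in the thesis:

```python
import math

R_EQUATOR = 6378137.0          # Earth's equatorial radius, meters (WGS-84, assumed)
ECC = 0.0818191908426          # Earth's eccentricity (WGS-84, assumed)

def meridian_radius(lat_rad):
    """R_n, Equation (2.18): radius along a line of constant longitude."""
    return R_EQUATOR * (1 - ECC**2) / (1 - ECC**2 * math.sin(lat_rad)**2) ** 1.5

def transverse_radius(lat_rad):
    """R_e, Equation (2.19): equivalent radius along a line of constant
    latitude, including the cos(L) convergence factor."""
    return R_EQUATOR * math.cos(lat_rad) / math.sqrt(1 - ECC**2 * math.sin(lat_rad)**2)

def arc_length(radius, angle_rad):
    """Equation (2.17): arc length = r * theta."""
    return radius * angle_rad

# Ground distance subtended by one arc-second of latitude at 40 degrees north:
north_dist = arc_length(meridian_radius(math.radians(40.0)), math.radians(1 / 3600))
```

Note how transverse_radius goes to zero at the poles: the cos(L) factor captures the convergence of the lines of longitude described above.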
2.4 Geometry

Both tangent and inverse tangent functions are used repeatedly in this research. The partial derivative of the inverse tangent will also be required, and for this reason it will be reviewed below.
2.4.1 Derivative of Inverse Tangent Function. The partial derivative of an inverse tangent function is not a trivial calculation. However, it will be needed to generate the visual measurement H matrix, and therefore it will be demonstrated below. The lower case variables a, b, and c are all functions of at least one of the states. In this formulation, the variable c represents the measurement to be incorporated, the variable a represents a distance, and the variable b represents another distance:

c = tan⁻¹(a/b)    (2.20)
Taking the tangent of both sides of the equation eliminates the arctangent function and simplifies the right side of the equation:

tan(c) = tan(tan⁻¹(a/b)) = a/b    (2.21)

Then take the partial derivative of both sides of the equation with respect to the states (x):

∂tan(c)/∂x = ∂(a/b)/∂x    (2.22)

Expanding the left hand side of Equation (2.22) removes the tangent function from the partial derivative, which simplifies the equation and sets the equation up for a substitution in the next step:

∂tan(c)/∂x = sec²(c) ∂c/∂x = (1 + tan²(c)) ∂c/∂x = ∂(a/b)/∂x    (2.23)

Recalling from Equation (2.21) that tan(c) = a/b, this value is substituted into Equation (2.23), to simplify the left hand side by further removing the tangent function:

[1 + (a/b)²] ∂c/∂x = ∂(a/b)/∂x    (2.24)

Then, solve for the partial derivative ∂c/∂x:

∂c/∂x = [b² / (a² + b²)] ∂(a/b)/∂x    (2.25)

Finally, after expanding the partial derivative portion on the right of Equation (2.25) and cancelling the appropriate terms, the final form for ∂c/∂x is obtained:

∂c/∂x = [b² / (a² + b²)] [b(∂a/∂x) − a(∂b/∂x)] / b² = [b(∂a/∂x) − a(∂b/∂x)] / (a² + b²)    (2.26)
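Equation (2.26) can be spot-checked numerically: for any differentiable a(x) and b(x), the closed form should match a central finite difference of c(x) = tan⁻¹(a(x)/b(x)). A small check, with a and b chosen arbitrarily for illustration:

```python
import math

def a(x):  return 3.0 * x**2 + 1.0      # arbitrary smooth numerator
def b(x):  return 2.0 * x + 5.0         # arbitrary smooth denominator
def da(x): return 6.0 * x               # da/dx
def db(x): return 2.0                   # db/dx

def dc_analytic(x):
    """Equation (2.26): dc/dx = (b*da/dx - a*db/dx) / (a^2 + b^2)."""
    return (b(x) * da(x) - a(x) * db(x)) / (a(x)**2 + b(x)**2)

def dc_numeric(x, h=1e-6):
    """Central finite difference of c(x) = atan(a(x)/b(x))."""
    c = lambda t: math.atan(a(t) / b(t))
    return (c(x + h) - c(x - h)) / (2.0 * h)
```

The two derivatives agree to within finite-difference error, which is the kind of sanity check that is useful before coding the H matrix entries of Chapter 3.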
2.5 Navigation Orbit Geometry Determination

The navigation orbit was designed so that a particular target area would be within the camera FOV continuously for a ten minute period. A circular orbit was used in an effort to make this as easy as possible for the pilot.
2.5.1 Assumptions. There were several simplifying assumptions made:

1. No wind. This was so simple geometric equations could be used to determine an approximate airspeed and orbit size for the navigation orbit.

2. A side-mounted camera points out the right side of the aircraft. Assuming it points down the right wing line is an approximation since the camera-to-body frame rotation is not known. This approximation is accurate enough for flying the navigation orbit since the pilot will not be able to fly the exact profile.

The first assumption caused some difficulties while the second did not. The difficulties encountered will be addressed later in the lessons learned section in the appendices.

2.6 Summary

This chapter covered some basic theory that is needed in Chapter 3. Kalman filter basics were covered briefly, along with geometry relating to the arc length equation and other necessary geometric relationships. A description of the assumptions pertaining to the flight profile for the navigation orbit was also presented.
3. Methodology

3.1 Overview

The thrust of this research is the incorporation of a new type of visual measurement into a navigation Kalman filter. This measurement is formulated in two parts, each of which is an angle. The measurements as implemented in this research were simulated; however, a method for generating the measurements using a camera is presented at the end of this chapter. The chapter is broken up into five major parts: 3.2) Visual Measurements, 3.3) Visual Measurement Generation, 3.4) Visual Measurement Error Estimation, 3.5) Visual Measurement Generation using a Camera, and 3.6) Estimation of Attitude Error States. These last two sections extend the research to using an actual camera, even though one was not implemented in this thesis.
3.2 Visual Measurements

Each visual measurement is made up of two different parts: azimuth and elevation angles. These angles are measured from the aircraft to the target relative to the North - East - Down (NED) navigation frame. The angles used here are referenced from the Down vector and not out the nose of the aircraft (the way azimuth and elevation are typically referred to in an air-to-air radar). This means a point on the ground directly under the aircraft would be 0 degrees elevation and 0 degrees azimuth. Elevation and azimuth angles increase from 0 as the point of interest moves away from a point directly under the aircraft. The sense for these measurements is listed in Table 3.1.
The previously mentioned NED navigation frame is a local-level frame. As defined for the NED frame, elevation is along a line of constant longitude and azimuth is along a line of constant latitude. Aircraft altitude is known, and target altitude is assumed known. This assumption is considered reasonable in light of digital terrain
Table 3.1 NED Frame Elevation and Azimuth Sense

Target Direction from Aircraft in NED Frame    Elevation    Azimuth
North                                          +            n/a
South                                          -            n/a
East                                           n/a          +
West                                           n/a          -
elevation data (DTED) being readily available. This information is not incorporated into this research, but the data is available and could be included at a later date.
3.3 Visual Measurement Generation

The visual measurements are formed according to the geometry between the aircraft and the location of the selected target. The visual measurements were generated in the manner described below.
At visual measurement incorporation time, a vector was formed from the aircraft location to a known target location, expressed in the NED frame. This known target location was used with, and without, errors being injected in the target location (by adding errors into the stored angle values as they are used). Aircraft and target latitude, longitude and altitude were used to form this vector. No error is shown for tgt_lat, tgt_lon, or tgt_alt. It is acknowledged that there would be errors in these values, but this added complexity is not dealt with in this formulation. See Chapter 5 for recommendations extending the research in this area.
The difference between the target latitude and the latitude of the aircraft (in radians) is calculated by:

Δlat = tgt_lat − acft_lat − δlat    (3.1)

where:
Δlat is the difference between aircraft and target latitude (radians)
tgt_lat is the target latitude (radians)
acft_lat is the aircraft INS-reported latitude (radians)
δlat is the filter-estimated aircraft latitude error (radians)
(acft_lat + δlat) is the best estimate of the true latitude of the aircraft
Next, the distance correlating to the Δlat from the previous step is calculated using the arc length equation, with a radius defined by Equation (2.18). This will then be the distance from the aircraft to the target projected along a line of constant longitude. This projected distance is the North component of a vector from the aircraft to the target in the NED frame:

el_dist = R_n Δlat = R_n (tgt_lat − acft_lat − δlat)    (3.2)
The difference between the target longitude and the longitude of the aircraft (in radians) is calculated by:

Δlon = tgt_lon − acft_lon − δlon    (3.3)

where:
Δlon is the difference between aircraft and target longitude (radians)
tgt_lon is the target longitude (radians)
acft_lon is the aircraft INS-reported longitude (radians)
δlon is the filter-estimated aircraft longitude error (radians)
(acft_lon + δlon) is the best estimate of the true longitude of the aircraft
Finally, the distance correlating to the Δlon from the previous step is determined using the arc length equation. This is the distance from the aircraft to the target projected along a line of constant latitude. This projected distance is the East
component of a vector from the aircraft to the target in the NED frame:

az_dist = R_e Δlon = R_e (tgt_lon − acft_lon − δlon)    (3.4)
The difference between the target altitude and the altitude of the aircraft is calculated slightly differently, in keeping with the sign convention listed in Table 3.1 for the North and East directions in the NED frame. North is positive, East is positive, so according to the right-hand rule, Down is positive. This calculation is given by:

alt_dist = acft_alt + δalt − tgt_alt    (3.5)

where:
alt_dist is the difference between aircraft and target altitude (meters)
acft_alt is the aircraft INS-reported altitude (meters)
δalt is the filter-estimated aircraft altitude error (meters)
tgt_alt is the target altitude (meters)
(acft_alt + δalt) is the best estimate of the true altitude of the aircraft
The aircraft-to-target vector in the NED frame is formed by the North, East, and Down components generated in the previous three steps:

        [ el_dist  ]   [ R_n Δlat ]   [ R_n (tgt_lat − acft_lat − δlat) ]
n^n  =  [ az_dist  ] = [ R_e Δlon ] = [ R_e (tgt_lon − acft_lon − δlon) ]    (3.6)
        [ alt_dist ]   [ alt_dist ]   [ acft_alt + δalt − tgt_alt       ]
Finally, the NED frame vector is used to determine the elevation and azimuth angles from the aircraft to the target via Equations (3.7) and (3.8):

Elevation = tan⁻¹(el_dist / alt_dist)    (3.7)

Azimuth = tan⁻¹(az_dist / alt_dist)    (3.8)
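The measurement generation of Equations (3.1)-(3.8) reduces to a few lines of code. In this sketch the radii of curvature are frozen at assumed values and the filter error states (δlat, δlon, δalt) are taken as zero; the thesis evaluates R_n and R_e at the aircraft latitude:

```python
import math

R_N = 6361000.0   # meridian radius of curvature at the orbit latitude (assumed)
R_E = 4895000.0   # transverse radius, including the cos(L) factor (assumed)

def visual_measurement(acft_lat, acft_lon, acft_alt, tgt_lat, tgt_lon, tgt_alt):
    """Equations (3.1)-(3.8): NED-frame elevation and azimuth from the
    aircraft to the target, referenced from the Down vector. Latitudes and
    longitudes in radians, altitudes in meters; error states omitted (zero)."""
    el_dist = R_N * (tgt_lat - acft_lat)          # (3.2) North component
    az_dist = R_E * (tgt_lon - acft_lon)          # (3.4) East component
    alt_dist = acft_alt - tgt_alt                 # (3.5) Down component
    elevation = math.atan(el_dist / alt_dist)     # (3.7)
    azimuth = math.atan(az_dist / alt_dist)       # (3.8)
    return elevation, azimuth

# Target 2 km north and 1 km east (ground distance) of the point 5 km
# directly below the aircraft:
el, az = visual_measurement(0.7000, -1.4700, 5000.0,
                            0.7000 + 2000.0 / R_N, -1.4700 + 1000.0 / R_E, 0.0)
```

A target directly under the aircraft returns (0, 0), and a target to the north or east returns positive angles, matching the sense convention of Table 3.1.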
3.4 Visual Measurement Error Estimation

During filter operation, simulated measurements and filter predictions of those same measurements are differenced and used as measurements in the filter. The filter predicts the measurements by taking the equations used to generate the estimated measurements to form the h(x̂) vector. The nonlinear h(·) vector is then used to generate the linearized H(x̂) matrix. This is done by taking the partial derivative of h(·) with respect to each state in the filter.
3.4.1 Generating the H(x̂) Matrix. The first row of the H(x̂) matrix corresponds with the azimuth measurements, while the second row corresponds with the elevation measurements. The fully populated H(x̂) will look like Equation (3.9):

H(x̂) = [ ∂az/∂lon   0          ∂az/∂alt   0 0 0 0 0 0 0 0 0 ]    (azimuth row)
        [ 0          ∂el/∂lat   ∂el/∂alt   0 0 0 0 0 0 0 0 0 ]    (elevation row)    (3.9)
Generation of the first element of the azimuth row is shown below in Section 3.4.1.1. Equation (2.25) is repeated here, as it is needed to generate each of the 12 components of the H(x̂) matrix first row:

∂/∂x [tan⁻¹(a/b)] = [b(∂a/∂x) − a(∂b/∂x)] / (a² + b²)    (3.10)
3.4.1.1 Azimuth Row Generation. The right side of Equation (3.11) must be calculated with respect to each of the twelve states in the Kalman filter state matrix. The end result for azimuth will populate row 1 of Equation (3.9). The elements that are zero are due to the partial derivatives all being zero. This is due to the right side of Equation (3.11) not being a function of any of those states.

First, the ∂/∂lon [tan⁻¹(az_dist/alt_dist)] component of the H(x̂) matrix will be calculated (element for row 1, column 1). The generic equation is given by Equation (3.11):

∂/∂state_i [tan⁻¹(az_dist/alt_dist)] = [alt_dist (∂az_dist/∂state_i) − az_dist (∂alt_dist/∂state_i)] / (az_dist² + alt_dist²)    (3.11)
The generic equation becomes more specific when the partial derivative is taken with respect to a particular state:

∂/∂lon [tan⁻¹(az_dist/alt_dist)] = [alt_dist (∂az_dist/∂lon) − az_dist (∂alt_dist/∂lon)] / (az_dist² + alt_dist²)    (3.12)
The equation is then best broken into two parts. The partial derivative on the left side of the numerator is shown here:

∂az_dist/∂lon = ∂/∂lon [R_e (tgt_lon − acft_lon − δlon)] = −R_e    (3.13)

while the partial derivative on the right side of the numerator is:

∂alt_dist/∂lon = ∂/∂lon (acft_alt + δalt − tgt_alt) = 0    (3.14)
The results of the two partial derivative calculations are then substituted back into Equation (3.12), which can now be solved in this more simplistic form:

∂/∂lon [tan⁻¹(az_dist/alt_dist)] = [(acft_alt + δalt − tgt_alt)(−R_e) − (R_e[tgt_lon − acft_lon − δlon])(0)] / {(R_e[tgt_lon − acft_lon − δlon])² + (acft_alt + δalt − tgt_alt)²}    (3.15)

which reduces to:

∂/∂lon [tan⁻¹(az_dist/alt_dist)] = −R_e (acft_alt + δalt − tgt_alt) / {(R_e[tgt_lon − acft_lon − δlon])² + (acft_alt + δalt − tgt_alt)²}    (3.16)

The remaining three elements from Equation (3.9) are calculated in a like manner, and the results are shown below. Intermediate calculations are omitted, because they are similar to the ∂/∂lon [tan⁻¹(az_dist/alt_dist)] example given above.
∂/∂alt [tan⁻¹(az_dist/alt_dist)] = −R_e (tgt_lon − acft_lon − δlon) / {(R_e[tgt_lon − acft_lon − δlon])² + (acft_alt + δalt − tgt_alt)²}    (3.17)

∂/∂lat [tan⁻¹(el_dist/alt_dist)] = −R_n (acft_alt + δalt − tgt_alt) / {(R_n[tgt_lat − acft_lat − δlat])² + (acft_alt + δalt − tgt_alt)²}    (3.18)

∂/∂alt [tan⁻¹(el_dist/alt_dist)] = −R_n (tgt_lat − acft_lat − δlat) / {(R_n[tgt_lat − acft_lat − δlat])² + (acft_alt + δalt − tgt_alt)²}    (3.19)

3.5 Visual Measurement Generation using a Camera

Section 3.5 is not actually implemented in this thesis. It is presented here to support follow-on work.
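Before extending to a camera, the nonzero H-matrix entries of Equations (3.16)-(3.19) can be spot-checked against finite differences of the measurement equations. The radii and the aircraft/target positions below are hypothetical, and the entries are evaluated with the error states at zero:

```python
import math

R_N, R_E = 6361000.0, 4895000.0                          # radii of curvature (assumed)
TGT_LAT, TGT_LON, TGT_ALT = 0.700314, -1.469796, 0.0     # hypothetical target
ACFT_LAT, ACFT_LON, ACFT_ALT = 0.7, -1.47, 5000.0        # hypothetical aircraft

def azimuth(dlon, dalt):
    """Azimuth measurement as a function of the error states."""
    az_dist = R_E * (TGT_LON - ACFT_LON - dlon)
    return math.atan(az_dist / (ACFT_ALT + dalt - TGT_ALT))

def elevation(dlat, dalt):
    """Elevation measurement as a function of the error states."""
    el_dist = R_N * (TGT_LAT - ACFT_LAT - dlat)
    return math.atan(el_dist / (ACFT_ALT + dalt - TGT_ALT))

def h_entries(dlon=0.0, dlat=0.0, dalt=0.0):
    """Equations (3.16)-(3.19): the four nonzero H-matrix entries."""
    az_dist = R_E * (TGT_LON - ACFT_LON - dlon)
    el_dist = R_N * (TGT_LAT - ACFT_LAT - dlat)
    alt_dist = ACFT_ALT + dalt - TGT_ALT
    daz_dlon = -R_E * alt_dist / (az_dist**2 + alt_dist**2)                        # (3.16)
    daz_dalt = -R_E * (TGT_LON - ACFT_LON - dlon) / (az_dist**2 + alt_dist**2)     # (3.17)
    del_dlat = -R_N * alt_dist / (el_dist**2 + alt_dist**2)                        # (3.18)
    del_dalt = -R_N * (TGT_LAT - ACFT_LAT - dlat) / (el_dist**2 + alt_dist**2)     # (3.19)
    return daz_dlon, daz_dalt, del_dlat, del_dalt

daz_dlon, daz_dalt, del_dlat, del_dalt = h_entries()
```

Numerically differentiating azimuth() and elevation() reproduces these entries, which confirms both the algebra and the negative signs that come from the δ error states entering the measurements with a minus sign.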
3.5.1 Required Camera Specifications. Certain camera specifications are required to relate camera frame geometry to navigation frame geometry. Camera focal length (f) must be known, and one of the following must be known to determine the width and height of the viewed image. This information is used to determine the vertical and horizontal pixel size.

1. Vertical and horizontal field-of-view (fov_v^c and fov_h^c respectively), where the v and h subscripts are for vertical and horizontal, respectively. The superscript c indicates the variable is in the camera frame.

2. If the fields-of-view are not known, vertical and horizontal coverage (cov_v^c and cov_h^c respectively) can be used to generate these angles by:

fov_v^c = 2 tan⁻¹(cov_v^c / 2f)    (3.20)

fov_h^c = 2 tan⁻¹(cov_h^c / 2f)    (3.21)

Then, the vertical and horizontal pixel size can be calculated by:

pixel_v = cov_v^c / ver_pixel_count    (3.22)

pixel_h = cov_h^c / hor_pixel_count    (3.23)

3.5.2 Generate Target Latitude/Longitude. The camera geometry is used to translate target pixel location into a geometrical reference. This can then be rotated into the body frame and ultimately into the NED navigation frame.
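Equations (3.20)-(3.23) are straightforward to sketch. The focal length and sensor coverage below are illustrative only; the 768 x 484 pixel counts follow Section 1.7.6:

```python
import math

def fov_from_coverage(coverage, focal_length):
    """Equations (3.20)-(3.21): field-of-view angle from sensor coverage."""
    return 2.0 * math.atan(coverage / (2.0 * focal_length))

def pixel_size(coverage, pixel_count):
    """Equations (3.22)-(3.23): linear size of one pixel on the sensor."""
    return coverage / pixel_count

# Illustrative numbers: 25 mm lens, 8.4 mm x 6.4 mm sensor coverage,
# 768 x 484 pixel image sensor (pixel counts per Section 1.7.6).
f = 25.0e-3
cov_h, cov_v = 8.4e-3, 6.4e-3
fov_h = fov_from_coverage(cov_h, f)      # horizontal FOV, radians
fov_v = fov_from_coverage(cov_v, f)      # vertical FOV, radians
pix_h = pixel_size(cov_h, 768)           # meters per horizontal pixel
pix_v = pixel_size(cov_v, 484)           # meters per vertical pixel
```

The pixel sizes feed directly into the camera-frame vector of Equation (3.25) in the next step.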
Targets in an image are selected, and this selection generates pixel locations for each target. These pixel locations are referenced from the center of the image.
Figure 3.1 Sign Convention

Vector n is generated in a fashion similar to that shown in Figure 3.2, and can then be formed into a 3D vector in the camera frame by multiplying by the proper pixel size to get a distance in the camera frame. Knowledge of the camera focal length
Figure 3.2 Distances

provides the third dimension. Equation (3.25) now forms a vector to the target(s) in the camera frame. This is captured by the following equation:

        [ (pixel_v)(n_x) ]   [ n_x^c ]
n^c  =  [ (pixel_h)(n_y) ] = [ n_y^c ]    (3.25)
        [ f              ]   [ f     ]
Camera errors due to deficiencies in the optics must now be removed so the above vector points to the actual target location as opposed to the skewed location shown on the photo. The camera errors generated during the camera calibration procedure discussed in Section 3.5.7 are stored in an angular format, so vector n^c is used to generate the uncorrected azimuth (az_uncorr) (Equation (3.26)) and the uncorrected elevation (el_uncorr) (Equation (3.27)). Figure 3.3 graphically demonstrates these angles. They are the azimuth and elevation angles with optical errors still present.
Figure 3.3 Angles

az_uncorr^c = tan⁻¹(n_y^c / f)    (3.26)

el_uncorr^c = tan⁻¹(n_x^c / f)    (3.27)

These angles will become azimuth in the camera frame (az^c) and elevation in the camera frame (el^c) once the errors are removed via Equations (3.28) and (3.29). The values for the optical azimuth error (optical_az_error) and the optical elevation error (optical_el_error) are pre-computed during the camera alignment procedure (see Section 3.5.7):

az^c = az_uncorr^c − optical_az_error    (3.28)

el^c = el_uncorr^c − optical_el_error    (3.29)
The corrected az^c and el^c must be converted back to a vector so target information can be rotated from the camera frame, through the body frame (via Equation (3.31)),
to the navigation frame (via Equation (3.32)):

        [ (f)(tan(el^c)) ]   [ n_x^c ]
n^c  =  [ (f)(tan(az^c)) ] = [ n_y^c ]    (Vector in Camera Frame)    (3.30)
        [ f              ]   [ f     ]

This is accomplished by pre-multiplying the camera frame vector by the camera-to-body direction cosine matrix (DCM) (C_c^b) in Equation (3.31):

n^b = C_c^b n^c    (Vector in Body Frame)    (3.31)

Then the body frame vector is pre-multiplied by the body-to-navigation frame DCM (C_b^n) to transform the vector into the navigation frame:

n^n = C_b^n n^b    (Vector in Nav Frame)    (3.32)
The az^n (Equation (3.33)) and el^n (Equation (3.34)) angles can then be determined from the components of vector n^n. These angles are NED frame angles:

az^n = tan⁻¹(n_y^n / n_z^n)    (3.33)

el^n = tan⁻¹(n_x^n / n_z^n)    (3.34)
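The chain of Equations (3.30)-(3.34) can be sketched as follows. The DCMs here are placeholder z-axis rotations, not a calibrated C_c^b or a real attitude solution:

```python
import math

def rot_z(angle):
    """Simple rotation about the z axis, used as a stand-in DCM."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(M, v):
    """3x3 matrix times 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def camera_vector(az_c, el_c, f):
    """Equation (3.30): camera-frame vector from corrected angles."""
    return [f * math.tan(el_c), f * math.tan(az_c), f]

def nav_angles(az_c, el_c, f, C_c2b, C_b2n):
    """Equations (3.31)-(3.34): rotate to the nav frame, extract NED angles."""
    n_c = camera_vector(az_c, el_c, f)
    n_b = mat_vec(C_c2b, n_c)             # (3.31) camera -> body
    n_n = mat_vec(C_b2n, n_b)             # (3.32) body -> nav
    az_n = math.atan(n_n[1] / n_n[2])     # (3.33)
    el_n = math.atan(n_n[0] / n_n[2])     # (3.34)
    return az_n, el_n

# With identity DCMs the NED angles equal the camera angles:
az_n, el_n = nav_angles(0.1, 0.05, 25.0e-3, rot_z(0.0), rot_z(0.0))
```

The identity-DCM case is a useful consistency check: the tan/atan pair is exactly inverted, so the camera angles pass through unchanged.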
The next step is to determine target longitude and latitude. Longitude will be calculated by first determining Earth radius (R_e) along a line of constant latitude at the current aircraft latitude. Then az^n is used to determine distance along a line of constant latitude (az_dist) from aircraft longitude to target longitude:

az_dist = (alt_dist) tan(az^n)    (3.35)
Then the arc length equation is used to determine the radian change in longitude from the aircraft to the target:

Δlon = az_dist / R_e    (3.36)

Target longitude can then be calculated by the relationship:

tgt_lon = Δlon + acft_lon + δlon    (3.37)
Asimilarprocedure isusedtodeterminetarget latitudebyfirstdeterminingEarthradius(Rn)alonga lineofconstant longitudeatthecurrentaircraft latitude. ThisisanalogouswithRe above,exceptthechangeinlatitudeisalongalineofconstantlongitudewhichisanellipseratherthanacircle. Useeln todeterminedistancealongalineofconstantlongitude(eldist)fromaircraftlatitudetotargetlatitude:
eldist = (altdist)tan(eln) (3.38)
el_dist is then combined with the arc length equation to determine the radian change in latitude from the aircraft to the target:

Δlat = el_dist / R_n   (3.39)
Target latitude can then be determined by the relationship:

tgt_lat = Δlat + acft_lat + δlat   (3.40)
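The target-position steps of Equations (3.35) through (3.40) amount to two tangent projections and two arc-length divisions. A minimal sketch follows; the function name is hypothetical, and the filter's latitude/longitude error states (d_lat, d_lon) default to zero for simplicity:

```python
import math

def target_lat_lon(acft_lat, acft_lon, alt_dist, az_n, el_n, R_e, R_n,
                   d_lat=0.0, d_lon=0.0):
    # Equations (3.35)-(3.40); angles in radians, distances in meters.
    # d_lat/d_lon stand for the filter's latitude/longitude error states.
    az_dist = alt_dist * math.tan(az_n)   # (3.35) distance along constant latitude
    el_dist = alt_dist * math.tan(el_n)   # (3.38) distance along constant longitude
    delta_lon = az_dist / R_e             # (3.36) arc length -> radians
    delta_lat = el_dist / R_n             # (3.39)
    tgt_lon = delta_lon + acft_lon + d_lon   # (3.37)
    tgt_lat = delta_lat + acft_lat + d_lat   # (3.40)
    return tgt_lat, tgt_lon
```

A target directly below the aircraft (az^n = el^n = 0) returns the aircraft coordinates unchanged, which is the expected degenerate case.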
3.5.3 Generate Measurements from an Image. The procedure for generating measurements from an image is exactly the same as in Section 3.5.2, steps 3.5.2 through 3.5.2. The remaining steps in Section 3.5.2 need not be accomplished to
generate measurements because target latitude/longitude is estimated the first time the target is incorporated.
3.5.4 Generate Measurement Estimates. The procedure for estimating measurements is very similar to that of Section 3.5.2, except the procedure is performed backwards. First, az^n is determined from the estimate of target longitude by re-arranging Equation (3.37) to get the change in longitude from the current INS longitude to the target longitude:
Δlon = tgt_lon − acft_lon − δlon   (3.41)
Then Equation (3.36) is re-arranged to get the estimated az_dist:

az_dist = R_e Δlon   (3.42)
Similarly, el^n can be determined from the estimate of target latitude by re-arranging Equation (3.40) to get the change in latitude from the current INS latitude to the target latitude:
Δlat = tgt_lat − acft_lat − δlat   (3.43)

Then Equation (3.39) is manipulated to get the estimated el_dist:
el_dist = R_n Δlat   (3.44)
The previously determined az_dist and el_dist can be combined with altitude data to form the vector to the target in the navigation frame:
n^n = [ R_n Δlat ;  R_e Δlon ;  altitude ] = [ n^n_x ;  n^n_y ;  n^n_z ] = [ el_dist ;  az_dist ;  alt_dist ]   (3.45)
The C^b_n DCM is used to rotate the navigation frame vector to the body frame:

n^b = C^b_n n^n   (3.46)
Then the C^c_b DCM is used to rotate the body frame vector to the camera frame:

n^c = C^c_b n^b   (3.47)
Estimates of az^c and el^c in the camera frame are then determined from the components of the camera frame vector n^c:

az^c = tan⁻¹( n^c_y / n^c_z )   (3.48)

el^c = tan⁻¹( n^c_x / n^c_z )   (3.49)
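The backwards procedure of Equations (3.41) through (3.49) can likewise be sketched. The function name is this sketch's own, the error states are omitted for brevity, the DCMs C^b_n and C^c_b are passed in as 3×3 arrays, and `arctan2` stands in for the tan⁻¹ of the text:

```python
import numpy as np

def predicted_camera_angles(tgt_lat, tgt_lon, acft_lat, acft_lon, altitude,
                            R_e, R_n, C_b_n, C_c_b):
    # Equations (3.41)-(3.49), with the error states omitted for brevity
    d_lat = tgt_lat - acft_lat                             # (3.43)
    d_lon = tgt_lon - acft_lon                             # (3.41)
    n_n = np.array([R_n * d_lat, R_e * d_lon, altitude])   # (3.42), (3.44), (3.45)
    n_b = C_b_n @ n_n                                      # (3.46)
    n_c = C_c_b @ n_b                                      # (3.47)
    az_hat = np.arctan2(n_c[1], n_c[2])                    # (3.48)
    el_hat = np.arctan2(n_c[0], n_c[2])                    # (3.49)
    return az_hat, el_hat
```

These predicted angles are what the filter differences against the measured angles of Section 3.5.3 to form residuals.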
3.5.5 Camera-to-Body DCM Generation. The camera-to-body DCM (C^b_c) is generated by tracking targets with surveyed coordinates. A vector to the target is generated in the camera frame, and a vector to the target is generated in the nav frame. The nav frame vector is rotated into the body frame. The two vectors are now related through C^b_c.
3.5.6 Visual Measurement R Matrix Generation. Generating the uncertainty in a measurement is somewhat arbitrary. It should be based on sound reasoning, even though the final value that works best in the actual filter may be quite different. This value is typically found by tuning the Kalman filter, a process in which different uncertainties in the filter are adjusted to generate the best possible filter performance. In a perfect world the measurements would have zero error. However, this is not the case, so the errors that exist must be approximated. This is so the uncertainty in the measurements can be used to weight the measurements
properly. The following phenomena cause errors in the visual measurements and will be addressed individually: faulty pixel selection and optical deficiencies.
Faulty pixel selection is due to the inability to select the exact same point on a target each time that target is used to generate a measurement. An error of several pixels is realistic in such a procedure. This error injects itself directly into the measurements as an angle error. The vector generated in Section 3.5.2, Equation (3.24), will contain this error and inject it directly into Equations (3.51) and (3.52).
n^c = [ (pixel_v)(n_x + δn_x) ;  (pixel_h)(n_y + δn_y) ;  f ] = [ n^c_x + δn^c_x ;  n^c_y + δn^c_y ;  f ]   (3.50)

az^c_uncorr = az^c_uncorr_true + δaz^c_uncorr = tan⁻¹( (n^c_y_true + δn^c_y) / f )   (3.51)

el^c_uncorr = el^c_uncorr_true + δel^c_uncorr = tan⁻¹( (n^c_x_true + δn^c_x) / f )   (3.52)
The worst-case angle error for faulty pixel selection occurs when the selected pixel location is supposed to be in the exact center of the image but is off by some error amount. A typical case is demonstrated below using camera specs for the camera in use during algorithm development at AFIT (not used during flight test). The uncorrected true azimuth (az^c_uncorr_true) and uncorrected true elevation (el^c_uncorr_true) are each zero since they are supposed to be in the image center. Two pixels of error is assumed as a nominal case, and 0.0074 mm and 0.0073 mm are the pixel sizes for azimuth and elevation pixels, respectively:
δaz^c_uncorr = (az^c_uncorr_true + δaz^c_uncorr) − az^c_uncorr_true   (3.53)

= tan⁻¹( (n^c_y_true + δn^c_y) / f ) − tan⁻¹( n^c_y_true / f )   (3.54)

= tan⁻¹( ((0.0074)(0) + (0.0074)(2)) / 5.2 ) − tan⁻¹( (0.0074)(0) / 5.2 )   (3.55)

= 0.0028 − 0   (3.56)

= 0.0028 radian error   (3.57)

δel^c_uncorr = (el^c_uncorr_true + δel^c_uncorr) − el^c_uncorr_true   (3.58)

= tan⁻¹( (n^c_x_true + δn^c_x) / f ) − tan⁻¹( n^c_x_true / f )   (3.59)

= tan⁻¹( ((0.0073)(0) + (0.0073)(2)) / 5.2 ) − tan⁻¹( (0.0073)(0) / 5.2 )   (3.60)

= 0.0028 − 0   (3.61)

= 0.0028 radian error   (3.62)
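The worked numbers above are easy to reproduce. The helper below is a sketch (not from the thesis); it evaluates the difference of the two tan⁻¹ terms for the quoted pixel pitches, a two-pixel error, and the 5.2 mm focal length appearing in Equations (3.55) and (3.60):

```python
import math

F_MM = 5.2  # focal length used in (3.55)/(3.60), in mm

def pixel_selection_angle_error(pixel_size_mm, pixels_off, true_offset_mm=0.0):
    # Difference between the corrupted and true tan^-1 measurements,
    # per Equations (3.53)-(3.54) and (3.58)-(3.59)
    corrupted = math.atan((true_offset_mm + pixel_size_mm * pixels_off) / F_MM)
    true = math.atan(true_offset_mm / F_MM)
    return corrupted - true

az_err = pixel_selection_angle_error(0.0074, 2)  # azimuth, about 0.0028 rad
el_err = pixel_selection_angle_error(0.0073, 2)  # elevation, about 0.0028 rad
```

Both errors round to the 0.0028-radian figure quoted in Equations (3.57) and (3.62).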
The errors incurred above inject directly into the next type of error, optical deficiencies. Optical deficiencies are due to abnormalities in the camera lens, imperfections in camera component alignment, and software resident in the camera, as well as other physical camera phenomena. A calibration procedure is performed in Section 3.5.7, but this cannot remove all the errors present. This is especially true when the angles generated in the previous section have errors due to improper pixel selection.
camera is set up a known distance from a calibration target (similar to that shown in Figure 3.4). The camera must be carefully aligned so its focal plane is parallel to the target, or errors will be injected into the final result, since this will skew the geometry between the camera and the target. A mis-alignment of this sort essentially changes the physical geometry of the setup. Since this error is not detected, it will pollute the error surface being generated. Then a picture is taken of the calibration target. This picture is used to generate the azimuth and elevation angles to each element in the calibration photo. These angles are then compared with the known angles that are generated from the known physical geometry between the camera and the calibration target. The az_error and el_error are defined in Equations (3.71) and (3.72), respectively:
az_error = az_true − az_meas   (3.71)

el_error = el_true − el_meas   (3.72)
These azimuth and elevation errors generate an error surface for the entire camera field-of-view. The azimuth error surface generated at AFIT during development is shown below in Figure 3.5. This figure describes the error in azimuth as a function of horizontal and vertical position in the image. The surface appears as a planar image due to the sign convention described previously. The elevation error surface is similar in appearance.
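Equations (3.71) and (3.72) are simple elementwise differences, so the whole error surface can be generated in one vectorized step once the true and measured angle grids exist. The function below is a sketch with assumed names, not the AFIT calibration code:

```python
import numpy as np

def error_surfaces(az_true, el_true, az_meas, el_meas):
    # Equations (3.71)-(3.72), vectorized over the whole calibration grid
    az_error = np.asarray(az_true) - np.asarray(az_meas)
    el_error = np.asarray(el_true) - np.asarray(el_meas)
    return az_error, el_error
```

The resulting arrays can then be interpolated at any pixel location to supply the optical_az_error and optical_el_error corrections of Equations (3.28) and (3.29).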
[Plot: "Camera Calibration Target"; Horizontal Distance in inches vs. Vertical Distance in inches]

Figure 3.4   Camera Calibration Target
[Surface plot: "Azimuth Error due to Camera Lens"; Azimuth Error (radians) vs. Horizontal Distance (pixels) and Vertical Distance (pixels)]

Figure 3.5   Azimuth Error Surface
3.6 Estimation of Attitude Error States

The method for estimating the attitude error states presented below is not actually implemented in this thesis. It is presented here to support follow-on work.

This section is similar to Section 3.4, in that the H(x) matrix must be derived and calculated. However, the attitude error terms must also be populated in this matrix, in addition to the four terms already described.
Recall that the camera frame vector n^c must be rotated through two frames to generate n^n:

n^n = C^n_b C^b_c n^c   (3.73)

The camera-to-body frame DCM (C^b_c) is a fixed DCM in this research. However, the body-to-navigation frame DCM (C^n_b) changes every epoch and contains errors associated with three of the states. Those states are the North axis tilt error, the East axis tilt error, and the Down axis tilt error. This means the partial derivative of the measurements with respect to the states will have more elements that are non-zero. Those elements are shown in Equation (3.74), with the first three columns being the same as presented previously. The incorporation of a camera into this research will certainly cause filter performance to suffer somewhat; how much is to be determined. However, the errors should be small enough, at least initially, that visual measurements still generate better filter performance. The point where the tilt errors grow too large for visual measurements to be effective is also to be determined.
H(x) = [ az row ; el row ] =

⎡ 0          ∂az/∂lon   ∂az/∂alt   0  0  0   ∂az/∂ε_N   ∂az/∂ε_E   ∂az/∂ε_D   0  0  0 ⎤
⎣ ∂el/∂lat   0          ∂el/∂alt   0  0  0   ∂el/∂ε_N   ∂el/∂ε_E   ∂el/∂ε_D   0  0  0 ⎦   (3.74)

where ε_N, ε_E, and ε_D are the North, East, and Down axis tilt errors. The actual formulation of this new H(x) is not derived here, but it should be similar in derivation to that from Section 3.4.
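For follow-on work, one way to prototype the fuller H(x) before deriving it analytically is a central-difference Jacobian of the measurement function, one row per measurement. This is a generic numerical sketch, not part of the thesis implementation; `h` is any scalar measurement function of the 12-element state vector:

```python
import numpy as np

def numeric_H_row(h, x, eps=1e-6):
    # Central-difference partials of the scalar measurement h with respect to
    # each element of the state vector x: one row of H(x)
    x = np.asarray(x, dtype=float)
    row = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        row[i] = (h(x + dx) - h(x - dx)) / (2.0 * eps)
    return row
```

Such a numerical row also provides a check on any hand-derived partials before they are wired into the filter.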
3.7 Summary

This chapter defined the visual measurements to be incorporated into the Kalman filter and demonstrated the formulation of these angles. The method of generating visual measurement errors was covered in detail, as well as the theory for incorporating visual measurements from an actual camera. The effects of the simulated measurements as implemented in this thesis are analyzed in the next chapter.
4. Results and Analysis

4.1 Sortie Overview

Simulated visual measurements were generated during Kalman filter operation. Elevation and azimuth angles were generated at a 2 Hz rate during the 6-minute 27-second navigation portion from time 244995 to time 245382 (GPS week seconds). This was accomplished by running the GPS filter (see Table 4.1 for an explanation of the different filter types) and generating azimuth and elevation angles at each image incorporation time. This data was then stored in a data file for incorporation into the Kalman filter for analysis.
The sortie chosen for analysis in this research was a daylight flight from Edwards AFB to the Channel Islands (off the coast of Los Angeles) and back. A navigation orbit was performed right after takeoff, with the remainder of the flight dedicated to the camera evaluation. The ground track for the sortie is shown in Figure 4.1, with the altitude displayed in Figure 4.2. These figures are given to demonstrate the sortie profile visually. The filter data shown here incorporated GPS measurements only, and no baro or visual measurements were used. This data was saved and used as a truth source. There were other truth sources available (i.e., Ashtech-only, EGI-only, and a differential GPS solution). The Ashtech-only and EGI-only solutions were less smooth than the filter-generated truth solution. Their solutions showed position jumps on the order of several meters, and for this reason they were discarded as potential truth sources. The differential GPS solution generated by the airborne Ashtech receiver and the ground-based Ashtech receiver had data dropouts that made this solution unsuitable. The filter-generated truth data is not ideal, but it is appropriate to demonstrate navigation accuracy increase or decrease as GPS measurements are removed, and baro and visual measurements take their place.
[Plot (Figure 4.1): "Position For Nav Orbit, GPS Filter"; Latitude (degrees) vs. Longitude (degrees); GPS Measurements Only; Navigation Orbit and Channel Islands annotated]

[Plot: "Altitude For Entire Sortie (meters)"; Altitude (meters) vs. Time (GPS Week Seconds); GPS Measurements Only; Navigation Orbit annotated]

Figure 4.2   Altitude for Entire Sortie
Table 4.1   Filter Configurations

Filter Name       Filter Description
GPS Filter        GPS measurements incorporated before, during, and after the navigation orbit
Unaided Filter    No measurements incorporated during the navigation orbit
Baro Filter       Only Baro measurements incorporated during the navigation orbit
Baro/Vis Filter   Only Baro and Visual measurements incorporated during the navigation orbit
Vis Filter        Only Visual measurements incorporated during the navigation orbit

Unfortunately, generating simulated visual measurements using the GPS filter resulted in minor errors in the truth position due to time tagging errors (the GPS measurements were not synchronous with the simulated visual measurements). This produced an error of less than one meter from the truth position used to generate visual measurements. This is acceptable since the best performing filter under analysis has an approximate average error of 10 meters (under optimum conditions). There are five different filter configurations that will be discussed. Each filter has GPS measurements incorporated before and after the navigation orbit. They differ only in the type, or lack, of measurements that are incorporated during the navigation orbit. They are listed in Table 4.1, along with how they will be referenced from this point forward in this thesis.
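The five configurations of Table 4.1 reduce to a small schedule: GPS outside the navigation orbit for every filter, and a per-filter measurement set inside it. A sketch with assumed names, using the orbit window quoted in Section 4.1:

```python
# Table 4.1 as a measurement schedule. ORBIT_START/ORBIT_END are the GPS week
# seconds quoted for the navigation portion; all other names are this sketch's own.
ORBIT_START, ORBIT_END = 244995, 245382

CONFIGS = {
    "GPS Filter": {"gps"},
    "Unaided Filter": set(),
    "Baro Filter": {"baro"},
    "Baro/Vis Filter": {"baro", "visual"},
    "Vis Filter": {"visual"},
}

def measurements_to_use(filter_name, t):
    # Inside the orbit the measurement set depends on the configuration;
    # outside it every filter incorporates GPS
    if ORBIT_START <= t <= ORBIT_END:
        return CONFIGS[filter_name]
    return {"gps"}
```

Structuring the comparison this way keeps all five filter runs identical except for the measurement set applied during the outage window.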
Figure 4.1 shows the start of navigation in the upper right corner along with the navigation orbit. The Channel Island overflight is in the lower left corner. Figure 4.2 shows the altitude for the entire sortie (in meters). The first level-off at approximately 3,200 meters is the navigation orbit. The figure clearly shows some altitude deviations during the orbit. This was due to difficulties in flying a constant orbit. These altitude variations will be investigated in greater detail later in this section.
Figure 4.3 highlights the navigation orbit area. This orbit was flown at approximately 3,300 meters above the ground and at an airspeed of 154 meters/second
[Plot: "Position For Entire Sortie (degrees)"; Latitude (degrees) vs. Longitude (degrees); legend: Filter, Ashtech, EGI; Aircraft Parking Spot, Navigation Orbit, Runway Approach End, and Runway Departure End annotated]
Figure 4.3   Ramp Area and Navigation Orbit

(300 knots indicated airspeed). This resulted in the navigation orbit diameter being roughly 5,721 meters (3.1 nautical miles). Figure 4.3 also shows the parking area for the test aircraft, the ground taxi, both the takeoff and landing, as well as the navigation orbit. The analysis will concentrate on the navigation orbit. Different combinations of measurements will be incorporated during this orbit and compared with each other to determine which filter operates the best.
The orbit is shown in Figure 4.4. The data was generated using the GPS filter during the entire orbit. This is the truth data against which the Unaided filter, the Baro filter, and the Baro/Vis filter navigation orbits will be compared.

It is clear that the GPS filter performs much better than the Unaided filter from a comparison of Figures 4.4 and 4.5, respectively. It is also clear that some sort
[Plot: "Position For Nav Orbit, No Aiding"; Latitude (degrees) vs. Longitude (degrees); legend: GPS Outage (No Measurements), Truth, No Aiding]

Figure 4.5   Navigation Orbit for Unaided Filter
of aiding is required during the navigation orbit GPS outage to improve navigational accuracy.

The end portion of the Unaided filter orbit, when GPS measurements are re-incorporated, is cause for concern. With GPS measurements again available, the Unaided filter operates more poorly than the Baro or Baro/Vis filter (not shown). The larger the disparity between the filter estimates and the incorporated measurements, the larger the filter gyrations until everything smoothes out. The Unaided filter shows a much larger lateral fluctuation than the Baro or Baro/Vis filters. The reason for this is unclear when only looking at Figure 4.5; this is cause for further investigation.
4.2 General Filter Comments

The Kalman filter used in this research is a twelve-state filter. There are some minor problems with the implementation that cause the filter to require aiding. These problems appear to reside in the vertical channel. Problems in the vertical channel are normal in inertial navigation systems, but this filter implementation appears to be worse than normal. These difficulties were allowed to remain, as the filter would be continuously aided in some fashion, and the navigation accuracy increase or decrease can still be determined when incorporating visual measurements. These problems are shown in Figure 4.5, where there was no aiding during the navigation orbit. The Unaided lateral position solution clearly diverges from the true position.
GPS measurements are re-incorporated at the end of the 6½-minute navigation orbit. The large lateral fluctuations that are visible at the end of the navigation orbit are due to the large discrepancy between the GPS measurements and the filter estimates of position.
4.3 Unaided and Baro Filter Cases

The results of the filters without visual aiding (Unaided and Baro) will be shown for comparison purposes with the Baro/Vis and Vis filters. The three-dimensional (3D) error for each of these two filters is shown in Figure 4.6. The Unaided filter is clearly prone to very large errors. However, this is mostly due to
to