
Covariance Estimation for GPS-LiDAR Sensor Fusion for UAVs

Akshay Shetty and Grace Xingxin Gao, University of Illinois at Urbana-Champaign

Akshay Shetty received the B.Tech. degree in aerospace engineering from the Indian Institute of Technology, Bombay, India in 2014. He received the M.S. degree in aerospace engineering from the University of Illinois at Urbana-Champaign in 2017. He worked as an intern at NASA Ames Research Center during summer 2016 and summer 2017. He is currently pursuing the PhD degree at the University of Illinois at Urbana-Champaign. His research interests include robotics and sensor fusion.

Grace Xingxin Gao received the B.S. degree in mechanical engineering and the M.S. degree in electrical engineering from Tsinghua University, Beijing, China in 2001 and 2003. She received the PhD degree in electrical engineering from Stanford University in 2008. From 2008 to 2012, she was a research associate at Stanford University. Since 2012, she has been with the University of Illinois at Urbana-Champaign, where she is presently an assistant professor in the Aerospace Engineering Department. Her research interests are systems, signals, control, and robotics.

Abstract—Outdoor applications for small-scale Unmanned Aerial Vehicles (UAVs) commonly rely on Global Positioning System (GPS) receivers for continuous and accurate position estimates. However, in urban areas GPS satellite signals might be reflected or blocked by buildings, resulting in multipath or non-line-of-sight (NLOS) errors. In such cases, additional on-board sensors such as Light Detection and Ranging (LiDAR) are desirable. Kalman Filtering and its variations are commonly used to fuse GPS and LiDAR measurements. However, it is important, yet challenging, to accurately characterize the error covariance of the sensor measurements.

In this paper, we propose a GPS-LiDAR fusion technique with a novel method for efficiently modeling the position error covariance based on LiDAR point clouds. We model the covariance as a function of the features distributed in the point cloud. We use the LiDAR point clouds in two ways: to estimate incremental motion by matching consecutive point clouds; and to estimate global pose by matching with a 3-dimensional (3D) city model. For GPS measurements, we use the 3D city model to eliminate NLOS satellites and model the measurement covariance based on the received signal-to-noise-ratio (SNR) values. Finally, all the above measurements and error covariance matrices are input to an Unscented Kalman Filter (UKF), which estimates the globally referenced pose of the UAV. To validate our algorithm, we conduct UAV experiments in GPS-challenged urban environments on the University of Illinois at Urbana-Champaign campus.

Keywords—Unmanned aerial vehicles (UAVs), light detection and ranging (LiDAR), 3-dimensional (3D) city model, global positioning system (GPS), unscented Kalman filter (UKF)

I. INTRODUCTION

Emerging autonomous applications of UAVs, such as 3D modeling, surveying, search and rescue, and package delivery, involve flying in urban environments. To enable such autonomous control, we need a continuous and reliable source for the UAVs' positioning. In most cases, GPS is primarily relied on for outdoor positioning. However, in an urban environment, GPS signals from the satellites are often blocked or reflected by surrounding structures, causing large errors in the position output.

Different GPS techniques [1] have been demonstrated to reduce the effect of multipath and NLOS signals on positioning errors. Furthermore, there has been a considerable amount of research into reducing positioning errors with the aid of 3D city models [2]–[4]. In cases when GPS is unreliable, additional on-board sensors such as LiDAR are commonly used to obtain the navigation solution. A LiDAR provides a point cloud of the surroundings, and hence is able to detect a large number of features in dense urban environments. Positioning based on LiDAR point clouds has been demonstrated primarily by applying different simultaneous localization and mapping (SLAM) algorithms [5], [6]. In many cases, algorithms implement variants of Iterative Closest Point (ICP) [7], [8] to register new point clouds. Furthermore, there has been analysis of the error covariance of LiDAR-based position estimates. These methods have been demonstrated in simulations and practice, and include training kernels based on likelihood optimization [9], [10], obtaining the covariance from the Fisher Information matrix [11], [12], and obtaining the covariance from 2D scanned lines [13].

Different techniques to integrate LiDAR and GPS have been demonstrated to improve the overall navigation solution. The most straightforward way to integrate these sensors is to loosely couple [14] them, i.e., to directly use the position output. However, in certain cases, a tightly-coupled [15]–[18] sensor fusion architecture tends to work better than a loosely-coupled one. For example, GPS position output in urban areas generally contains errors or is unavailable, yet some of the intermediate pseudorange measurements are possibly accurate and can aid in positioning [19], [20].

A. Our approach

The main contribution of this paper is a GPS-LiDAR fusion technique with a novel method for efficiently modeling the error covariance in LiDAR-based position measurements. Figure 1 shows the different components involved in the sensor fusion.

Fig. 1: Overview of sensor fusion architecture

We use the LiDAR point clouds in two ways: to estimate incremental motion by matching consecutive point clouds; and to estimate global pose by matching with a 3D city model. We use ICP for matching the point clouds in both cases. For the LiDAR-based position estimates, we proceed to build the error covariance model depending on the surface and edge features in the point cloud.

For the GPS measurement model, we use the pseudorange measurements from a stationary reference receiver and an on-board GPS receiver to obtain a vector of double-difference measurements. We use the global position estimate from the LiDAR-3D city model matching to construct LOS vectors to all the detected satellites. We then use the 3D city model to detect NLOS satellites, and consequently refine the double-difference measurement vector. We create a covariance matrix for the GPS double-difference measurement vector based on the SNR of the individual pseudorange measurements.

Finally, we implement a UKF to integrate all LiDAR and GPS measurements, along with an on-board inertial measurement unit (IMU). We test the filter on an urban dataset to show an improvement in the navigation solution. We present a more extensive version of our work in [21], including detailed analysis and additional experimental results.

    II. LIDAR-BASED POSE ESTIMATION

We perform the LiDAR-based pose estimation in two ways. First, we estimate incremental motion by matching consecutive point clouds. We use the ICP algorithm to estimate the pose transformation between the previous and the current point clouds.
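As a concrete illustration, here is a minimal sketch of this matching step using the open-source Open3D library; the library choice, the point-to-point ICP variant, and the correspondence distance are illustrative assumptions, as the paper does not name a particular implementation.

```python
# Sketch: incremental motion by registering consecutive LiDAR scans
# with ICP. Open3D, the point-to-point variant, and the correspondence
# distance are illustrative assumptions, not choices from the paper.
import numpy as np
import open3d as o3d

def icp_increment(prev_pts, curr_pts, init=np.eye(4), max_dist=1.0):
    """Return the 4x4 pose transformation taking curr_pts onto prev_pts.

    prev_pts, curr_pts: (N, 3) arrays of points from consecutive scans.
    init: initial guess for the transformation (e.g., the last increment).
    max_dist: maximum correspondence distance in meters.
    """
    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(curr_pts)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(prev_pts)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

The same call, seeded with a GPS-based pose guess and a point cloud sampled from the city model, could also serve the global matching described next.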

Second, we estimate global pose by matching the point cloud with a 3-dimensional (3D) city model. We generate our 3D city model using data from two sources: the Illinois Geospatial Data Clearinghouse [22] and OpenStreetMap (OSM) [23]. Again, we use the ICP algorithm to match the point clouds. We begin with an initial pose guess $\hat{x}_L^0$ obtained from the position output of the on-board GPS receiver and the pose estimate from the previous iteration. Next, we project the LiDAR point cloud to the same space as the 3D city model and implement ICP to obtain the pose transformation between them. We use this output to estimate the global pose $\hat{x}_L$. Figure 2 shows the results of implementing the above method.

Fig. 2: Global pose estimation with the aid of a 3D city model. (a) shows the initial position guess $\hat{x}_L^0$ (red) and the LiDAR point cloud projected on the same space as the 3D city model. (b) shows the updated global position $\hat{x}_L$ (green) after the ICP step. We observe an improvement in the global position, as the LiDAR point cloud matches with the 3D city model.

While navigating in urban areas, the GPS receiver position output used for the initial pose guess $\hat{x}_L^0$ might contain large errors in certain directions. This might cause ICP to converge to a local minimum, depending on the features in the point cloud.

    III. MODELING LIDAR POSITION ERROR COVARIANCE

We model the LiDAR position error covariance as a function of the surrounding features, focusing primarily on surface and edge features in the point cloud. We extract these feature points based on their curvature values [5], as sketched below.
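A minimal sketch of curvature-style feature scoring in the spirit of LOAM [5]; the neighborhood size and thresholds are illustrative choices, as [5] defines its own smoothness metric over scan lines.

```python
# Sketch: curvature-style feature scoring along one LiDAR scan line,
# in the spirit of LOAM [5]. Neighborhood size and thresholds are
# illustrative choices, not values from the paper.
import numpy as np

def curvature_scores(scan_pts, k=5):
    """scan_pts: (N, 3) points ordered along a single scan line."""
    n = len(scan_pts)
    scores = np.full(n, np.nan)
    for i in range(k, n - k):
        # Sum of offsets to the 2k neighbors: small on planes, large at edges.
        diff = (scan_pts[i - k:i + k + 1] - scan_pts[i]).sum(axis=0)
        scores[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_pts[i]))
    return scores

# Points with low scores are surface candidates; high scores, edge candidates.
```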

For the j-th surface feature point, we first compute the unit normal $\hat{u}^j$ by using 9 of the neighboring points to fit a plane. Next, we create an orthonormal basis $\{\hat{u}^j, \hat{n}_u^j, \hat{m}_u^j\}$ with the corresponding normal. For the j-th edge feature point, we first find the orientation of the edge $\hat{e}^j$ using scans above and below the edge point. Again, we create an orthonormal basis $\{\hat{e}^j, \hat{n}_e^j, \hat{m}_e^j\}$ with the corresponding edge vector. To create the error covariance matrices for the individual feature points, we use the basis vectors as our eigenvectors:

$$V_u^j = \begin{bmatrix} \hat{u}^j & \hat{n}_u^j & \hat{m}_u^j \end{bmatrix} \qquad (1)$$

$$V_e^j = \begin{bmatrix} \hat{e}^j & \hat{n}_e^j & \hat{m}_e^j \end{bmatrix} \qquad (2)$$

We model the error covariance ellipsoid with the hypothesis that each surface feature point contributes in reducing position error in the direction of the corresponding surface normal. For each edge feature point, we model that the position error reduces in the directions perpendicular to the edge vector. A vertical edge, for example, would help in reducing horizontal position error. Additionally, we assume that points closer to the LiDAR are more reliable than those further away, because of the density of points. Hence, we use the following eigenvalues corresponding to the eigenvectors in (1)-(2):

$$L_u^j = d_u^j \cdot \mathrm{diag}(a_u, b_u, b_u) \qquad (3)$$

$$L_e^j = d_e^j \cdot \mathrm{diag}(a_e, b_e, b_e) \qquad (4)$$


Fig. 3: Overall position error ellipsoids $R_L$ for two point clouds generated by the on-board LiDAR in an urban environment.

where $a_u$, $b_u$, $a_e$, and $b_e$ are experimentally tuned constants such that $a_u \ll b_u$ and $a_e \gg b_e$; and $d_u^j$ and $d_e^j$ are the distances of the j-th surface and edge point from the LiDAR. Next, using the above eigenvectors and eigenvalues, we construct the position error covariance matrices for the feature points as follows:

$$R_u^j = V_u^j \cdot L_u^j \cdot \left(V_u^j\right)^{-1} \qquad (5)$$

$$R_e^j = V_e^j \cdot L_e^j \cdot \left(V_e^j\right)^{-1} \qquad (6)$$

Finally, we combine the ellipsoids from the individual features to obtain the overall position error covariance:

$$R_L = \left( \sum_{j=1}^{n_u} \left(R_u^j\right)^{-1} + \sum_{j=1}^{n_e} \left(R_e^j\right)^{-1} \right)^{-1}, \qquad (7)$$

where $R_u^j$ and $R_e^j$ are obtained in (5)-(6); $n_u$ and $n_e$ are the number of surface and edge feature points in the point cloud. Figure 3 shows the resulting position error covariance ellipsoids for two urban environments.
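Equations (1)-(7) reduce to a few lines of linear algebra. Below is a minimal NumPy sketch; the constants are illustrative placeholders for the experimentally tuned values, and each feature is assumed to arrive as an orthonormal basis (first column the normal or edge direction) plus its distance from the LiDAR.

```python
# Sketch: overall LiDAR position error covariance per equations (1)-(7).
# The constants below are illustrative placeholders; the paper tunes
# them experimentally.
import numpy as np

A_U, B_U = 0.1, 10.0   # surface: small variance along the normal
A_E, B_E = 10.0, 0.1   # edge: small variance perpendicular to the edge

def feature_covariance(V, dist, a, b):
    """Equations (3)-(6): V is the 3x3 orthonormal basis from (1)-(2),
    dist the feature point's distance from the LiDAR."""
    L = dist * np.diag([a, b, b])
    return V @ L @ np.linalg.inv(V)

def overall_covariance(surface_feats, edge_feats):
    """Equation (7): fuse per-feature ellipsoids in information form.
    Each feature is an (orthonormal basis, distance) pair."""
    info = np.zeros((3, 3))
    for V, d in surface_feats:
        info += np.linalg.inv(feature_covariance(V, d, A_U, B_U))
    for V, d in edge_feats:
        info += np.linalg.inv(feature_covariance(V, d, A_E, B_E))
    return np.linalg.inv(info)  # R_L
```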

    IV. GPS MEASUREMENT MODEL

To create the GPS measurement model, we use the pseudorange measurements. The measurement between a GPS receiver u and the k-th satellite can be modelled as [24]:

$$\rho_u^k = r_u^k + c\left[\delta t_u - \delta t^k\right] + I_{\rho_u}^k + T_{\rho_u}^k + \epsilon_{\rho_u}^k, \qquad (8)$$

where $r_u^k$ is the range between u and k; c is the speed of light; $\delta t_u$ and $\delta t^k$ are the receiver and satellite clock biases; $I_{\rho_u}^k$ and $T_{\rho_u}^k$ are atmospheric errors; and $\epsilon_{\rho_u}^k$ is the measurement noise. In order to eliminate certain error terms, we use double-difference pseudorange measurements [24], which are calculated by differencing the pseudorange measurements between two satellites and between two receivers. The double-difference pseudorange measurements between two satellites k and l, and

Fig. 4: Elimination of NLOS satellite signals. The receiver position (black) is projected on the 3D city model. LOS vectors are drawn to all detected satellites, and those that intersect the 3D city model (red) are eliminated from further calculations.

between two GPS receivers u and r, within a short baseline, can be represented as:

$$\rho_{ur}^{k \cdot l} \approx -\left(\mathbf{1}_r^k - \mathbf{1}_r^l\right) \cdot \mathbf{x}_{ur} + \epsilon_{\rho_{ur}}^{k \cdot l}, \qquad (9)$$

where $\mathbf{x}_{ur}$ is the baseline between the two receivers; and $\mathbf{1}_r^k$ is a unit vector from the receiver r to the satellite k.
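For concreteness, a sketch of forming one double-difference measurement from raw pseudoranges; taking satellite l as the common pivot is a standard convention assumed here.

```python
# Sketch: double-difference pseudorange per equation (9).
# rho_u and rho_r map satellite id -> raw pseudorange at the rover u
# and the reference receiver r; l is the pivot satellite.
def double_difference(rho_u, rho_r, k, l):
    single_diff_k = rho_u[k] - rho_r[k]   # removes satellite k clock bias
    single_diff_l = rho_u[l] - rho_r[l]   # removes satellite l clock bias
    return single_diff_k - single_diff_l  # removes receiver clock biases
```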

Using a 3D city model, we check if any of the satellites detected by the receiver are NLOS signals. We use the position output generated by the LiDAR-3D city model matching, as described in Section II, to locate the receiver on the 3D city model. Next, we draw LOS vectors from the receiver to every satellite detected by the receiver and eliminate satellites whose corresponding LOS vectors intersect the 3D city model. Fig. 4 shows the above implementation in an urban scenario.
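A sketch of this NLOS test, assuming the city model is available as a triangle mesh and using Open3D's ray-casting scene for the intersection check; the paper describes the geometric test, not this particular implementation.

```python
# Sketch: eliminate NLOS satellites by casting LOS rays against the
# 3D city model. Assumes Open3D's RaycastingScene and a triangle-mesh
# city model; both are illustrative implementation choices.
import numpy as np
import open3d as o3d

def los_satellites(receiver_pos, sat_positions, city_mesh):
    """Return indices of satellites whose LOS ray misses the city model."""
    scene = o3d.t.geometry.RaycastingScene()
    scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(city_mesh))
    dirs = sat_positions - receiver_pos            # (N, 3) LOS vectors
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    rays = np.hstack([np.tile(receiver_pos, (len(dirs), 1)), dirs])
    hits = scene.cast_rays(o3d.core.Tensor(rays.astype(np.float32)))
    # t_hit is finite where the ray intersects a building: NLOS satellite.
    return np.where(np.isinf(hits['t_hit'].numpy()))[0]
```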

After eliminating the NLOS satellites, we create a vector of double-difference pseudorange measurements and proceed to model its covariance. We assume that the individual pseudorange measurements are independent, and that the variance for each measurement is a function of the corresponding SNR. To obtain the covariance matrix $R_{\rho_{ur}^{DD}}$ for the double-difference measurements, we simply propagate the individual measurement covariances.
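Since double differencing is linear in the raw pseudoranges, the propagation is $R_{DD} = D \Sigma D^T$ for the differencing matrix D. A sketch follows; the inverse-SNR variance mapping is an illustrative assumption, as this summary only states that the variance is a function of SNR.

```python
# Sketch: covariance of the double-difference vector from per-measurement
# SNR values. The inverse-SNR variance mapping is an illustrative
# assumption, not the paper's exact model.
import numpy as np

def dd_covariance(snr_u, snr_r, pivot=0):
    """snr_u, snr_r: (n,) arrays of SNR for each satellite at the rover u
    and reference r. Returns the covariance of the (n-1) double
    differences formed against the pivot satellite."""
    var = 1.0 / np.concatenate([snr_u, snr_r])      # assumed SNR mapping
    n = len(snr_u)
    rows = []
    for k in range(n):
        if k == pivot:
            continue
        d = np.zeros(2 * n)
        d[k], d[pivot] = 1.0, -1.0                  # rover: rho_u^k - rho_u^l
        d[n + k], d[n + pivot] = -1.0, 1.0          # reference: -(rho_r^k - rho_r^l)
        rows.append(d)
    D = np.array(rows)                               # differencing matrix
    return D @ np.diag(var) @ D.T                    # R_DD = D Sigma D^T
```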

V. GPS-LIDAR INTEGRATION

In addition to using a LiDAR and a GPS receiver, we use an IMU on-board the UAV. The state vector consists of the following states:

$$\mathbf{x}^T = \begin{bmatrix} \mathbf{p}_g^{u\,T}, & \mathbf{v}_g^{u\,T}, & \mathbf{q}_g^{u\,T}, & \mathbf{b}_\omega^{\,T}, & \mathbf{b}_a^{\,T}, & \mathbf{q}_g^{i\,T} \end{bmatrix}, \qquad (10)$$

where $\mathbf{p}_g^u$, $\mathbf{v}_g^u$, and $\mathbf{q}_g^u$ are the position, velocity, and orientation respectively of the UAV in the local GPS frame; $\mathbf{b}_\omega$ and $\mathbf{b}_a$ are the IMU gyroscope and accelerometer biases; $\mathbf{q}_g^i$ is the orientation offset between the local GPS and IMU frames.


Fig. 5: Experimental setup for data collection. Our custom-made iBQR UAV mounted with a LiDAR, a GPS receiver and antennas, an IMU, and an on-board computer.

For the prediction step of the filter, we use a constant velocity model [25], which can be written as:

$$
\begin{bmatrix} \dot{\mathbf{p}}_g^u \\ \dot{\mathbf{v}}_g^u \\ \dot{\mathbf{q}}_g^u \\ \dot{\mathbf{b}}_\omega \\ \dot{\mathbf{b}}_a \\ \dot{\mathbf{q}}_g^i \end{bmatrix}
=
\begin{bmatrix} \mathbf{v}_g^u \\ -R^T(\mathbf{q}_g^u)\,\mathbf{b}_a - \mathbf{g} \\ -0.5\,\Xi(\mathbf{q}_g^u)\,\mathbf{b}_\omega \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix}
+
\begin{bmatrix} \mathbf{0} \\ \mathbf{0} \\ 0.5\,\Xi(\mathbf{q}_g^u) \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix} \boldsymbol{\omega}_m
+
\begin{bmatrix} \mathbf{0} \\ R^T(\mathbf{q}_g^u) \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \\ \mathbf{0} \end{bmatrix} \mathbf{a}_m, \qquad (11)
$$

where $R(\mathbf{q}_g^u)$ represents the rotation between the UAV frame and the local GPS frame; $\Xi(\mathbf{q}_g^u)$ expresses the time rate of change of $\mathbf{q}_g^u$ [26]. For the correction step, we use the LiDAR-based pose information discussed in Section II, position information from the GPS receiver, and orientation information from the IMU. Here, we use the covariance matrices $R_L$ for LiDAR-based position updates and $R_{\rho_{ur}^{DD}}$ for GPS double-difference measurements.

We use the iBQR UAV designed and built by our research group, shown in Figure 5. We use a Velodyne VLP-16 Puck Lite LiDAR, a u-blox LEA-6T GPS receiver connected to a Maxtena antenna, and an Xsens MTi-30 IMU. We use an AscTec MasterMind as the on-board computer to log the data from all these sensors. For our reference GPS receiver, we use a Trimble NetR9 receiver within a kilometer of our data collection sites.

Fig. 6: Position estimates from UKF, integrating GPS and LiDAR measurements. The filter position output (blue) resembles the actual trajectory more accurately than any individual source of GPS or LiDAR measurements.

We implement the UKF on an urban dataset collected on the University of Illinois at Urbana-Champaign campus. For our trajectory, we begin at the south-west corner of the Hydrosystems Building, head north, and keep moving along the building until we reach our starting position again. Figure 6 shows the output of our filter, accurately tracking the trajectory.

VI. CONCLUSION

This paper proposed a GPS-LiDAR integration approach for estimating the navigation solution of UAVs in urban environments. We used the on-board LiDAR point clouds in two ways: to estimate the odometry by matching consecutive point clouds, and to estimate the global pose by matching with an external 3D city model. We used the ICP algorithm for matching two point clouds.

We built a model for the error covariance in the LiDAR-based position estimates. We modeled the error ellipsoid as a function of the surface and edge feature points detected in the point cloud and experimentally verified the model for urban point clouds. For the GPS measurements, we used the individual pseudorange measurements from an on-board receiver and a reference receiver to create a vector of double-difference measurements. We eliminated NLOS satellites using the 3D city model. To construct the covariance matrix for the double-difference measurements, we used the SNR values of the individual pseudorange measurements.

Finally, we implemented a UKF on an urban dataset to integrate the measurements from LiDAR, GPS, and an IMU.

VII. ACKNOWLEDGEMENT

The authors would like to sincerely thank Kalmanje Krishnakumar and his group at NASA Ames for supporting this work under the grant NNX17AC13G.

REFERENCES

[1] M. Irsigler and B. Eissfeller, "Comparison of multipath mitigation techniques with consideration of future signal structures," in Proceedings of the International Technical Meeting of the Institute of Navigation (ION-GPS/GNSS 2003), 2003, pp. 2584–2592.

[2] S. Miura, L. T. Hsu, F. Chen, and S. Kamijo, "GPS Error Correction With Pseudorange Evaluation Using Three-Dimensional Maps," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3104–3115, Dec 2015.

[3] P. D. Groves, Z. Jiang, L. Wang, and M. K. Ziebart, "Intelligent urban positioning using multi-constellation GNSS with 3D mapping and NLOS signal detection," in Proceedings of the 25th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2012), Nashville, TN, USA, 2012, pp. 458–472.

[4] P. D. Groves, "Shadow matching: A new GNSS positioning technique for urban canyons," Journal of Navigation, vol. 64, no. 03, pp. 417–430, 2011.

[5] J. Zhang and S. Singh, "LOAM: Lidar Odometry and Mapping in Real-time," in Robotics: Science and Systems, vol. 2, 2014.

[6] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, and J. J. Leonard, "Past, present, and future of simultaneous localization and mapping: toward the robust-perception age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309–1332, 2016.

[7] S. Rusinkiewicz and M. Levoy, "Efficient variants of the ICP algorithm," in Proceedings Third International Conference on 3-D Digital Imaging and Modeling, 2001, pp. 145–152.

[8] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, "Comparing ICP variants on real-world data sets," Autonomous Robots, vol. 34, no. 3, pp. 133–148, 2013.

[9] A. Bachrach, S. Prentice, R. He, and N. Roy, "RANGE–Robust autonomous navigation in GPS-denied environments," Journal of Field Robotics, vol. 28, no. 5, pp. 644–666, 2011.

[10] W. Vega-Brown and N. Roy, "CELLO-EM: Adaptive sensor models without ground truth," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov 2013, pp. 1907–1914.

[11] A. Censi, "An accurate closed-form estimate of ICP's covariance," in Proceedings 2007 IEEE International Conference on Robotics and Automation, April 2007, pp. 3167–3172.

[12] A. Censi, "On achievable accuracy for range-finder localization," in Proceedings 2007 IEEE International Conference on Robotics and Automation, April 2007, pp. 4170–4175.

[13] A. Soloviev, D. Bates, and F. van Graas, "Tight coupling of laser scanner and inertial measurements for a fully autonomous relative navigation solution," Navigation, vol. 54, no. 3, pp. 189–205, 2007.

[14] Y. Lin, J. Hyyppa, and A. Jaakkola, "Mini-UAV-Borne LIDAR for Fine-Scale Mapping," IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 3, pp. 426–430, May 2011.

[15] M. Joerger and B. Pervan, "Measurement-level integration of carrier-phase GPS and laser-scanner for outdoor ground vehicle navigation," Journal of Dynamic Systems, Measurement, and Control, vol. 131, no. 2, pp. 021004–021014, 2009.

[16] A. Soloviev, "Tight coupling of GPS, laser scanner, and inertial measurements for navigation in urban environments," in 2008 IEEE/ION Position, Location and Navigation Symposium, May 2008, pp. 511–525.

[17] T. Suzuki, M. Kitamura, Y. Amano, and T. Hashizume, "High-accuracy GPS and GLONASS positioning by multipath mitigation using omnidirectional infrared camera," in 2011 IEEE International Conference on Robotics and Automation, May 2011, pp. 311–316.

[18] J.-i. Meguro, T. Murata, J.-i. Takiguchi, Y. Amano, and T. Hashizume, "GPS Multipath Mitigation for Urban Area Using Omnidirectional Infrared Camera," IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 1, pp. 22–30, March 2009.

[19] U. Iqbal, J. Georgy, W. F. Abdelfatah, M. J. Korenberg, and A. Noureldin, "Pseudoranges Error Correction in Partial GPS Outages for a Nonlinear Tightly Coupled Integrated System," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1510–1525, Sept 2013.

[20] A. Vu, A. Ramanandan, A. Chen, J. A. Farrell, and M. Barth, "Real-Time Computer Vision/DGPS-Aided Inertial Navigation System for Lane-Level Vehicle Navigation," IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 2, pp. 899–913, June 2012.

[21] A. Shetty and G. X. Gao, "GPS-LiDAR Sensor Fusion Aided by 3D City Models for UAVs," IEEE Transactions on Intelligent Transportation Systems, 2017, [Submitted].

[22] "Illinois Geospatial Data Clearinghouse," 2015. [Online]. Available: https://clearinghouse.isgs.illinois.edu/.

[23] M. Haklay and P. Weber, "OpenStreetMap: User-Generated Street Maps," IEEE Pervasive Computing, vol. 7, no. 4, pp. 12–18, Oct 2008.

[24] P. Misra and P. Enge, Global Positioning System: Signals, Measurements and Performance, 2nd ed. Massachusetts: Ganga-Jamuna Press, 2006.

[25] S. M. Weiss, "Vision based navigation for micro helicopters," Ph.D. dissertation, ETH Zurich, 2012.

[26] J. Kelly and G. S. Sukhatme, "Visual-inertial sensor fusion: localization, mapping and sensor-to-sensor self-calibration," The International Journal of Robotics Research, vol. 30, no. 1, pp. 56–79, 2011.

