  • Vision-Aided Complementary Filters forAttitude and Position Estimation of UAVs

    João Pedro Dias Madeiras

    Thesis to obtain the Master of Science Degree in

    Mechanical Engineering

    Supervisors: Prof. Paulo Jorge Coelho Ramalho Oliveira

    Prof. Carlos Baptista Cardeira

    Examination Committee

    Chairperson: Prof. Carlos Frederico Neves Bettencourt da Silva

    Supervisor: Prof. Paulo Jorge Coelho Ramalho Oliveira

    Members of the Committee: Prof. João Manuel Lage de Miranda Lemos

    Prof. Miguel Afonso Dias de Ayala Botto

    June 2019


  • Acknowledgments

Foremost, I would like to express my sincere gratitude to my supervisors, Prof. Paulo Oliveira and Prof. Carlos Cardeira, for all the continuous support, stimulating guidance, useful criticism and motivation in this work, which culminates my last five and a half years at IST.

I want to thank IDMEC and FCT for providing financial support along this work. To ACCAII, for giving me access to the space required to accomplish all the important results of this work, my thanks.

Special thanks should be given to the great friends that I made at IST. To Luís Martins, Tiago Bacelar and Nuno Guerreiro, for all the time spent, the patience, the enlightening discussions and the support.

    I would like to extend my thanks to all my friends from CDM, who have always been a major source of

    support when things would get a bit discouraging.

    I would also like to thank my parents and brother for being so supportive throughout my time here at IST,

    always believing and inspiring me. To them, I express my deepest gratitude.


  • Resumo

This dissertation addresses the design of two inertial (INS) and visual navigation architectures based on complementary Kalman filters for position and attitude estimation, with application to quadcopters, in environments without access to GPS.

Resorting to inertial measurements, vector observations and landmark positioning, the proposed complementary filters provide attitude estimates in an Euler-angle representation and position estimates relative to an inertial frame. Both architectures share the same attitude filter, which estimates the Euler angles and the angular velocity bias by exploiting a gyroscope noise model. The first architecture estimates position and velocity bias. The second architecture provides estimates of position, velocities and acceleration bias of the vehicle. Stability and performance properties for the operating conditions are derived resorting to Lyapunov theory, and the tuning procedure of the filters' parameters in the frequency domain is detailed.

Low computational burden was a priority in both navigation systems and, especially, in the vision algorithm, making it suitable for commercially available low-cost hardware.

Experimental results obtained in real time are presented and discussed, with both proposed solutions onboard an AR.Drone 2.0, using two different Simulink communication platforms. In the first communication platform, the computations are provided by an external computer connected to the drone in real time. In the second, the proposed architectures run in real time entirely onboard the vehicle's processor.

Keywords: Navigation Systems, Complementary Kalman Filters, Quadrotor, Stability, Lyapunov, Vision Sensors


  • Abstract

This thesis presents two navigation systems based on complementary Kalman filtering for position and attitude estimation, with an application to Unmanned Air Vehicles (UAVs), in Global Positioning System (GPS)-denied areas.

    Resorting to inertial measurements, vector observations and landmarks positioning, the proposed com-

    plementary filters provide attitude estimates resorting to Euler angles representation and position estimates

    relative to a fixed inertial frame. Both architectures share the same attitude filter, which estimates the Euler

    angles and rate gyro bias by exploiting a gyroscope noise model. In the first architecture, position and body

    velocity bias are estimated. The second architecture provides estimates on position, body velocities and accel-

eration bias. Stability and performance properties for the operating conditions are derived by Lyapunov theory, and the tuning procedure of the filters' parameters in the frequency domain is detailed.

    Requirements on low computational burden were a priority in both the navigation system and especially in

    the vision algorithm, making it suitable for off-the-shelf hardware.

Experimental results obtained in real time with an implementation of the proposed solutions on an AR.Drone 2.0, using two different built-in Simulink platforms, are presented and discussed. In the first approach, the computation is provided by a laptop connected to the UAV in real time. In the second, the proposed architectures run in real time fully onboard the vehicle's processor.

Keywords: Navigation Systems, Complementary Kalman Filters, UAV, Stability, Lyapunov, Vision Sensors


  • Contents

    Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

    Resumo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

    Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

    List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

    List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

    Nomenclature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii

    Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii

    1 Introduction 1

    1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    1.2 Topic Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

    1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

    1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

    1.5 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    2 State of the Art 7

    2.1 Position and Attitude filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    2.2 Navigation systems using cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

    3 Aerial Vehicle 9

    3.1 Parrot Ar Drone 2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    3.1.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    3.1.2 AR Drone Simulink Development-Kit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    3.1.3 AR.Drone 2.0 Quadcopter Embedded Coder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    3.2 Nonlinear model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

    4 Sensor based localization relative to a landmark 13

    4.1 Problem description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

    4.2 Homography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

    4.3 Transformation invariance and normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

    4.4 Pose and position from homography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    4.5 Image distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

    4.6 Markers segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18


  • 4.7 Point correspondence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

    4.8 Recursive point detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.9 Point rejection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.10 Image Resolution Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    5 Sensor modeling and calibration 23

    5.1 Accelerometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

    5.1.1 Sensor gain and bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

    5.2 Gyroscope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

    5.2.1 Temperature model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

    5.2.2 Noise model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

    5.3 Magnetometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

    6 Estimation and Control 31

    6.1 Controllability and Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

    6.2 Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

    6.2.1 Kalman Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

    6.2.2 Lyapunov Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    6.3 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

    6.3.1 LQR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

    7 Filters and Controller Design 37

    7.1 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

    7.1.1 Attitude Complementary Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

    7.1.2 Position Complementary Filter with velocity bias . . . . . . . . . . . . . . . . . . . . . . . . . 40

    7.1.3 Position Complementary Filter with acceleration bias . . . . . . . . . . . . . . . . . . . . . . 41

    7.2 Yaw Pre-Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

    7.3 Closed Loop Complementary Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    7.3.1 Attitude Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    7.3.2 Position Filter with velocity bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    7.3.3 Position Filter with acceleration bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    7.4 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

    7.4.1 Outer Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

    7.4.2 Inner Loop - Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

    8 Navigation System Implementation 51

    8.1 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    8.1.1 Camera tracking system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    8.1.2 Euler angles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    8.2 Filter coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

    8.2.1 Architecture 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


  • 8.2.2 Architecture 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

    8.3 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

    8.3.1 Outer Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

    8.3.2 Inner Loop - Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

    9 Experimental Results 57

    9.1 Camera calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

    9.2 Complementary Filters - DevKit Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

    9.2.1 DevKit - Architecture 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

    9.2.2 DevKit - Architecture 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

    9.3 Complementary Filters - Target Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    9.3.1 Target - Architecture 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    9.3.2 Target - Architecture 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

    9.3.3 Architectures Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

    10 Conclusions 81

    10.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

    Bibliography 83


  • List of Tables

    5.1 Gyroscope AV parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

8.1 LQR controller parameters for the outer loop. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

    8.2 Physical parameters for the Feedback Linearization Control . . . . . . . . . . . . . . . . . . . . . . . 55

    8.3 LQR inner loop controller parameters - Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

    9.1 Position filter parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

    9.2 Complementary Filter Parameters in Architecture 1 DevKit . . . . . . . . . . . . . . . . . . . . . . . 61

    9.3 Experimental RMSE, observation available - Arch. 1 DevKit . . . . . . . . . . . . . . . . . . . . . . . 64

    9.4 Complementary Filters Parameters in Architecture 2 DevKit . . . . . . . . . . . . . . . . . . . . . . . 65

    9.5 Experimental RMSE, observation available - Arch. 2 DevKit . . . . . . . . . . . . . . . . . . . . . . . 68

    9.6 Complementary Filter Parameters in Architecture 1 - Target. . . . . . . . . . . . . . . . . . . . . . . 72

    9.7 Experimental RMSE, observation available [20-145]sec - Arch. 1 Target . . . . . . . . . . . . . . . . 72

    9.8 Complementary Filters Parameters in Architecture 2 Target . . . . . . . . . . . . . . . . . . . . . . . 75

    9.9 Experimental RMSE, observation available [10-145]sec - Arch. 2 Target . . . . . . . . . . . . . . . . 76


  • List of Figures

    3.1 DevKit network architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    3.2 Target network architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    3.3 UAV Body and Inertial frames adopted. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

    4.1 Graphical representation of the environment under study and according to equation (4.1). . . . . 13

    4.2 Marker distinction procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

    5.1 Raw measurement gyroscope data versus temperature and the respective fitted polynomial of 5th

order (left). Raw measurement over time and the corrected gyroscope data (right). . . . . . . . . 25

5.2 Different noise processes on an Allan deviation plot [37]. . . . . . . . . . . . . . . . . . . . . . . . . 26

    5.3 Ar Drone 2.0 gyroscope Allan deviation for 80min data. . . . . . . . . . . . . . . . . . . . . . . . . . 28

    6.1 Kalman Filter block diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    6.2 Linear Quadratic Regulator block diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    7.1 Attitude filter block diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

    7.2 Position filter with velocity bias. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

    7.3 Position filter with acceleration bias. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

    7.4 Position control block diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

    8.1 Vision algorithm flowcharts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

    8.2 Global filter architecture 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

    8.3 Global filter architecture 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

9.1 Disposal of the marker at the lab where the experiment took place and its segmentation resorting to the developed vision algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

    9.2 Inertial Position computed using an Ar Drone 2.0 camera versus ground truth. . . . . . . . . . . . 59

    9.3 Orientation computed using Ar Drone 2.0 camera versus ground truth. . . . . . . . . . . . . . . . . 59

    9.4 Parrot Ar Drone 2.0 with camera and Inertial frame. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

    9.5 Attitude filter performance according to flight results. . . . . . . . . . . . . . . . . . . . . . . . . . . 61

    9.6 Complementary filter transfer functions - A1 DevKit . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

    9.7 Trajectory estimation results in X and Y axis - Arch. 1 DevKit . . . . . . . . . . . . . . . . . . . . . . 63

    9.8 Body linear velocities and respective bias estimation - Arch. 1 DevKit . . . . . . . . . . . . . . . . . 63


  • 9.9 Attitude estimation results - Arch. 1 DevKit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

    9.10 Body angular velocities and respective bias estimation - Arch. 1 DevKit . . . . . . . . . . . . . . . . 65

    9.11 Complementary filter transfer functions - A2 DevKit . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

    9.12 Trajectory estimation results in X and Y axis - Arch. 2 DevKit . . . . . . . . . . . . . . . . . . . . . . 67

    9.13 Body linear velocities and respective errors - Arch. 2 DevKit . . . . . . . . . . . . . . . . . . . . . . . 68

    9.14 Body linear and centripetal acceleration biases (left) and estimate of the body normalized gravi-

    tational vector (right) - Arch. 2 DevKit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

    9.15 Body angular velocities and respective bias estimation - Arch. 2 DevKit . . . . . . . . . . . . . . . . 69

    9.16 Attitude estimation results - Arch. 2 DevKit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

    9.17 Trajectory estimation results in X and Y axis - Arch. 1 Target . . . . . . . . . . . . . . . . . . . . . . . 73

    9.18 Body linear velocities and respective bias estimation - Arch. 1 Target . . . . . . . . . . . . . . . . . 73

    9.19 Yaw estimation from the Pre-filter using the camera and magnetometer yaw - Arch. 1 Target . . . 74

    9.20 Attitude estimation results - Arch. 1 Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

    9.21 Body angular velocities and respective bias estimation - Arch. 1 Target . . . . . . . . . . . . . . . . 75

    9.22 Trajectory estimation results in X and Y axis - Arch. 2 Target . . . . . . . . . . . . . . . . . . . . . . . 77

    9.23 Body linear velocities - Arch. 2 Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

    9.24 Body linear and centripetal acceleration biases (left) and estimate of the body normalized gravi-

    tational vector (right) - Arch. 2 Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

    9.25 Yaw estimate from the Pre-filter using the camera and magnetometer yaw - Arch. 2 Target . . . . . 78

    9.26 Attitude estimation results - Arch. 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

    9.27 Body angular velocities and respective bias estimate - Arch. 2 Target . . . . . . . . . . . . . . . . . . 80


  • Nomenclature

    Cartesian Coordinate Frames

    {B} Body frame.

    {C } Camera frame.

    {I } Inertial frame.

    {L} Landmarks frame.

    Greek symbols

Λ Decoupling matrix.

    λ Euler angles.

    ω Angular velocity of {B} with respect to {I} described in {B}.

    τ Moments applied in {B}.

    ∆t Sample time.

    δAD Percentage error of the estimated Allan Deviation.

γ Mahalanobis distance threshold.

    Ω Allan Variance output rate.

    φ Roll angle.

    ψ Yaw angle.

    ρ Finite positive constant.

    σ2 Variance.

    τ Averaging time.

    θ Pitch angle.

    θav Allan Variance output angle.

    ζ,Θ Gaussian zero-mean white noise.


  • y System output.

    Roman symbols

    µ Average vector of Cb and Cr .

a Acceleration.

    B Control-input matrix.

    b Bias.

    Cb Blue-difference chroma vector.

    Cr Red-difference chroma vector.

    C Covariance matrix.

    c Parameters of gyroscope temperature model.

    cd Diagonal matrix of drag coefficients.

CM Aerodynamic coefficient.

    Ctr b Controllability matrix.

    D Mahalanobis distance matrix of an image.

    e Observation noise.

    F State-transition matrix.

    f (xi n) Control independent vector field.

    Fd State-transition matrix.

    G Kalman Process noise input matrix.

    g Gravity vector in {I}.

    g (xi n) Control independent vector field.

    H Observation matrix.

    h(xi n) Output vector field.

    Hc Homography.

    I Identity matrix ∈R3.

    K Kalman gain matrix.

    K l qr LQR gain matrix.

Kc Intrinsic camera matrix.


  • m Earth’s magnetic field.

    O Observability matrix.

    P Kalman Error covariance matrix.

    p Position vector described in {I}.

    PL LQR covariance matrix.

    Q(λ) Transformation from angular rate to Euler angle rate.

    Q Kalman Process noise matrix.

    QL LQR State-weighting matrix.

    R Kalman Observation noise matrix.

rd = [k1 k2] Radial distortion coefficients of the camera.

RL LQR Control-weighting matrix.

S Calibration matrix of scale factors.

T Vector of the correlation times of the bias instability noise.

Tly Lyapunov transformation matrix.

    u Control input.

    v Linear velocities.

    Wc World point described in {C}.

    X State variables of a linear system.

x State variables.

    xc World points described in image coordinates.

    Y Luma component.

    z State variables.

    ŝ Estimate of vector s.

    I Moments of Inertia matrix.

    I Moment of Inertia.

    s̃ Estimation error of vector s.

    r̃pnp Reprojection error.

B Bias Instability noise coefficient.

c Cosine short-hand.

K Rate Random Walk noise coefficient.

m Mass.

Na Angle Random Walk noise coefficient.

ox, oy Camera principal point.

Q Quantisation noise coefficient.

R Rate Ramp noise coefficient.

    s Sine short-hand.

    T Thrust.

    t Tangent short-hand.

    u, v World points described in pixel coordinates.

    w,n Gaussian zero-mean white noise.

xd, yd Distorted image points.

R IB, RI Rotation matrix from {B} to {I} with Z-Y-X Euler convention.

    z−1 Time delay.

    L Distance from the UAV center of mass to the center of the rotor.

    Subscripts

    ω Angular velocity.

    a Acceleration.

    b Bias.

    p Position.

    v Velocity.

    Ω,av Allan Variance.

    φ,θ,ψ Euler components.

    a Accelerometer.

    ar w,r r, q,r r w,bi Allan Variances noises.

    c Calibrated.

    cam Camera algorithm.


  • hi Hard iron.

    i n Inner Loop.

    k Discrete-time instant tk , k ∈Z.

    mag Magnetometer.

    opt Optimal.

    r Raw.

    r ot Rotor.

    si ith element of vector s.

    sx , sy , sz x, y and z axis components of the 3 x 1 vector s.

    si Soft iron.

    x, y, z Cartesian components.

    Superscripts

    A s Vector s represented in coordinate frame {A}.

    Mathematical Operators

(s)× Cross-product matrix of the 3 x 1 vector s, defined as [0 −sz sy; sz 0 −sx; −sy sx 0].

ST Transpose of S.

S−1 Inverse of S.

ṡ First derivative of s, i.e. ds/dt.

diag(S) Diagonal of matrix S.

s̈ Second derivative of s, i.e. d²s/dt².


  • Acronyms

    3D Three Dimensional. 1, 8, 14, 17, 19

    A/D Analog-to-Digital. 24

    AD Allan Deviation. 26, 27

    ARW Angle Random Walk. 26–28

    ASC Autonomous Surface Craft. 1

    AV Allan Variance. xiii, 25, 26, 28, 61

    BI Bias Instability. 26–28

DevKit Simulink Development Kit. xi, xiii, xv, xvi, 10, 47, 51, 52, 54, 60–72, 75, 80

    DLT Direct Linear Transform. 15, 16, 19

    DSP Digital Signal Processor. 9, 10

    EKF Extended Kalman Filter. 7, 8

    EPnP Efficient Perspective and Point. 16

GPS Global Positioning System. 1–3, 7, 8, 63, 81

    IMU Inertial Measurement Unit. 2, 4, 7, 9, 10, 25, 81

    INS Inertial Navigation System. 2, 5, 7, 8, 62, 63, 81

    KF Kalman Filter. 7

    LQR Linear Quadratic Regulator. x, 34, 47, 48

    LTI Linear time-invariant. 3, 4, 31, 34, 45, 46, 48, 60, 65

    LTV Linear time-variant. 3, 46, 65, 81

    MEMS Micro-Electro-Mechanical System. 2, 23–25, 27


  • PF Particle Filter. 7

    PNP Perspective and Point. 16, 20, 57

    QN Quantisation noise. 26

    RPnP Robust Perspective and Point. 16, 19, 57

    RR Rate Ramp. 26–28

    RRW Rate Random Walk. 26, 27

    SVD Singular Value Decomposition. 15, 17

    TCP Transmission Control Protocol. 10

UAV Unmanned Air Vehicle. xv, 1–4, 7–13, 39–41, 47–49, 58, 60–62, 66, 69, 71, 72, 81, 82

    UDP User Datagram Protocol. 10, 60, 71

    UKF Unscented Kalman Filter. 7


  • Chapter 1

    Introduction

    In recent years, the technological development around autonomous vehicles has been on the rise, such as

    Autonomous Surface Crafts (ASCs) and Unmanned Air Vehicles (UAVs). There are many applications of this

technology, from personal entertainment to coastal surveillance operations, rescue missions, monitoring and security; for a more in-depth discussion see [1]. Nowadays, autonomous vehicles are more cost-effective, more productive and a better solution for many applications that would be complex, pricey and sometimes even impossible to accomplish by other means.

    Compared with terrestrial vehicles, such as cars, UAVs are able to move in the three-dimensional space. A

popular type of UAV is the quadrotor. Quadrotors feature high manoeuvrability and agility, which is ideal in a multiple-obstacle environment, and can be made strong enough to pick objects up and make deliveries.

Practical applications often demand stable and reliable localization and navigation systems, where the first provides information about position, velocity and orientation and the second uses the estimates to control the vehicle. Both systems are crucial. If any of them fails, the vehicle's stability can be jeopardized, resulting many times in material damage or loss and, sometimes, even in the loss of human life. To tackle this problem, these systems have redundant onboard sensors in case one of them fails, like on airplanes. Regular UAVs carry low-cost sensors, which exhibit, both in the short and the long term, non-ideal features such as strong non-linearities, bias and noise that degrade the measurements. Instead of using the data directly from the sensors in the navigation system, which is unreasonable for this type of sensor, combining different sensors in the correct way is a prerequisite for a better navigation system, a task denominated sensor fusion. The information that results from sensor fusion is better than that obtained when considering the sensors individually. For instance, due to the physical connection between some state variations, such as the Euler angles and the angular velocity obtained from the accelerometer and the gyroscope, respectively, sensor fusion is possible with sensors that measure different states.

When no Global Positioning System (GPS) is available (e.g. indoors, underground, underwater, in space, etc.), Vision-Aided Inertial Navigation Systems can provide precise state estimates for the 3D motion of a vehicle. Relying only on the velocity estimate given by the optical flow and on the accelerometer data to compute position is far from perfect over a long run, as it produces a drift in position over time due to bias and noise

    integration. At some point, it is essential to have information about the surrounding environment such as a


  • known object in space to compute position and even attitude without drift. Furthermore, high precision GPS

    receivers are expensive, and for specific applications, there is still not enough accuracy (e.g., dropping an ob-

    ject in a precise location) and vision sensors can improve the precision. Besides, cameras are small, lightweight

    and provide valuable and low-cost information on the surrounding environment [2]. Similarly to GPS, cameras

    are complementary in frequency content, relative to IMUs.

    1.1 Motivation

    The operation of UAVs in unstructured environments with reduced navigation aids, e.g. without access

    to global position systems, is a challenge in autonomous robotics. More complexity emerges when unstable

platforms are involved, as is the case of UAVs. Nowadays, modern UAVs aim at higher levels of autonomy and at performing flight stabilization. Precision at a lower cost is more essential than ever and still an ambitious goal.

In regular UAVs, Micro-Electro-Mechanical Systems (MEMS), which are low-cost, low-power-consumption sensors, are employed for the Inertial Navigation System (INS). Compared with high-end sensors like active ring-laser and interferometric fibre-optic sensors [3], MEMS suffer from strong non-linearities such as bias and noise that degrade the accuracy of the estimates.

Computer vision algorithms and visual sensors play a crucial role as the central solution in indoor and outdoor scenarios due to their ability to provide comprehensive information regarding the surrounding environment. However, the accuracy of these methods relies on several factors, such as image resolution, frame rate, viewing angle, illumination, various aerial image features and reference data. Furthermore, image algorithms can require high processing power, which is a problem for low-cost hardware.

In this regard, this work proposes to solve the real-time estimation and navigation problem of a UAV with only the help of onboard vision and inertial sensor resources, by blending efficient vision algorithms with optimal filters.

    1.2 Topic Overview

    For the purpose of controlling a system, whether linear or nonlinear, a minimum number of states has to

be known. Usually, information on these states is not attainable directly from the onboard sensors. For better

    understanding, consider the state space realization of a linearized version of a quadcopter attitude around the

Euler angles λ = [0 0 0]^T,

$$
\begin{bmatrix} \dot{\psi} \\ \dot{\theta} \\ \dot{\phi} \\ \dot{\omega}_x \\ \dot{\omega}_y \\ \dot{\omega}_z \end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} \psi \\ \theta \\ \phi \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}
+
\begin{bmatrix} 0 \\ 0 \\ 0 \\ \tau_\phi / I_x \\ \tau_\theta / I_y \\ \tau_\psi / I_z \end{bmatrix}
\tag{1.1}
$$


where λ = [ψ θ φ]^T are the Euler angles (yaw, pitch and roll), ωi are the angular velocities, τi the applied moments and Ii the moments of inertia of the vehicle.

In order to control the dynamic system (1.1) it is desirable to have access to both the Euler angles λ and the angular velocities ω. To get those states two sensors are normally used, e.g. a triad of accelerometers and a triad of rate gyros. The gyroscope measures the angular velocities ω directly, while the accelerometer measures the body accelerations. If the body is at rest, the vehicle is subject only to the Earth's gravitational force, and λ can be retrieved. Due to vehicle motion, this is not always true, and other accelerations, centripetal and linear, appear mixed in the sensor measurement. Consequently, computing λ directly from the accelerometer data is unreliable. Therefore, a fusion of the gyroscope and accelerometer data is inevitable, as λ can also be retrieved by gyroscope integration for small λ values.
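To make the fusion idea concrete, the sketch below shows a classical first-order complementary filter for a single (roll) axis, written in Python. It is only an illustration of the frequency-domain splitting between the two sensors, not the filter architecture designed in this thesis; the gain alpha and the sample time dt are illustrative values.

```python
import numpy as np

def complementary_roll(gyro_x, acc_y, acc_z, dt=0.005, alpha=0.98):
    """Fuse rate-gyro and accelerometer data for a single (roll) axis.

    gyro_x : angular rate about x [rad/s], one sample per step
    acc_y, acc_z : body accelerations [m/s^2]
    The low-frequency attitude comes from the accelerometer (gravity
    direction); the high-frequency content comes from integrating the gyro.
    """
    phi = 0.0
    estimates = []
    for wx, ay, az in zip(gyro_x, acc_y, acc_z):
        phi_acc = np.arctan2(ay, az)     # gravity-based roll (valid near rest/hover)
        phi_gyro = phi + wx * dt         # propagate the previous estimate with the gyro
        phi = alpha * phi_gyro + (1.0 - alpha) * phi_acc
        estimates.append(phi)
    return np.array(estimates)
```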

The same holds for the position dynamics of the vehicle. Most of the time the measurements are noisy and inaccurate. However, with an understanding of the underlying kinematics and dynamics that interconnect those states, estimates that are more accurate and less prone to noise are achievable.

The vehicle kinematics and dynamics are highly non-linear, so approximations, for instance the one presented in (1.1), are not optimal and may lead to poor estimates. To tackle the optimality and non-linearities of the system, a technique from Batista et al. [4] based on Lyapunov transformations is used to transform non-linear/linear time-variant systems into linear time-invariant (LTI) ones, thus ensuring that the observability properties present in the latter hold in the former system as well. The same holds for uniform asymptotic stability properties, although the optimality property of the classic Kalman filter is only valid if the transformed system is linear time-variant (LTV). With that in mind, the explicit optimal gains for the LTV system are retrieved by reversing the previous coordinate transformation. So, in the case of a non-linear system, the gains can only be said to be optimal if the variations from step k to step k+1 are virtually null.
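As a generic illustration of this idea (a sketch only, not the specific transformation of [4] or of the filters designed later), a Lyapunov state transformation z(t) = T(t)x(t), with T, its inverse and its derivative bounded, maps a time-varying system into an equivalent one:

```latex
% Illustrative Lyapunov transformation of an LTV system (generic sketch):
\dot{x}(t) = A(t)\,x(t), \qquad z(t) = T(t)\,x(t)
\;\Rightarrow\;
\dot{z}(t) = \underbrace{\left[\dot{T}(t) + T(t)A(t)\right] T^{-1}(t)}_{\bar{A}(t)}\, z(t).
% If T(t) can be chosen so that \bar{A}(t) is constant, observability and uniform
% asymptotic stability of the LTI design system carry over to the original
% time-varying system, which is the property exploited in this work.
```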

    1.3 Objectives

    The goal of this thesis is the study and development of navigation systems in GPS-denied environments

    based on efficient vision algorithms and complementary Kalman filtering for positioning and attitude esti-

mates, with an application to UAVs, and stability proved by Lyapunov means. More specifically, the aim is to fuse onboard inertial and image sensors to provide real-time position and attitude estimates for a low-cost quadrotor in a known surrounding environment. In the vision algorithm, landmarks are to be tracked to retrieve inertial position and orientation in the three-dimensional world, resorting to a monocular camera. Different architectures are to be tested experimentally on board a low-cost Parrot Ar Drone 2.0.

    1.4 Contributions

This thesis has the following contributions: i) an efficient vision algorithm is proposed to run onboard an Ar Drone 2.0 to solve the problem of pose estimation in GPS-denied conditions; ii) two coupled complementary filter solutions are proposed to estimate position and attitude with only the help of the onboard sensors and the remaining hardware for processing; and iii) all filters are proved to be uniformly asymptotically stable, and the optimal Kalman gains are given by an LTI system despite the non-linear/linear time-variant nature of the employed filters, due to the properties of Lyapunov transformations.

The work produced along this thesis resulted in participation in a robotics competition and in the acceptance of one scientific paper:

− Madeiras, J., Cardeira, C. and Oliveira, P. (2019). Vision-Aided Complementary Filters for Attitude and Position Estimation: Design, Analysis and Experimental Validation. Accepted at the 21st IFAC Symposium on Automatic Control in Aerospace, Cranfield, UK, August 2019

    − Madeiras, J., Martins, L., Cardeira, C. and Oliveira, P. (2019). Autonomous UAV Race: An onboard vision-based solution. ’FreeBots’ competition at Festival Nacional de Robótica 2019 in Gondomar, April 2019

    1.5 Thesis Outline

    The contributions and work proposed are detailed within the organization of this dissertation as follows:

    • Chapter 2: State of the Art

− A literature review of the different types of position and attitude filters, along with systems where cameras are used, is presented in this chapter.

    • Chapter 3: Aerial Vehicle

    − Details of the vehicle, model and communications exploited are discussed.

    • Chapter 4: Vision-based Localization Relative to Landmarks

− This chapter presents the landmark-based pose estimation algorithms applied, from the solution of the perspective-and-point problem to the marker segmentation algorithm.

    • Chapter 5: Sensor Modeling and Calibration

− All the calibration procedures for the IMU sensors are explained, and a gyroscope noise model is derived based on the Allan Variance to be afterwards applied in the attitude filter design.

    • Chapter 6: Estimation and Control Theory

    − Theoretical background of the estimation and control approaches considered is presented.

    • Chapter 7: Filters and Controller Design

− The design of the filters and controllers proposed is shown. Stability and performance properties are proven based on Kalman and Lyapunov theory.

    • Chapter 8: Navigation System Implementation

− The overall inertial navigation systems exploited are presented by coupling the designed filters, vision algorithms and the onboard sensors. Two different architectures are designed.


  • • Chapter 9: Experimental Results

− In this chapter, experimental results of the two INS architectures onboard an Ar Drone 2.0 are shown. The INS architectures are tested with two different sets of communications, the first running on a laptop connected to the vehicle and the second running fully onboard.

    • Chapter 10: Conclusions

− Concluding remarks on the primary outcomes of this work, and a discussion of directions for future work, are presented.


  • Chapter 2

    State of the Art

In recent years, the quick advancement of UAVs and the enhancement of low-cost sensors have inspired the development of new estimators. In this chapter, a literature review of the most prominent filters and INSs is presented.

    2.1 Position and Attitude filters

    The choice of the type of filter in INS ranges from traditional methodologies to recently proposed ap-

    proaches. In the classical approach, KF is applied in real-time applications to fuse data from different sensors

    in an optimal way for linearized systems. The idea is to get independent and redundant information about nav-

igation states, like position, attitude and velocity, and fuse them, with the requisite of having prior information about the covariance values of both the INS and the position sensor, as well as the statistical properties of each sensor system, see [5]. The Extended Kalman Filter (EKF) is a nonlinear filtering technique where a nonlinear model is considered, and a linearization occurs at each step to compute the Kalman gains. However, the linearization implicit in the EKF leads to

    performance degradation or even filter divergence if the assumption of local linearity is violated [6]. In [7], sev-

    eral other alternative techniques are introduced, namely Unscented Kalman Filters (UKF), Particle Filters (PF),

    Adaptive Methods and Nonlinear Observers [8], all with no stability guarantees.

    Despite all the mentioned methods, the INSs in this thesis are designed to be easily implemented in a low-

    cost, low-power consumption embedded hardware architecture. Therefore, high computational cost filters like

    EKF, UKF and PF with no stability guarantees were out of the equation. The complementary Kalman filters pro-

    posed in this work are time-varying, although the gains are computed offline using an auxiliary time-invariant

    design system that through a Lyapunov transformation turns into the proposed filter [9].

Much work has been done on this type of filter. In [4], an optimal time-varying position, velocity and body gravitational acceleration kinematic filter is proposed with proven global asymptotic stability. A coupled attitude and position system is proposed and tested onboard a catamaran [9], using an IMU and GPS as sensors. The architecture can estimate Euler angles, gyroscope bias, position and velocity. Its stability and

    performance properties were derived theoretically for the uncoupled structure.


  • 2.2 Navigation systems using cameras

The full state estimation problem, which is required to provide estimates for the control and mission systems, requires the use of a minimum set of measurements. The non-ideal characteristics of the inertial sensors, namely biases and noise, render inaccurate estimates. A case example is the integration of velocity: if an unbiased position measurement is not available, the system is not observable and the errors grow unbounded.

    Because of their complementary nature, GPS and INS are often the preferred core sensor suite to tackle the ob-

    servability issue. Nonetheless, the integration of other sensor combinations, such as GPS with vision-aided

    ([10], [11],[12]) and INS with computer vision [13], are also suitable for UAVs. Especially, when it comes to GPS-

denied environments, such as indoors or in low-signal areas, alternative aiding sensors are required. Factors such as the trade-off between price and accuracy of INS sensors and the susceptibility of GPS to spoofing and jamming have also contributed to expanded interest in different sensor combinations [14]. Additionally, measure-

    ments from cameras are not subject to electromagnetic interference and do not rely on weak radio signals as is

    the case of GPS [15].

    The literature based on vision aided systems for rigid body stabilization and estimation indicates possi-

    ble solutions [16]: i) keeping a known target visible along the system trajectories, ii) minimizing the required

    knowledge about the observed object, iii) guaranteeing convergence even in the presence of image noise and

    uncertainties in the camera model and iv) proving observability condition for attitude estimation [17]. The

    work in [18] proposes an EKF vision-aided navigation system that detects and tracks 3D landmarks to estimate

    pose and velocity even if GPS is lost. Experimental results were carried out offline with flight data from a UAV

    at the lab. An onboard EKF homography-based vision-aided inertial navigation system to provide drift-free

    velocity and attitude estimation is found in [18] using a UAV for outdoor experimental tests. A comparison of a

    variety of algebraic and iterative methods based on known point/landmarks can be seen in [19].


  • Chapter 3

    Aerial Vehicle

    The vehicle under study is a quadrotor. This chapter will present the dynamics and kinematics of these

vehicles, as well as the hardware and communications available on board the vehicle under study.

    3.1 Parrot Ar Drone 2.0

    The Parrot AR.Drone 2.0 is a low-cost quadcopter commercially available. It can be controlled with a smart-

phone application provided by Parrot. Parrot has also released a Software Development Kit (SDK), which enables developers to produce their own applications. This work uses two different built-in Simulink applications to communicate with the UAV. The first is the AR.Drone Simulink Development-Kit V1.1 (DevKit) [20], where a built-in inner control loop, already running on board the UAV, is used and limited angular references can be sent via Wi-Fi. The second is the AR.Drone 2.0 Quadcopter Embedded Coder [21], which is a Target

    solution that provides full access to the hardware, allowing the replacement of the internal control processes

    and for the full software to run on the UAV processor, avoiding external processing.

    3.1.1 Hardware

    The AR.Drone 2.0 is equipped with an OMAP 3630 1GHz ARM Cortex A8, which includes a TI C64x DSP

    800MHz, and four brushless motors, each controlled by an ATMEGA8L 8bit microcontroller. Two cameras, a

    720p 30fps frontal camera and a down-facing 240p 60fps camera for optical flow are installed on-board.

    Furthermore, the UAV has a low-cost 200Hz IMU consisting of a 3-axis accelerometer, a 3-axis gyroscope, a

    3-axis magnetometer, an ultrasound and a barometer that provides inertial measurements relative to the body

    frame. All this makes it possible to estimate angles, acceleration, height, and linear and angular velocities.

    3.1.2 AR Drone Simulink Development-Kit

    The communication architecture used with this application can be seen in Fig. 3.1. All the implementation

    runs on the host computer through Simulink, and it connects to the drone via Wi-Fi. The UAV receives input

controls of roll and pitch angles, yaw rate and vertical velocity, and retrieves a packet (Navdata) with all the sensor data and estimations of horizontal velocities, Euler angles and height, all via a UDP connection. The UAV

    also streams both cameras via a TCP connection, although it is not able to stream both at the same time.

    For this experiment, the only data used from the received packet was from the gyroscope, accelerometer,

    magnetometer, optical flow velocity, ultrasound and the front camera stream.
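For illustration only, the sketch below shows how a telemetry datagram could be read on the host side in Python; the port number and the packet layout are hypothetical placeholders and do not reflect the actual Navdata format used by the DevKit.

```python
import socket
import struct

# Hypothetical values for illustration only; the real Navdata port and packet
# layout are defined by the firmware/DevKit and must be taken from its documentation.
NAVDATA_PORT = 5554
PACKET_FORMAT = "<9f"   # assumed: 9 little-endian floats (gyro, accel, mag)

def read_navdata_once(timeout=1.0):
    """Receive one UDP telemetry datagram and unpack an assumed sensor layout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", NAVDATA_PORT))
    sock.settimeout(timeout)
    try:
        data, _addr = sock.recvfrom(4096)
        fields = struct.unpack_from(PACKET_FORMAT, data)
        gyro, accel, mag = fields[0:3], fields[3:6], fields[6:9]
        return gyro, accel, mag
    finally:
        sock.close()
```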

Figure 3.1: DevKit network architecture.

    3.1.3 AR.Drone 2.0 Quadcopter Embedded Coder

    The second communication approach in this thesis was to use the AR.Drone 2.0 Quadcopter Embedded

    Coder [21], which is a Target solution that allows running code on-board by compiling Simulink code and

    deploying it to the UAV board. This Target solution gives access to all the IMU data, to both cameras at the same

    time, as well to the ultrasound and barometer. The communication architecture with the Target application is

    depicted in Fig. 3.2.

    This provides the possibility to freely control the quadrotor since it gives full access to the hardware. It

    automatically compiles and deploys the Simulink model to run on the Arm Cortex 8. Unlike the factory software

    of the UAV, the optical flow is not implemented. The alternative is to use an external tracking system that

    provides body velocities or to design an optical flow algorithm to run on the DSP, which is out of the scope of

    this work.

Figure 3.2: Target network architecture.


  • 3.2 Nonlinear model

In order to better understand the deductions and solutions presented along this work, a summary of the kinematic and dynamic laws that govern a quadcopter is shown. The nomenclature and parameters regarding the vehicle are also introduced.

    The physical model of the AR.Drone 2.0 is based on the Newton-Euler formalism [22], where the dynamics

are obtained in the body-fixed frame {B} and in a fixed inertial frame {I}, see Fig. 3.3. Each motor produces a thrust

    Ti and torque τi that in combination results in the total thrust, the roll torque, the pitch torque and the yaw

    torque.

Figure 3.3: UAV Body and Inertial frames adopted.

Let {xB, yB, zB} describe the body-fixed frame and {xI, yI, zI} represent the inertial reference frame. The vector p = [x y z]^T denotes the UAV center of mass position in {I}. The orientation of the rigid body with respect to {I} is described as λ = [ψ θ φ]^T, where ψ, θ and φ are the yaw, pitch and roll Euler angles in the Z-Y-X convention, respectively. The angular rotation is denoted in {B} as ω = [ωx ωy ωz]^T. The full nonlinear dynamics of the vehicle is then given by:

$$ m\ddot{p} = m g + R_B^I F_d \tag{3.1a} $$

$$ I\dot{\omega} = -\omega \times I\omega + \tau \tag{3.1b} $$

where m is the mass, $g = [0\ 0\ g]^T$ the Earth gravitational force, $I = \mathrm{diag}([I_x\ I_y\ I_z])$ the inertia matrix of the rigid object, $F_d = [f_x\ f_y\ f_z - \sum_{i=1}^{4} T_i]^T \in \{B\}$ and $\tau \in \{B\}$ combine the principal non-conservative forces ($T_i$ the thrust of rotor i and $f_i$ external forces that are neglected) and moments applied to the quadrotor airframe, respectively. $R_B^I$ is the rotation matrix from {B} to {I} with the Z-Y-X Euler angles notation:

$$
R_B^I =
\begin{bmatrix}
c_\psi c_\theta & c_\psi s_\theta s_\phi - s_\psi c_\phi & c_\psi s_\theta c_\phi + s_\psi s_\phi \\
s_\psi c_\theta & s_\psi s_\theta s_\phi + c_\psi c_\phi & s_\psi s_\theta c_\phi - c_\psi s_\phi \\
-s_\theta & c_\theta s_\phi & c_\theta c_\phi
\end{bmatrix}
\tag{3.2}
$$

    with c and s being the shorthand forms for the trigonometric cosine and sine functions, respectively.
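For reference, a direct Python transcription of (3.2) is given below; it is a plain sketch useful for checking the angle convention, with angles assumed to be in radians.

```python
import numpy as np

def rotation_body_to_inertial(psi, theta, phi):
    """R_B^I for the Z-Y-X (yaw-pitch-roll) Euler convention of equation (3.2)."""
    c, s = np.cos, np.sin
    return np.array([
        [c(psi)*c(theta), c(psi)*s(theta)*s(phi) - s(psi)*c(phi), c(psi)*s(theta)*c(phi) + s(psi)*s(phi)],
        [s(psi)*c(theta), s(psi)*s(theta)*s(phi) + c(psi)*c(phi), s(psi)*s(theta)*c(phi) - c(psi)*s(phi)],
        [-s(theta),       c(theta)*s(phi),                        c(theta)*c(phi)],
    ])
```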

    To get the total thrust and total torque of the vehicle it is assumed that each torque τi generated by each

rotor is proportional to its thrust, resulting in the following input matrix:

$$
\begin{bmatrix} T \\ \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1 & 1 \\
L & -L & -L & L \\
L & L & -L & -L \\
-C_1 & C_2 & -C_3 & C_4
\end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{bmatrix}
\tag{3.3}
$$

where L is the distance from the UAV center of mass to the center of the rotor, which in this particular vehicle is always the same, and $C_M = [C_1\ C_2\ C_3\ C_4]$ the aerodynamic coefficients. Combining (3.1)-(3.3), the nonlinear

    dynamical model can be represented as [23]:

$$
\begin{aligned}
\dot{x} &= v_x \\
\dot{y} &= v_y \\
\dot{z} &= v_z \\
\dot{v}_x &= \left(s_\psi s_\theta c_\phi - c_\psi s_\phi\right)\frac{T}{m} \\
\dot{v}_y &= \left(c_\psi s_\theta c_\phi + s_\psi s_\phi\right)\frac{T}{m} \\
\dot{v}_z &= -c_\theta c_\phi \frac{T}{m} + g \\
\dot{\psi} &= \frac{s_\phi}{c_\theta}\,\omega_y + \frac{c_\phi}{c_\theta}\,\omega_z \\
\dot{\theta} &= c_\phi\,\omega_y - s_\phi\,\omega_z \\
\dot{\phi} &= \omega_x + s_\phi t_\theta\,\omega_y - c_\phi t_\theta\,\omega_z \\
\dot{\omega}_x &= \frac{I_y - I_z}{I_x}\,\omega_y\omega_z + \frac{I_r}{I_x}\,\omega_y\Omega + \frac{\tau_\phi}{I_x} \\
\dot{\omega}_y &= \frac{I_z - I_x}{I_y}\,\omega_x\omega_z + \frac{I_r}{I_y}\,\omega_y\Omega + \frac{\tau_\theta}{I_y} \\
\dot{\omega}_z &= \frac{I_x - I_y}{I_z}\,\omega_x\omega_y + \frac{\tau_\psi}{I_z}
\end{aligned}
\tag{3.4}
$$

with $\Omega = \omega_{rot,2} + \omega_{rot,4} - \omega_{rot,1} - \omega_{rot,3}$ being the rotor gyroscopic effect, which has a relevant effect on aggressive maneuvers, and t the shorthand form for the trigonometric tangent function.
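A minimal numerical sketch of the model (3.4) is given below, written as a state-derivative function suitable for simulation. The mass, gravity and inertia values are illustrative placeholders, the neglected external forces fi are omitted, and the rotor gyroscopic term Ω is dropped for simplicity.

```python
import numpy as np

def quadrotor_dynamics(state, inputs, m=0.45, g=9.81, I=(0.0023, 0.0023, 0.0046)):
    """State derivative of the simplified model (3.4).

    state  = [x, y, z, vx, vy, vz, psi, theta, phi, wx, wy, wz]
    inputs = [T, tau_phi, tau_theta, tau_psi]
    Mass, gravity and inertia values are illustrative placeholders; the rotor
    gyroscopic term (Omega) is neglected in this sketch.
    """
    _, _, _, vx, vy, vz, psi, theta, phi, wx, wy, wz = state
    T, tau_phi, tau_theta, tau_psi = inputs
    Ix, Iy, Iz = I
    c, s, t = np.cos, np.sin, np.tan

    return np.array([
        vx, vy, vz,
        (s(psi)*s(theta)*c(phi) - c(psi)*s(phi)) * T / m,
        (c(psi)*s(theta)*c(phi) + s(psi)*s(phi)) * T / m,
        -c(theta)*c(phi) * T / m + g,
        (s(phi)/c(theta)) * wy + (c(phi)/c(theta)) * wz,   # psi_dot
        c(phi)*wy - s(phi)*wz,                             # theta_dot
        wx + s(phi)*t(theta)*wy - c(phi)*t(theta)*wz,      # phi_dot, signs as written in (3.4)
        (Iy - Iz)/Ix * wy*wz + tau_phi/Ix,
        (Iz - Ix)/Iy * wx*wz + tau_theta/Iy,
        (Ix - Iy)/Iz * wx*wy + tau_psi/Iz,
    ])
```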


  • Chapter 4

    Sensor based localization relative to a

    landmark

In this chapter, a camera localization system relative to a static landmark is designed. In indoor environments, the absence of a global positioning system means that the vehicle can rely only on onboard sensors such as the optical flow, which, by integration, will produce position drift over time. Using a localization system relative to a landmark avoids the possible growth of sensor noise effects by allowing sensor biases to be estimated.

    4.1 Problem description

    The environment under study is depicted in Fig. 4.1, where {C } and {L} are the UAV’s camera and landmark

    orthogonal frames, respectively. In order to transform a vector position of a point from frame {C } to frame {L},

a translation and a rotation have to be applied. Let Pli be a landmark point. The vector LPli relates the translation vector between frames {L} and {C}, LPR, with the position vector of the landmark point in frame {C}, CPli, according to:

$$ {}^L P_{l_i} = {}^L P_R + R_C^L\, {}^C P_{l_i} \tag{4.1} $$

    where RLC is the rotation matrix from frame {C } to {L}.

Figure 4.1: Graphical representation of the environment under study and according to equation (4.1).

  • To relate a landmark point LP li with the camera image plane a mathematical model must be used. The

    most basic and practical model that represents this relationship is the pinhole camera model [24]. This model

describes the camera aperture as a point and assumes no lenses to focus light. Most of the time, cameras suffer from non-linearities such as radial and tangential distortion that, if severe, need to be taken into account. The pinhole model defines that a 3-D point in the camera's frame, Wc = [Xc, Yc, Zc]^T, is mapped into the projection plane to a point xc = [x, y, 1]^T, defined by the intersection of the line that goes from the camera center point, C = [0, 0, 0]^T, to Wc with the image plane, zc = 1. So xc can be defined as:

$$ x_c = \left[\, f_x \frac{X_c}{Z_c},\ f_y \frac{Y_c}{Z_c},\ 1 \,\right]^T \tag{4.2} $$

which is the projection from world to image coordinates. Transforming to pixel coordinates gives:

$$ [u,\ v] = [x + o_x,\ y + o_y] \tag{4.3} $$

    where [u, v] are the pixel coordinates and [ox ,oy ] are the coordinates of the principal point. Define a more

    general mapping using homogeneous coordinates as:

$$
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \frac{1}{Z_c}
\begin{bmatrix}
f_x & 0 & o_x & 0 \\
0 & f_y & o_y & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
= \frac{1}{Z_c}\, K_c\, [I\,|\,0]\, W_c
\tag{4.4}
$$

    where Kc is the intrinsic matrix of the camera.
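A compact sketch of the mapping in (4.4) follows; the focal lengths and principal point below are illustrative placeholders rather than the calibrated values of the AR.Drone camera.

```python
import numpy as np

def project_point(W_c, fx=700.0, fy=700.0, ox=320.0, oy=180.0):
    """Project a 3-D point expressed in the camera frame to pixel coordinates,
    following the pinhole mapping of (4.4). Intrinsics are illustrative."""
    Kc = np.array([[fx, 0.0, ox],
                   [0.0, fy, oy],
                   [0.0, 0.0, 1.0]])
    Xc, Yc, Zc = W_c
    uvw = Kc @ np.array([Xc, Yc, Zc]) / Zc   # homogeneous image point, scaled by 1/Zc
    return uvw[0], uvw[1]                    # (u, v) in pixels
```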

Rewriting (4.1) in terms of the landmark position in the camera frame and extending it to the image plane using (4.4) gives:

$$
\begin{bmatrix} u_{l_i} \\ v_{l_i} \\ 1 \end{bmatrix}
= \frac{1}{{}^C P_{l_{z_i}}}\, K_c\, [I\,|\,0]\, R_C^{L\,T}\!\left({}^L P_{l_i} - {}^L P_R\right)
\tag{4.5}
$$

where CPlzi is the Z component of the point Wc, which is the landmark point position relative to {C}.

    4.2 Homography

    If all the landmark points are co-planar in 3D space, they can be mapped by a homography, a nonsingular

3×3 matrix, Hc. Since the points are co-planar, one axis of the landmark's frame will always have a null component, so it is possible to remove the respective column of the transformation matrix. Rewriting (4.5) in image

    coordinates in a more general form:

$$
\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}_i
=
\begin{bmatrix}
R_C^{L\,T} & -R_C^{L\,T}\,{}^L P_R \\
0 & 1
\end{bmatrix}
\begin{bmatrix} {}^L P_{l_i} \\ 1 \end{bmatrix}
\Bigg|_{{}^L P_{l_{z_i}}=0}
=
\begin{bmatrix}
r_{11} & r_{12} & -{}^C P_{R_x} \\
r_{21} & r_{22} & -{}^C P_{R_y} \\
r_{31} & r_{32} & -{}^C P_{R_z} \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} {}^L P_{l_{x_i}} \\ {}^L P_{l_{y_i}} \\ 1 \end{bmatrix}
= H_c
\begin{bmatrix} {}^L P_{l_{x_i}} \\ {}^L P_{l_{y_i}} \\ 1 \end{bmatrix}
\tag{4.6}
$$

    Expanding (4.6) in inhomogeneous coordinates X ′ = XC /ZC and Y ′ = YC /ZC :

$$
X' = \frac{h_{11}\,{}^L P_{l_x} + h_{12}\,{}^L P_{l_y} + h_{13}}{h_{31}\,{}^L P_{l_x} + h_{32}\,{}^L P_{l_y} + h_{33}},
\qquad
Y' = \frac{h_{21}\,{}^L P_{l_x} + h_{22}\,{}^L P_{l_y} + h_{23}}{h_{31}\,{}^L P_{l_x} + h_{32}\,{}^L P_{l_y} + h_{33}}
\tag{4.7}
$$

    Then the set of equations becomes

$$
\begin{bmatrix}
0^T & -{}^L P_{l_i}^T & y_i\,{}^L P_{l_i}^T \\
{}^L P_{l_i}^T & 0^T & -x_i\,{}^L P_{l_i}^T
\end{bmatrix}
\begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix}
= 0 \;\equiv\; A_i\, h = 0
\tag{4.8}
$$

Equation (4.8) holds for any homogeneous representation $[x_i, y_i, 1]$ of the point $x$ [24]. Although the dimension of $A$ is $8\times 9$, $\mathrm{rank}(A) = 8$, so $A$ has a one-dimensional null-space which provides a solution for $h$. Such a solution is only determined up to a non-zero scale factor. However, $H_c$ is itself only determined up to scale, so the solution for $h$ gives the right $H_c$. A scale restriction can be imposed, for instance on its norm, $\|h\| = 1$. A minimum of 4 non-collinear points is required to solve (4.8). If more than four points are available, the solution is over-determined and a singular value decomposition (SVD) can be used to find the vector $h$ corresponding to the smallest singular value, known as the Direct Linear Transform (DLT). It minimizes the algebraic distance $\|Ah\|$. Other linear and non-linear algorithms that minimize the geometric error exist and are considered next.
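A minimal NumPy sketch of the DLT just described is given below; it assembles the two rows of (4.8) per correspondence and takes the right singular vector associated with the smallest singular value. Point normalization (Section 4.3) is omitted here for brevity.

    import numpy as np

    def dlt_homography(pts_plane, pts_image):
        """Estimate the homography H_c from >= 4 planar points (eqs. 4.7-4.8).

        pts_plane : (n, 2) landmark coordinates on the plane (z_L = 0)
        pts_image : (n, 2) corresponding image coordinates
        Returns the 3x3 homography, scaled so that ||h|| = 1.
        """
        A = []
        for (X, Y), (x, y) in zip(pts_plane, pts_image):
            P = [X, Y, 1.0]
            A.append([0, 0, 0, -P[0], -P[1], -P[2], y * P[0], y * P[1], y * P[2]])
            A.append([P[0], P[1], P[2], 0, 0, 0, -x * P[0], -x * P[1], -x * P[2]])
        A = np.asarray(A)
        # The right singular vector of the smallest singular value minimizes ||A h||
        _, _, Vt = np.linalg.svd(A)
        h = Vt[-1]
        return h.reshape(3, 3)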

    4.3 Transformation invariance and normalization

Algorithms that minimize the geometric error are invariant to similarity transformations [24]. On the other hand, the DLT algorithm is not, since it minimizes an algebraic error. To overcome this problem, a normalizing transformation should be applied to the points before the DLT algorithm [24]. The objective of this transformation is to undo the effect of the arbitrary selection of the origin and scale in the image coordinate frame by effectively choosing a canonical frame instead.

Isotropic scaling. Here both x and y coordinates are scaled equally: the points are translated so that their centroid lies at the origin and scaled so that the average distance of a point x from the origin is √2.

    Non-isotropic scaling. Experimental results suggest that non-isotropic scaling does not lead to significantly

    better results than isotropic scaling so it will not be considered.
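The isotropic normalization amounts to a similarity transform applied to each point set before the DLT; a possible sketch follows, with the standard denormalization step noted as a comment.

    import numpy as np

    def isotropic_normalization(pts):
        """Similarity transform that centers the points and scales them so the
        mean distance to the origin is sqrt(2) (normalized DLT pre-conditioning).

        pts : (n, 2) array of 2-D points
        Returns (pts_normalized, T) with pts_norm_hom = T @ pts_hom.
        """
        centroid = pts.mean(axis=0)
        mean_dist = np.mean(np.linalg.norm(pts - centroid, axis=1))
        s = np.sqrt(2.0) / mean_dist
        T = np.array([[s, 0.0, -s * centroid[0]],
                      [0.0, s, -s * centroid[1]],
                      [0.0, 0.0, 1.0]])
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
        pts_n = (T @ pts_h.T).T
        return pts_n[:, :2], T

    # After running the DLT on normalized points (both plane and image sets),
    # the homography is denormalized as H = inv(T_image) @ H_norm @ T_plane.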

There are also iterative techniques that minimize a specific cost function, such as the EPnP algorithm. According to [24], this is unfortunate, because iterative techniques have certain disadvantages compared to linear algorithms such as the normalized DLT, namely:

(i) They are slower.

(ii) An initial estimate is usually required.

(iii) They risk getting stuck in a local minimum or even not converging.

(iv) Selecting a stopping criterion can be difficult.

On the other hand, they are less prone to instability in the presence of noise, especially when n ≤ 5. To get the best of both worlds, a non-iterative method was proposed in [25], the Robust O(n) solution to the PnP problem (RPnP). It works well for both non-redundant (n ≤ 5) and redundant point sets, retrieves robust results, and its computational complexity grows linearly with n.

RPnP. The algorithm is divided into four parts: (i) selecting a rotation axis; (ii) determining the rotation axis using the least-squares residual; (iii) solving for the rotation angle and translation vector; and (iv) determining the camera pose.

(i) Randomly select n edges produced by the image points; the one with the longest projection length $\|P_{i0} P_{j0}\|$ is selected, because longer edges are less affected by noise added to the endpoints. This edge becomes a rotation axis, based on which a new orthogonal coordinate frame {A} is created. The origin of {A} lies on the selected edge, and the direction of its Y-axis is that of the selected vector $\overrightarrow{P_{i0} P_{j0}}$. This step is equivalent to the normalization part in the DLT solver.

(ii) Create (n−2) subsets of the n reference points, such as $\{P_{i0} P_{j0} P_k \,|\, k \neq i0,\, k \neq j0\}$, and then apply the 3-point constraint to each subset, yielding (n−2) polynomials of order 4 [26]:

$$\begin{aligned} f_1(x) &= a_1 x^4 + b_1 x^3 + c_1 x^2 + d_1 x + e_1 = 0 \\ f_2(x) &= a_2 x^4 + b_2 x^3 + c_2 x^2 + d_2 x + e_2 = 0 \\ &\;\;\vdots \\ f_{n-2}(x) &= a_{n-2} x^4 + b_{n-2} x^3 + c_{n-2} x^2 + d_{n-2} x + e_{n-2} = 0 \end{aligned} \quad (4.9)$$

To find the local minima of (4.9), a least-squares residual is used, defining the cost function $F = \sum_{i=1}^{n-2} f_i^2(x)$. The minima of F can be determined by finding the roots of its derivative, $F' = \sum_{i=1}^{n-2} f_i(x) f_i'(x) = 0$; x is then determined by the eigenvalue method [27]. Having x, the depths of $P_{i0}$ and $P_{j0}$ can be calculated according to [28], and then the rotation axis of {A} is computed as $y_A = \overrightarrow{P_{i0} P_{j0}} / \|P_{i0} P_{j0}\|$. For each minimum found there will be a camera pose and translation vector to compute.


(iii) When the axis $y_A$ of {A} is determined, the rotation matrix from {A} to {C} can be expressed as:

$$R_A^C = R_{ar} R_y(\alpha) \quad (4.10)$$

where $R_{ar}$ is an arbitrary rotation matrix whose second column equals the axis $y_A$ and $R_y(\alpha)$ denotes a rotation of α degrees around the Y-axis. The projection from the 3-D points to the 2-D normalized image plane can be given by:

$$\lambda_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = R_A^C \begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \quad (4.11)$$

where $t = [t_x, t_y, t_z]^T$ is the translation vector from {A} to {C}. Rearranging (4.11) into a $2n \times 6$ homogeneous linear equation system gives:

$$\begin{bmatrix} B_{2n\times 1} & C_{2n\times 1} & D_{2n\times 4} \end{bmatrix} \begin{bmatrix} c \\ s \\ t_x \\ t_y \\ t_z \\ 1 \end{bmatrix} = 0 \quad (4.12)$$

where $c = \cos\alpha$ and $s = \sin\alpha$. The solution of this system can then be retrieved using the SVD.

(iv) Due to noise in the system, the solution of (4.12) may not meet the constraint $c^2 + s^2 = 1$, so this constraint is imposed on the rotation matrix $R_A^C$ in (4.10). In order to normalize $R_A^C$, first an estimate of the 3-D coordinates of the reference points is made in frame {C} using the un-normalized $R_A^C$ and t. With that estimate, the normalized camera pose is then retrieved by a standard 3-D alignment scheme [29].

4.4 Pose and position from homography

The homography matrix $H_c$ (4.6) is scaled by an unknown factor. Since columns 1 and 2 of $H_c$ are columns of a rotation matrix, their norms must be equal to one, which gives:

$$M = {}^C P_{l_{z_i}} H_c = \begin{bmatrix} \frac{h_1}{\|h_1\|} & \frac{h_2}{\|h_2\|} & \frac{h_3}{\frac{\|h_1\| + \|h_2\|}{2}} \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & -{}^C P_{R_x} \\ r_{21} & r_{22} & -{}^C P_{R_y} \\ r_{31} & r_{32} & -{}^C P_{R_z} \end{bmatrix} \quad (4.13)$$

The third column of the rotation matrix $R_{LC}^T$ is computed as follows:

$$R_{LC}^T = \begin{bmatrix} M_1 & M_2 & M_1 \times M_2 \end{bmatrix} \quad (4.14)$$

where the orthogonality property between the three columns of a rotation matrix is applied. Finally, the position vector ${}^L P_R$ is given by:

$${}^L P_R = -R_{LC} M_3 \quad (4.15)$$

Since the rotation matrix is the same as defined in (3.2), the Euler angles λ can be computed as:

$$\begin{aligned} \psi &= \arctan2\!\left( \frac{r_{21}}{c\theta},\, \frac{r_{11}}{c\theta} \right) \\ \theta &= \arctan2\!\left( -r_{31},\, \sqrt{r_{11}^2 + r_{21}^2} \right) \\ \phi &= \arctan2\!\left( \frac{r_{32}}{c\theta},\, \frac{r_{33}}{c\theta} \right) \end{aligned} \quad (4.16)$$

for $\theta \in \left[ -\frac{\pi}{2}, \frac{\pi}{2} \right]$. In the case of $\theta = \pm\frac{\pi}{2}$, two solutions arise:

$$\begin{cases} \psi = 0 \\ \theta = \frac{\pi}{2} \\ \phi = \arctan2(r_{21}, r_{22}) \end{cases} , \qquad \begin{cases} \psi = 0 \\ \theta = -\frac{\pi}{2} \\ \phi = -\arctan2(r_{21}, r_{22}) \end{cases} \quad (4.17)$$
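A compact NumPy sketch of (4.13)-(4.16) is shown below; it assumes the homography scale and sign have already been chosen so that the recovered depth is positive, and it skips the θ = ±π/2 special case of (4.17).

    import numpy as np

    def pose_from_homography(Hc):
        """Recover R_LC and the position L_P_R from a planar homography (4.13)-(4.16)."""
        h1, h2, h3 = Hc[:, 0], Hc[:, 1], Hc[:, 2]
        # Normalize the first two columns; the translation column uses the mean norm
        M1 = h1 / np.linalg.norm(h1)
        M2 = h2 / np.linalg.norm(h2)
        M3 = h3 / (0.5 * (np.linalg.norm(h1) + np.linalg.norm(h2)))
        R_LC_T = np.column_stack([M1, M2, np.cross(M1, M2)])   # eq. (4.14)
        R_LC = R_LC_T.T
        P_R = -R_LC @ M3                                        # eq. (4.15)
        # Euler angles (4.16), away from the theta = +/- pi/2 singularity
        psi = np.arctan2(R_LC_T[1, 0], R_LC_T[0, 0])
        theta = np.arctan2(-R_LC_T[2, 0], np.hypot(R_LC_T[0, 0], R_LC_T[1, 0]))
        phi = np.arctan2(R_LC_T[2, 1], R_LC_T[2, 2])
        return R_LC, P_R, np.array([phi, theta, psi])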

4.5 Image distortion

Many lens systems used on video cameras suffer from distortion. Distortion is part of the intrinsic parameters of the camera. The AR.Drone 2.0 front camera suffers from high radial distortion and negligible tangential distortion. When a frame is captured, the points are not the real ones but a distorted version of them, $x_d$ and $y_d$. Radial distortion can be modelled by [24]:

$$\begin{bmatrix} x_d \\ y_d \end{bmatrix} = \begin{bmatrix} x\left(1 + k_1 r^2 + k_2 r^4\right) \\ y\left(1 + k_1 r^2 + k_2 r^4\right) \end{bmatrix} \quad (4.18)$$

where $r = \sqrt{x^2 + y^2}$ and $(k_1, k_2)$ are the radial distortion coefficients of the lenses. The solution of (4.18) can be obtained using the Newton-Raphson method, with quadratic-order convergence, or by making use of a distortion look-up table.
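As an illustration of the Newton-Raphson inversion of (4.18), the sketch below solves for the undistorted radius and rescales the coordinates; the number of iterations is an arbitrary choice of the example.

    import numpy as np

    def undistort_point(xd, yd, k1, k2, iters=10):
        """Invert the radial distortion model (4.18) with Newton-Raphson on the radius.

        (xd, yd) are distorted normalized image coordinates; k1, k2 the radial
        coefficients. Returns the undistorted (x, y).
        """
        rd = np.hypot(xd, yd)
        if rd == 0.0:
            return xd, yd
        r = rd  # initial guess: undistorted radius equals the distorted one
        for _ in range(iters):
            g = r * (1.0 + k1 * r**2 + k2 * r**4) - rd      # residual of (4.18)
            dg = 1.0 + 3.0 * k1 * r**2 + 5.0 * k2 * r**4    # derivative w.r.t. r
            r -= g / dg
        scale = r / rd
        return xd * scale, yd * scale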

4.6 Markers segmentation

To detect specific points in space, information about those points must be known in order to segment them from the background. One easy and efficient property is colour. A colour-based threshold in the YCbCr colour space was exploited. The YCbCr space has the particular benefit of having the luma channel Y separated from the colour channels Cb and Cr, providing better colour detection results even under different luminosity conditions.

The simplest way of distinguishing the colour of one object from another would be to pick its characteristics, compute the distance to all the mapped objects, and pick the closest object, below a certain threshold, as the correct association; then proceed with the rest of the image until all the objects have been associated. This leads to a problem [30]: in a probabilistic setting, a marker (both observed and mapped) is represented by its mean and covariance. By computing the Euclidean distance, the uncertainties are simply ignored. To tackle this problem, another type of distance can be found in the literature, the Mahalanobis distance, which has a scale-invariant property.

According to [31], the Mahalanobis distance is given by:

$$D(z_c, \mu) = \sqrt{(z_c - \mu)^T C^{-1} (z_c - \mu)} \quad (4.19)$$

where C is the estimated covariance matrix of the marker's colour, μ is the estimated average and $z_c$ represents the observed vector of Cb and Cr values. Wu and Ai [32] define these parameters as:

$$z_c = \begin{bmatrix} Cb & Cr \end{bmatrix}, \qquad \mu = \begin{bmatrix} \mu_{Cb} & \mu_{Cr} \end{bmatrix}, \qquad C = \begin{bmatrix} C_{CbCb} & C_{CbCr} \\ C_{CrCb} & C_{CrCr} \end{bmatrix} \quad (4.20)$$

With this method it is possible to segment a specific colour from an image efficiently and effectively by performing the following statistical test:

$$D(z_c, \mu)^2 \leq \gamma \quad (4.21)$$

where γ is the threshold below which an association is accepted as correct.
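A vectorized sketch of the test (4.19)-(4.21) over a whole frame could look as follows; the channel layout and the threshold value are assumptions of the example, not fixed choices of the thesis.

    import numpy as np

    def mahalanobis_mask(cb, cr, mu, C, gamma):
        """Colour segmentation by the squared Mahalanobis test (4.19)-(4.21).

        cb, cr : 2-D arrays with the Cb and Cr channels of the frame
        mu     : length-2 mean colour of the marker, [mu_Cb, mu_Cr]
        C      : 2x2 covariance matrix of the marker colour
        gamma  : acceptance threshold
        Returns a boolean mask of pixels classified as the marker colour.
        """
        C_inv = np.linalg.inv(C)
        d = np.stack([cb - mu[0], cr - mu[1]], axis=-1)      # per-pixel deviation
        # Squared Mahalanobis distance evaluated for every pixel at once
        d2 = np.einsum('...i,ij,...j->...', d, C_inv, d)
        return d2 <= gamma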

4.7 Point correspondence

In order to solve the DLT equation (4.8) or the RPnP system (4.12), the 2-D image points must be sorted in the same order as the 3-D marker positions. To address this issue, a point association based on the angular position relative to the points' centroid was used, detailed in Algorithm 1. Although it might appear to work only for circular marker arrangements, experimental results were successful for random arrangements as well.

1 Compute the points' centroid in u and v pixel coordinates, $(c_u, c_v)$;
2 Compute the vectors from the centroid to the points, $p = (u - c_u,\; v - c_v)$;
3 Compute the angle each vector makes with the horizontal, $angle = \mathrm{atan2}(p_2, p_1)$;
4 Sort the vector $angle$ in descending order;

Algorithm 1: Organize Points

Figure 4.2: Marker distinction procedure.
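Algorithm 1 translates almost directly into NumPy; the sketch below returns the marker centers sorted by descending angle around their centroid.

    import numpy as np

    def organize_points(u, v):
        """Sort detected marker centers by angle around their centroid (Algorithm 1)."""
        u = np.asarray(u, dtype=float)
        v = np.asarray(v, dtype=float)
        cu, cv = u.mean(), v.mean()             # centroid in pixel coordinates
        angles = np.arctan2(v - cv, u - cu)     # angle of each centroid-to-point vector
        order = np.argsort(angles)[::-1]        # descending order
        return u[order], v[order]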


4.8 Recursive point detection

To optimize the point detection algorithm, which was purely algebraic, an algorithm based on the previous detection is presented. The idea is that, every time the reprojection error of the previous step is below the desired threshold, the image point positions (u, v) are passed to the next step, removing the need to compute (4.19) over the whole image in the next step, see Algorithm 2.

1 if Markers were not detected correctly previously then
2     Compute (4.19)_{k+1} ≤ γ_1 in the whole image
3 else
4     delta = (max(u_k) − min(u_k)) d;
5     for i ← 1 to length(points) do
6         window_left = u_k(i) − delta;
7         window_right = u_k(i) + delta;
8         window_up = v_k(i) − delta;
9         window_down = v_k(i) + delta;
10        Compute (4.19)_{k+1} ≤ γ_2 within the new limits in the image
11    end
12 end

Algorithm 2: Recursive point detection

In Algorithm 2, $\gamma_i$ is the Mahalanobis threshold from (4.21) and d a parameter that adjusts the window size delta, which is proportional to the spread between the maximum and minimum u pixel coordinates of the previously tracked markers.

4.9 Point rejection

With the purpose of rejecting outliers from the PnP algorithm, caused by noisy segmentation, the reprojection error is exploited. The reprojection error is a geometric error computed as the image distance between a projected point and a measured one. It quantifies how close a reprojected point in the camera is to the original point and is given, using the Frobenius norm, as:

$$\tilde{r}_{pnp} = \sqrt{ \sum_{i=1}^{n} \left| x_i - \hat{H}_c\,{}^L P_{l_i} \right|^2 } \quad (4.22)$$

where n is the number of points.
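A direct implementation of (4.22) for the homography case might look as follows; it assumes the estimated $\hat{H}_c$ maps planar landmark coordinates into the same (normalized or pixel) coordinates in which the measurements $x_i$ are expressed.

    import numpy as np

    def reprojection_error(H, pts_plane, pts_image):
        """Reprojection error of an estimated homography, as in (4.22).

        H          : 3x3 estimated homography
        pts_plane  : (n, 2) planar landmark coordinates
        pts_image  : (n, 2) measured image coordinates
        """
        n = pts_plane.shape[0]
        P = np.hstack([pts_plane, np.ones((n, 1))])     # homogeneous plane points
        proj = (H @ P.T).T
        proj = proj[:, :2] / proj[:, 2:3]               # back to inhomogeneous coordinates
        return np.sqrt(np.sum((pts_image - proj) ** 2))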

4.10 Image Resolution Change

Whenever Algorithm 2 has found the markers and the total pixel area to be tracked does not exceed the maximum allowed processing power, a higher-resolution image can be used to track the markers, see Algorithm 3.

1 if Markers were detected correctly previously and delta ≤ κ then
2     delta = (max(u_k) − min(u_k)) d;
3     for i ← 1 to length(points) do
4         window_left = (u_k(i) − delta) × scale;
5         window_right = (u_k(i) + delta) × scale;
6         window_up = (v_k(i) − delta) × scale;
7         window_down = (v_k(i) + delta) × scale;
8         Compute (4.19)_{k+1} ≤ γ_3 within the new limits in the higher-resolution image;
9         Apply erosion followed by dilation within the new limits in the image
10    end
11 else
12     Compute (4.19)_{k+1} ≤ γ_1 in the whole image
13 end

Algorithm 3: Image Resolution Change


Chapter 5

Sensor modeling and calibration

In this chapter, the calibration procedure used for the onboard sensors, namely the accelerometer, gyroscope and magnetometer, is shown. More depth is given to the gyroscope, for which a linear state-space system is designed based on the Allan variance to obtain a better estimate of its bias over time, with the purpose of integrating this model in the attitude filter later on.

5.1 Accelerometer

An accelerometer measures the acceleration relative to an inertial frame together with the gravitational attraction acting at the location where it operates. Usually, an ideal accelerometer is composed of a mass in a closed space that is free to move along a given axis, restricted only by a spring and friction. For a calibrated accelerometer subject to no acceleration other than gravity, the measured magnitude should be constant. The same holds if the sensor orientation changes. So, if a body is not accelerating, it is possible to measure the gravity vector with at least a tri-axial accelerometer and thus compute the attitude of the body relative to the Earth's frame. There are many types of accelerometers on the market. Among these sensors, micro-electro-mechanical systems (MEMS) accelerometers are among the most popular nowadays, such as in UAVs, due to their dynamic specifications, cost, size, and power requirements. In contrast, they are subject to large offsets, cross-coupling factors, and other nonlinearities.

5.1.1 Sensor gain and bias

For navigation purposes, the tri-axial accelerometer onboard the AR.Drone 2.0 is considered to be mounted orthogonally. An effective approach starts with the simple generalization for a tri-axial accelerometer, which accounts for scale factors and bias:

$$a_{ac} = S_{ac}\, a_{ac_r} - b_{ac} \quad (5.1)$$

where $S_{ac}$ is a 3×3 diagonal matrix that includes the scale factors, $a_{ac_r}$ the 3×1 raw acceleration vector and $b_{ac}$ a 3×1 vector representing the sensor bias. According to [33], an efficient way to compute the sensor's parameters is through the equation of a sphere instead of the ellipsoid. For a three-dimensional Cartesian coordinate system this gives:

$$\left( a_{ac_r x_i} - b_x \right)^2 + \left( a_{ac_r y_i} - b_y \right)^2 + \left( a_{ac_r z_i} - b_z \right)^2 = r^2 \quad (5.2)$$

This assumption is reasonable, especially when the outputs of the sensor's channels are converted by the same A/D converter with the same reference voltage. Expanding (5.2), a minimization problem arises, which can be solved in a least-squares sense:

$$\begin{aligned} \min\; P(a_r) &= \sum_{i=1}^{n} \left( A + B a_{x_i} + C a_{y_i} + D a_{z_i} - E_i \right)^2 \\ A &= b_x^2 + b_y^2 + b_z^2 - r^2, \\ B &= -2 b_x, \quad C = -2 b_y, \quad D = -2 b_z \\ E_i &= -\left( a_{x_i}^2 + a_{y_i}^2 + a_{z_i}^2 \right) \end{aligned} \quad (5.3)$$

Therefore, it is possible to compute a first guess of the sphere's radius, which is an approximation of the ellipsoid's largest radius. If the error is acceptable, a second minimization problem arises, which can also be solved in a least-squares sense:

$$\min\; Q(t) = \sum_i \left( t_x \left( a_{r x_i} - b_x \right)^2 + t_y \left( a_{r y_i} - b_y \right)^2 + t_z \left( a_{r z_i} - b_z \right)^2 - 1 \right)^2, \qquad t_x = \frac{1}{s_x^2}, \quad t_y = \frac{1}{s_y^2}, \quad t_z = \frac{1}{s_z^2} \quad (5.4)$$

where $s_x$, $s_y$, and $s_z$ are the $\mathrm{diag}(S_a)$ sensor gains to be estimated. This algorithm is proposed in [33] and termed "LinAutoCalibration".
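The first minimization (5.3) is linear in the unknowns, so the bias and radius follow from a single least-squares solve; a minimal sketch of only this first step (not the full LinAutoCalibration procedure) is given below.

    import numpy as np

    def sphere_fit_bias(acc_raw):
        """First step of the accelerometer calibration: linear sphere fit (5.3).

        acc_raw : (n, 3) raw accelerometer samples taken in static orientations
        Returns the estimated bias vector b and sphere radius r.
        """
        x, y, z = acc_raw[:, 0], acc_raw[:, 1], acc_raw[:, 2]
        # Linear system in the unknowns [A, B, C, D]: A + B x + C y + D z = E
        M = np.column_stack([np.ones_like(x), x, y, z])
        E = -(x**2 + y**2 + z**2)
        A, B, C, D = np.linalg.lstsq(M, E, rcond=None)[0]
        b = -0.5 * np.array([B, C, D])          # bias from B = -2 b_x, etc.
        r = np.sqrt(b @ b - A)                  # radius from A = |b|^2 - r^2
        return b, r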

5.2 Gyroscope

The onboard gyroscope of the AR.Drone 2.0 is a MEMS device. A gyroscope measures the angular velocity in the body frame, which by numerical integration can be used to estimate the angles. The significant issue with such a sensor is its drift over time, which is more pronounced in low-cost sensors such as MEMS. Kirkko-Jaakkola et al. demonstrated that temperature variations considerably affect the bias of MEMS gyroscopes [34]. Consequently, temperature drift compensation should be applied. An Allan variance study is then used to estimate a drift model that accounts for factors other than temperature.

5.2.1 Temperature model

The bias of a gyroscope is temperature dependent. Jeurgens [35] conducted an experiment concluding that a 5th-order polynomial gives a good fit for the bias versus temperature of the gyroscope. Following the same procedure, the obtained model is given by:

$$\tilde{\omega}_{IMU}\!\left(T_{gyro}\right) = c_5 T_{gyro}^5 + c_4 T_{gyro}^4 + c_3 T_{gyro}^3 + c_2 T_{gyro}^2 + c_1 T_{gyro} + c_0$$

$$c_5 = \begin{bmatrix} 3.89\times 10^{-18} \\ -2.17\times 10^{-18} \\ 4.22\times 10^{-19} \end{bmatrix}, \quad c_4 = \begin{bmatrix} -1.04\times 10^{-12} \\ 5.87\times 10^{-13} \\ -1.13\times 10^{-13} \end{bmatrix}, \quad c_3 = \begin{bmatrix} 1.11\times 10^{-7} \\ -6.64\times 10^{-8} \\ 1.22\times 10^{-8} \end{bmatrix},$$

$$c_2 = \begin{bmatrix} -5.95\times 10^{-3} \\ 3.42\times 10^{-3} \\ -6.55\times 10^{-4} \end{bmatrix}, \quad c_1 = \begin{bmatrix} 1.59\times 10^{2} \\ -9.24\times 10^{1} \\ 1.76\times 10^{1} \end{bmatrix}, \quad c_0 = \begin{bmatrix} -1.70\times 10^{6} \\ 9.98\times 10^{5} \\ -1.89\times 10^{5} \end{bmatrix} \quad (5.5)$$

Applying (5.5) together with the manufacturer-given sensitivity of the gyroscope, $4000/2^{16}$, the angular velocity corrected by temperature can be computed in radians per second as:

$$\omega_{Tcorr} = \frac{\pi}{180}\, \frac{4000\,(\hat{\omega}_{IMU} - \tilde{\omega}_{IMU})}{2^{16}} \quad (5.6)$$

where $\hat{\omega}_{IMU}$ is the raw gyroscope output in counts. Fig. 5.1 shows the fitted polynomial over the raw data and the corrected output in counts versus the raw data. The correction is significant.

Figure 5.1: Raw gyroscope measurement data versus package temperature and the respective fitted 5th-order polynomial (left); raw measurement over time and the temperature-corrected gyroscope data (right).
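A sketch of the correction (5.5)-(5.6) is given below; the coefficient ordering and the array shapes are assumptions of the example, and the sensitivity $4000/2^{16}$ is the one stated above.

    import numpy as np

    def temperature_corrected_rate(omega_raw_counts, temp_counts, c):
        """Apply the temperature bias model (5.5) and convert to rad/s as in (5.6).

        omega_raw_counts : (n, 3) raw gyroscope output in counts
        temp_counts      : (n,) package temperature in counts
        c                : (6, 3) polynomial coefficients [c5, c4, c3, c2, c1, c0]
        """
        T = np.asarray(temp_counts, dtype=float)[:, None]
        # Predicted temperature-dependent bias, evaluated per axis
        bias = (c[0] * T**5 + c[1] * T**4 + c[2] * T**3 +
                c[3] * T**2 + c[4] * T + c[5])
        # Sensitivity 4000/2^16 and degrees-to-radians conversion (eq. 5.6)
        return (np.pi / 180.0) * 4000.0 * (omega_raw_counts - bias) / 2**16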

5.2.2 Noise model

Compared with traditional sensors, MEMS gyroscopes are smaller, lighter, cheaper and more power efficient. Their main disadvantage is associated with long-term stability, namely bias drift and heavy noise. To improve the accuracy, analysis of the drift character is essential. One alternative to tackle the problem is to use the Allan variance (AV) method to model the sensor noise. The AV is a simple and efficient noise analysis method in the time domain. It can distinguish each random error of the sensor effectively and evaluate the gyro performance straightforwardly. Frequently, the AV term is also used to refer to its square root, the Allan deviation. According to [36], the AV $\sigma_\Omega^2(\tau)$ can be defined either in terms of the output rate $\Omega(t)$, or the output angle $\theta_{av}(t)$, as follows:

$$\theta_{av}(t) = \int^{t} \Omega(t')\, dt' \quad (5.7)$$

where the average rate $\bar{\Omega}_k(\tau)$ between times $t_k$ and $t_k + \tau$ is given by:

$$\bar{\Omega}_k(\tau) = \frac{\theta_{av}(t_k + \tau) - \theta_{av}(t_k)}{\tau} \quad (5.8)$$

The Allan variance is then defined as:

$$\sigma_\Omega^2(\tau) = \frac{1}{2} \left\langle \left( \bar{\Omega}_{k+1} - \bar{\Omega}_k \right)^2 \right\rangle \quad (5.9)$$

where τ is an averaging time and the operator ⟨...⟩ represents an infinite time average. In practice, it is not possible to fulfill the requirement of an infinite time average [36]. The finite-time AV estimate is then given by:

$$\sigma_\Omega^2(\tau) \cong \frac{1}{2\tau^2 (N - 2m)} \sum_{k=1}^{N-2m} \left( \theta_{av_{k+2m}} - 2\theta_{av_{k+m}} + \theta_{av_k} \right)^2 \quad (5.10)$$

where N corresponds to the number of data samples and m to the cluster size associated with τ. The results from the AV are mainly related to five noise terms present in inertial sensors: quantization noise (QN), angle random walk (ARW), bias instability (BI), rate random walk (RRW) and rate ramp (RR), see (5.11). Typically, the noises present are statistically independent, therefore the AV can be computed as the sum of the squared variances of each error type:

$$\sigma_\Omega^2(\tau) = \sigma_q^2(\tau) + \sigma_{arw}^2(\tau) + \sigma_{bi}^2(\tau) + \sigma_{rrw}^2(\tau) + \sigma_{rr}^2(\tau) = \frac{3Q^2}{\tau^2} + \frac{N_a^2}{\tau} + \frac{2B^2}{\pi}\ln(2) + \frac{K^2\tau}{3} + \frac{R^2\tau^2}{2} \quad (5.11)$$

    where Q, Na , B , K and R are the noise coefficients of QN, ARW, BI, RRW and RR, respectively. Due to different

    asymptotic properties, each noise appears on the log-log plot of σΩ(τ) with different slopes, as shown in Fig.

    5.2, according to (5.11).

Figure 5.2: Different noise processes on an Allan deviation plot [37].
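The finite-time estimate (5.10) is straightforward to compute from logged rate data; the sketch below returns the Allan deviation for a chosen set of cluster sizes.

    import numpy as np

    def allan_deviation(omega, dt, m_list):
        """Finite-time Allan deviation of a gyro rate signal, following (5.10).

        omega  : 1-D array of rate samples (e.g. deg/s)
        dt     : sampling period in seconds
        m_list : iterable of cluster sizes m (tau = m * dt)
        Returns (taus, adev).
        """
        theta = np.cumsum(omega) * dt           # integrated angle, eq. (5.7)
        N = theta.size
        taus, adev = [], []
        for m in m_list:
            if 2 * m >= N:
                break
            tau = m * dt
            d = theta[2 * m:] - 2.0 * theta[m:N - m] + theta[:N - 2 * m]
            avar = np.sum(d**2) / (2.0 * tau**2 * (N - 2 * m))
            taus.append(tau)
            adev.append(np.sqrt(avar))
        return np.array(taus), np.array(adev)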

Estimation quality of Allan Variance

With real data, there will always be fluctuations between the Allan deviation slopes; a certain amount of noise is present in the plotted curve due to the uncertainty of the measured Allan variance [38]. It is shown in [36] that the percentage error of the Allan deviation, $\delta_{AD}$, in estimating $\sigma_\Omega(\tau)$ from a data set of N points is given by:

$$\delta_{AD} = \frac{1}{\sqrt{2\left( \frac{N}{m} - 1 \right)}} \quad (5.12)$$


Equation (5.12) shows that the estimation error increases with τ; that is, the estimation accuracy decays as τ increases.

    Gyroscope Model Design

According to [39], the most predominant noises in a MEMS gyroscope are the angle random walk (ARW), the bias instability (BI) and the rate random walk (RRW). For more information about gyro noises and how they can be modelled, see [36].

ARW is a high-frequency noise modelled as zero-mean white noise with an approximate variance:

$$\sigma_{arw}^2 = \frac{N_a^2}{\Delta t} \quad (5.13)$$

where Δt is the sampling time interval and $N_a$ the ARW parameter from the Allan deviation (AD) plot.

BI has an impact on long-term stability and can be approximated by a Markov process [36], with a process standard deviation proportional to the AD parameter B. The discrete-time evolution of $b_{\omega 1}$ at time k, subject to the sample-and-hold method, is given by:

$$b_{\omega 1_{k+1}} = \left( 1 - \frac{1}{T}\Delta t \right) b_{\omega 1_k} + v_k \quad (5.14)$$

where k is the index of time t = kΔt, $b_{\omega 1}$ is a random process, T corresponds to the correlation time of the BI and $v_k$ is a driving zero-mean white noise with variance $\sigma_v^2 = \sigma_b^2 \left(1 - e^{-2\Delta t / T}\right)$, where $\sigma_b^2 = \frac{2B^2}{\pi} \ln(2)$ is the variance of the BI. T can be obtained from the AD plot around the BI region.

RRW occurs whenever a zero-mean white noise is integrated over time. The discrete-time evolution of $b_{\omega 2}$ at time k, subject to the sample-and-hold method, is given by:

$$b_{\omega 2_{k+1}} = b_{\omega 2_k} + \eta_k \quad (5.15)$$

where $\eta_k$ is a zero-mean white noise with variance $\sigma_\eta^2 = \Delta t\, K^2$, where K is the RRW parameter from the AD plot.

In line with the previously described noise formulations (5.13-5.15), the overall gyro noise model is given in discrete state-space form as [40]:

$$\begin{aligned} b_{\omega_{k+1}} &= \begin{bmatrix} \left( 1 - \frac{1}{T}\Delta t \right) & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} b_{\omega 1} \\ b_{\omega 2} \end{bmatrix}_k + \begin{bmatrix} \sqrt{\sigma_b^2 \left( 1 - e^{-2\Delta t / T} \right)} \\ \sqrt{K^2 \Delta t} \end{bmatrix} w_k \\ y_k &= \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} b_{\omega 1} \\ b_{\omega 2} \end{bmatrix}_k + \left[ \sqrt{\frac{N_a^2}{\Delta t}} \right] \zeta_k \end{aligned} \quad (5.16)$$

where $w_k$ and $\zeta_k$ are normally distributed zero-mean white noises.
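A small helper that builds the matrices of (5.16) for one gyro axis from the Allan-variance coefficients could look as follows; it is a direct transcription of (5.13)-(5.16), not the filter implementation used later in this work.

    import numpy as np

    def gyro_noise_model(Na, B, K, T, dt):
        """Discrete-time gyro bias noise model of (5.16) for one axis.

        Na, B, K : ARW, BI and RRW coefficients from the Allan deviation plot
        T        : bias-instability correlation time (s)
        dt       : sampling period (s)
        Returns (F, G, H, R) for b_{k+1} = F b_k + G w_k and y_k = H b_k + sqrt(R) zeta_k.
        """
        sigma_b2 = 2.0 * B**2 / np.pi * np.log(2.0)          # BI variance
        F = np.array([[1.0 - dt / T, 0.0],
                      [0.0,          1.0]])
        G = np.array([[np.sqrt(sigma_b2 * (1.0 - np.exp(-2.0 * dt / T)))],
                      [np.sqrt(K**2 * dt)]])
        H = np.array([[1.0, 1.0]])
        R = Na**2 / dt                                        # ARW measurement variance
        return F, G, H, R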

Experimental results

The stochastic discrete-time noise model parameters were acquired from data recorded for 80 min at a frequency of 200 Hz from an AR.Drone 2.0, using the temperature calibration model (5.6). The log-log plot of the computed Allan deviation versus averaging time τ is shown in Fig. 5.3.

Figure 5.3: AR.Drone 2.0 gyroscope Allan deviation for 80 min of data (ωx, ωy, ωz).

According to Fig. 5.3, the most predominant noises are ARW, BI and RRW. The noise coefficients are shown in Table 5.1, considering the uncertainty given by (5.12).

Gyro | ARW (Na) [°/s^0.5] | BI (B) [°/s] | RRW (K) [°/s^0.5]
ωx | 0.003966 ± 2.03×10^-5 | 0.0041 ± 1.02×10^-4 | 0.000473 ± 4.27×10^-5
ωy | 0.004155 ± 2.12×10^-5 | 0.0049 ± 8.35×10^-5 | 0.000497 ± 3.85×10^-5
ωz | 0.004037 ± 2.06×10^-5 | 0.0011 ± 9.15×10^-5 | 0.000061 ± 9.54×10^-6

Table 5.1: Gyroscope AV parameters.

    5.3 Magnetometer

Magnetometers are a key sensor for attitude and position navigation systems. The magnetometer outputs three signals, $m = [m_x\; m_y\; m_z]^T$, which are the components of the local magnetic field at the sensor. Like any other sensor, it has to be calibrated in order to achieve useful results.

The magnetometer reading suffers from many types of distortion, such as soft iron, hard iron, sensor non-orthogonality, electromagnetic noise, and bias, among others [41]. Only hard and soft iron distortions were considered, since they are the most predominant disturbances.

The hard iron distortion is a bias, denoted $b_{hi}$, and results from magnets present in the vehicle structure. Soft iron distortion, compensated through a gain matrix denoted $C_{si} \in \mathcal{M}(3)$, results from the interaction of external magnetic fields with ferromagnetic materials near the sensor, and it varies with the environment.

The magnetometer model used to remove hard and soft iron effects is given by:

$$m_c = C_{si} (m - b_{hi}) \quad (5.17)$$

To find $C_{si}$ and $b_{hi}$, a procedure similar to the accelerometer calibration of Section 5.1 can be applied, so the parameters can be retrieved from the best-fitted ellipsoid by minimizing:

$$\min \left( (m - b_{hi})^T \left( C_{si}^T C_{si} \right) (m - b_{hi}) - 1 \right) \quad (5.18)$$

which can be achieved using a singular value decomposition. For more detail consult [41].
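As a simplified illustration of (5.17)-(5.18), the sketch below fits an axis-aligned ellipsoid (diagonal $C_{si}$, i.e. no cross-coupling terms) by linear least squares; the full procedure in [41] also recovers the off-diagonal soft-iron terms.

    import numpy as np

    def magnetometer_calibration(m):
        """Axis-aligned ellipsoid fit for hard/soft iron compensation (simplified).

        m : (n, 3) raw magnetometer samples covering many orientations
        Returns (C_si, b_hi) such that C_si @ (m - b_hi) lies on a unit sphere.
        """
        x, y, z = m[:, 0], m[:, 1], m[:, 2]
        # Linear model: A x^2 + B y^2 + C z^2 + D x + E y + F z = 1
        M = np.column_stack([x**2, y**2, z**2, x, y, z])
        A, B, C, D, E, F = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
        b_hi = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
        # Normalizing constant so the corrected field has unit norm
        G = 1.0 + A * b_hi[0]**2 + B * b_hi[1]**2 + C * b_hi[2]**2
        C_si = np.diag(np.sqrt(np.array([A, B, C]) / G))
        return C_si, b_hi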


Chapter 6

Estimation and Control

In this chapter, the theoretical background of the estimation and control approaches applied is presented.

6.1 Controllability and Observability

Controllability and observability are essential for the understanding of a system; the results below can be found in [42]. Consider the discrete linear time-invariant (LTI) system given by:

$$\begin{aligned} x_{k+1} &= F x_k + B u_k, \\ y_k &= H x_k, \qquad k \geq 0 \end{aligned} \quad (6.1)$$

where $x_k$ is the state vector, $y_k$ the available measurement, F the state-transition matrix, B the control-input matrix, H the observation matrix and $u_k$ a control input.

Definition 6.1. (Controllability) The system (6.1) is called controllable if for every state vector $x_k$ there exists an input signal $u_k$ which is able to transfer the state to a desired state in finite time.

To check the controllability of an LTI system, the following theorem is used:

Theorem 6.1. The linear time-invariant system (6.1), with state vector x of dimension n, is controllable if and only if the controllability matrix

$$C_{trb} = \begin{bmatrix} B, & FB, & \dots, & F^{n-1}B \end{bmatrix} \quad (6.2)$$

has full column rank.

Proof. See [42].

The second concept of concern, observability, concerns the effect that $x_k$ has on the output $y_k$ of (6.1).

Definition 6.2. (Observability) A system is said to be observable at a time step $k_0$ if, for a state $x_{k_0}$ at that time, there is a finite $k_1 > k_0$ such that knowledge of the outputs y from $k_0$ to $k_1$ is sufficient to determine the state at $k_0$.

To check the observability of an LTI system, the following theorem is used:

Theorem 6.2. The linear time-invariant system (6.1), with state vector x of dimension n, is observable if and only if the observability matrix

$$O = \begin{bmatrix} H \\ HF \\ \vdots \\ HF^{n-1} \end{bmatrix} \quad (6.3)$$

has full row rank.

Proof. See [42].
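Both rank tests are easy to evaluate numerically; the sketch below builds the matrices of (6.2) and (6.3) and checks their ranks on an illustrative discrete double-integrator model (the numerical values are arbitrary).

    import numpy as np

    def ctrb(F, B):
        """Controllability matrix [B, FB, ..., F^(n-1) B] of (6.2)."""
        n = F.shape[0]
        blocks = [B]
        for _ in range(n - 1):
            blocks.append(F @ blocks[-1])
        return np.hstack(blocks)

    def obsv(F, H):
        """Observability matrix [H; HF; ...; H F^(n-1)] of (6.3)."""
        n = F.shape[0]
        blocks = [H]
        for _ in range(n - 1):
            blocks.append(blocks[-1] @ F)
        return np.vstack(blocks)

    # Example: double integrator sampled at dt = 0.1 s (illustrative values)
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    H = np.array([[1.0, 0.0]])
    print(np.linalg.matrix_rank(ctrb(F, B)))   # 2 -> controllable
    print(np.linalg.matrix_rank(obsv(F, H)))   # 2 -> observable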

6.2 Estimation

In the absence of measurements of certain variables required for control purposes, or in the presence of noisy measurements, sensor fusion is essential. The objective of such tools is to take advantage of measurements that are related by a mathematical model, taking into consideration the uncertainties present, and to use that knowledge for the best possible estimate. A well-known method is the Kalman filter, which was the tool selected to tackle the problem.

6.2.1 Kalman Filter

The Kalman filter is an optimal linear estimator in the sense that it provides the minimum-variance estimate by optimizing the computation of a gain matrix which minimizes the mean square error (MSE) of the state vector while keeping the error mean null. The filter is designed to reduce the effects of sensor noise while taking into account dynamic uncertainties (process noise). A stochastic discrete linear system can be described in state-space form as:

$$\begin{aligned} x_{k+1} &= F_k x_k + B_k u_k + G_k v_k, \\ y_k &= H_k x_k + e_k, \qquad k \geq 0 \end{aligned} \quad (6.4)$$

where $G_k$ is a matrix that stipulates how noise affects the state evolution, $v_k$ the process noise and $e_k$ the observation noise, both assumed to be zero-mean Gaussian white noise with covariances $Q_k$ and $R_k$, respectively. Two variables represent the state of the filter: $\hat{x}_{k|k}$, the a posteriori state estimate at time k given observations up to and including time k; and $P_{k|k}$, the a posteriori error covariance matrix (a measure of the estimated accuracy of the state estimate). There are two phases, the prediction phase and the update phase. The prediction phase estimates the state at the current time step, where only the dynamics of the state vector influence the estimate and the error covariance matrix. In the update phase, the previous prediction is combined with the current observation to refine the state estimate. The iterative formulas are given in (6.5) and their proof can be found in [43]:


Prediction:

$$\begin{aligned} \hat{x}_{k|k-1} &= F_k \hat{x}_{k-1|k-1} + B_k u_{k-1} \\ P_{k|k-1} &= F_k P_{k-1|k-1} F_k^T + G_k Q_k G_k^T \end{aligned}$$

Update:

$$\begin{aligned} K_k &= P_{k|k-1} H_k^T \left[ H_k P_{k|k-1} H_k^T + R_k \right]^{-1} \\ \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k \left( y_k - H_k \hat{x}_{k|k-1} \right) \\ P_{k|k} &= \left( I - K_k H_k \right) P_{k|k-1} \end{aligned} \quad (6.5)$$
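A minimal NumPy transcription of one predict-update cycle of (6.5) is given below; matrix dimensions follow (6.4).

    import numpy as np

    def kalman_step(x, P, u, y, F, B, G, H, Q, R):
        """One predict-update cycle of the discrete Kalman filter (6.4)-(6.5)."""
        # Prediction
        x_pred = F @ x + B @ u
        P_pred = F @ P @ F.T + G @ Q @ G.T
        # Update
        S = H @ P_pred @ H.T + R                     # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)
        P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
        return x_new, P_new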

