Visual Servoing for a Quadrotor UAV in Target Tracking Applications
by
Marinela Georgieva Popova
A thesis submitted in conformity with the requirements
for the degree of Master of Applied Science
Graduate Department of Aerospace Engineering
University of Toronto
© Copyright 2015 by Marinela Georgieva Popova
Abstract
Visual Servoing for a Quadrotor UAV in Target Tracking Applications
Marinela Georgieva Popova
Master of Applied Science
Graduate Department of Aerospace Engineering
University of Toronto
2015
This research study investigates the design and implementation of position-based and
image-based visual servoing techniques for controlling the motion of quadrotor unmanned
aerial vehicles (UAVs). The primary applications considered are tracking stationary and
moving targets. A novel position-based tracking law is developed and integrated with
an inner-loop proportional-integral-derivative (PID) control algorithm. A theoretical proof for the
stability of the proposed method is provided and numerical simulations are performed
to validate the performance of the closed-loop system. A classical image-based visual
servoing technique is also implemented and a modification of the classical method is
suggested to reduce the undesirable effects due to the underactuated quadrotor system.
Finally, the case when the quadrotor loses sight of the target is investigated and several
solutions are proposed to help maintain the view of the target.
Acknowledgements
First and foremost, I would like to express my gratitude to my supervisor Professor
Hugh H.T. Liu for the continuous support of my Master’s project over the past two years.
His guidance, patience, and encouragement have been a tremendous help in the successful
completion of this research study. My sincere thanks go to my Research Assessment
Committee members, Professor Peter Grant and Professor Christopher Damaren, for
their insightful comments and valuable feedback. I am further grateful to all members
of the FSC lab for the interesting discussions and for their useful suggestions during
our weekly group meetings. I would also like to thank my family and my dear Evgeni
Dimitrov for their constant love and support.
Contents
1 Introduction 1
1.1 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Visual Servoing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Visual Servoing and UAVs . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Quadrotor Control . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Problem Statement and Approach . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Thesis Contributions and Outline . . . . . . . . . . . . . . . . . . . . . . 7
2 Quadrotor Model and Control 9
2.1 Quadrotor Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Approximation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Equations of Motion . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 Model Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Stability Analysis of Approximation Model 17
3.1 Tracking Control and Simulink Model . . . . . . . . . . . . . . . . . . . . 17
3.2 Induced DEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Stability of Height Control . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 Cascade Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4 Position Based Tracking Law 29
4.1 PBVS Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.1 Estimation of GMT’s Pose . . . . . . . . . . . . . . . . . . . . . . 30
4.1.2 Control Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2 Simulations and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2.1 Stationary Target . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2.2 Constant Velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2.3 Constant Acceleration . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2.4 Circular Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5 Image-Based Visual Servoing 42
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.2 Camera Model and Image Plane Dynamics . . . . . . . . . . . . . . . . . 42
5.3 Classical IBVS Control Design for the Quadrotor . . . . . . . . . . . . . 43
5.3.1 Control Law in Image Space . . . . . . . . . . . . . . . . . . . . . 43
5.3.2 Moving Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.4 IBVS with Virtual Camera . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.4.1 Classical IBVS with Virtual Camera . . . . . . . . . . . . . . . . 46
5.4.2 IBVS with GMT Velocity Estimation . . . . . . . . . . . . . . . 48
5.5 Simulations and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.5.1 Stationary Target . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.5.2 Moving Target . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6 Field of View Challenges 56
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.2 Keeping Target in FOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.2.1 Managing Attitude of UAV . . . . . . . . . . . . . . . . . . . . . 57
6.2.2 Increasing FOV of UAV . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Target Leaving FOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.3.1 Dead Reckoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6.3.2 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7 Conclusions 69
A 70
A.1 Stability of Yaw Control . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
A.2 Stability of Tracking Control . . . . . . . . . . . . . . . . . . . . . . . . . 72
A.3 Stability of System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Bibliography 81
List of Tables
4.1 PID controller gains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
List of Figures
2.1 Configuration of the quadrotor UAV [5] . . . . . . . . . . . . . . . . . . . 10
2.2 Response of full and approximation models (Position) . . . . . . . . . . . 15
2.3 Response of full and approximation models (Angle) . . . . . . . . . . . . 15
2.4 Response of full and approximation models (Velocity) . . . . . . . . . . . 16
2.5 Response of full and approximation models (Angle rate) . . . . . . . . . 16
3.1 Quadrotor Tracking Control Structure . . . . . . . . . . . . . . . . . . . 24
4.1 Camera Perspective Projection Model . . . . . . . . . . . . . . . . . . . . 30
4.2 Horizontal position of the four models (Stationary target). . . . . . . . . 33
4.3 Height and yaw of the four models (Stationary target). . . . . . . . . . . 34
4.4 Comparison of thrust. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5 Horizontal position of the four models (Constant velocity). . . . . . . . . 36
4.6 Height and yaw of the four models (Constant velocity). . . . . . . . . . . 36
4.7 Horizontal position of the four models (Constant acceleration). . . . . . . 37
4.8 Height and yaw of the four models (Constant acceleration). . . . . . . . . 38
4.9 Horizontal position of the four models (Circular motion). . . . . . . . . . 39
4.10 Height and yaw of the four models (Circular motion). . . . . . . . . . . . 40
4.11 Bird’s eye view of horizontal position. . . . . . . . . . . . . . . . . . . . . 41
5.1 Desired Target View in the Image Plane . . . . . . . . . . . . . . . . . . 44
5.2 Initial Target View in the Image Plane . . . . . . . . . . . . . . . . . . . 44
5.3 IBVS Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4 Initial and desired images (Stationary target). . . . . . . . . . . . . . . . 49
5.5 Horizontal position of the models (Stationary target). . . . . . . . . . . . 50
5.6 Height and yaw of the models (Stationary target). . . . . . . . . . . . . . 50
5.7 Pitch and roll of the models (Stationary target). . . . . . . . . . . . . . . 51
5.8 Error for the models (Stationary target). . . . . . . . . . . . . . . . . . . 51
5.9 Trajectory of the target for the models (Stationary target). . . . . . . . . 52
5.10 Initial and desired images (Moving target). . . . . . . . . . . . . . . . . . 53
5.11 Horizontal position of the models (Moving target). . . . . . . . . . . . . . 53
5.12 Height and yaw of the models (Moving target). . . . . . . . . . . . . . . 54
5.13 Pitch and roll of the models (Moving target). . . . . . . . . . . . . . . . . 54
6.1 Saturation Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.2 Position and velocity for models (Signal saturation). . . . . . . . . . . . . 59
6.3 Trajectory of the target for the models (Signal saturation). . . . . . . . . 60
6.4 Pitch for two models (Signal saturation). . . . . . . . . . . . . . . . . . . 61
6.5 X Position and camera position for models (Changing height algorithm). 63
6.6 Height for models (Changing height algorithm). . . . . . . . . . . . . . . 64
6.7 Y position and velocity for models (Changing height algorithm). . . . . . 66
6.8 Camera Y position for models (Dead Reckoning). . . . . . . . . . . . . . 66
6.9 Horizontal position for models (Dead reckoning and changing height algo-
rithm). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.10 Camera positions for models (Dead reckoning and changing height algo-
rithm). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.11 Height for models (Dead reckoning and changing height algorithm). . . . 68
Chapter 1
Introduction
In the past several years there has been a significant increase in interest in unmanned
aerial systems (UASs) due to their high potential for military and civil applications. In
a large variety of situations, unmanned aerial vehicles (UAVs) offer numerous advantages
over manned aircraft, especially when it comes to safety and cost-efficiency. In dangerous
military operations like enemy surveillance, battlefield exploration or devastated territory
monitoring, the use of UAVs virtually negates the risk for human pilots. In addition,
the usually small size of UAVs makes them economically advantageous as it greatly re-
duces their production, maintenance, operation and fuel costs [3]. Successful applications
of UAVs include surveillance, reconnaissance, battle field assessment, target designation
and monitoring, search and rescue, traffic monitoring and pipeline inspection [3], [35],
[11] and [30].
The surge in practical applications for UAVs has created a strong impetus for re-
searchers to develop new and better methods for control. One specific area, which has
experienced significant development in recent years, is vision-based control for quadrotor
UAVs. Quadrotors are of particular interest in practice, because they are excellent can-
didates for performing target tracking tasks. Specifically, their high maneuverability and
hovering capability allow them to navigate through dynamic environments and respond
more adequately to evasive target motion. In addition, they are capable of flying at lower
altitudes and their small size makes them difficult to detect, increasing their potential for
surveillance missions. From a theoretical point of view, developing control strategies for
target tracking for quadrotors has been challenging due to the complex nature of their
equations of motion and the underactuated properties of the system. Work presented in
this thesis aims to address some of these difficulties, by developing a control law that
allows a quadrotor UAV to successfully track a ground moving target (GMT) through
visual aid.
1.1 Literature Review
1.1.1 Visual Servoing
Visual servoing is a technique for controlling the motion of an object to reach a desired
position and orientation based on visual information from the environment. The idea
of using mobile cameras to control the motion of a vehicle was first introduced in the
1980s [18]. The implementation of vision sensors continues to be an active research topic
in aerospace and robotics due to its impact on increasing the versatility and application
domain of both robots and aircraft. Visual servoing systems typically use two types
of camera configurations: eye-in-hand and fixed-in-workspace. In the
first configuration the camera is mounted on the controlled vehicle, so the relationship
between the pose of the camera and the pose of the vehicle is usually constant. In the
second configuration, the camera is fixed in the workspace and the image is independent
of the vehicle motion. In this thesis the focus lies entirely on the eye-in-hand config-
uration. The main mechanism of the visual servoing scheme is as follows: the camera
records images from the environment and based on certain image feature points from
the observed target, information can be derived for the current position of the vehicle in
relation to the desired position. This difference in image features between the current
and desired location is then used to generate a velocity command that will force the
vehicle to adjust its position and orientation to achieve the desired state. The visual
servoing methods can be classified into two main approaches depending on whether the
visual measurements are directly used in the control law. In the position-based visual
servoing (PBVS) approach, the image features are an intermediate step in the control law
design and are used to reconstruct the 3-D Cartesian coordinates of the observed object.
The reconstruction can be performed using a single or multiple vision sensors and often
requires some knowledge of the geometry of the observed object. The advantage of PBVS
is that this technique separates the control problem (computation of the feedback signal)
from the pose estimation problem. However, since the reconstruction relies on the camera
intrinsic parameters, it is more susceptible to camera calibration errors and can lead to
loss of accuracy. In the image-based visual servoing (IBVS) the controller design is based
on the visual measurements obtained from the camera. It does not require estimation of
the 3-D geometry of the observed object and is computationally efficient. However, the
use of visual measurements complicates the controller design since it leads to a highly
nonlinear and coupled system. Each of the two major visual servoing schemes has certain
advantages and disadvantages, and the choice of which one to use depends on the
application. For example, PBVS has been found to be more appropriate for systems dealing
with moving objects because the motion is easier to express in the Cartesian frame [18].
Other researchers have proposed different approaches for visual servoing that use some
characteristics of both IBVS and PBVS. One such method is the “2 1/2 D” visual
servoing developed in [24], which expressed the input in part in 3-D Cartesian space and in
part in 2-D image space. In this technique the rotational and translational motions were
decoupled and the rotational information was based on pose estimation using epipolar
geometry and homography [9] while the translational information was directly obtained
from image features. This method led to some improvements in stability and
convergence compared to PBVS and IBVS and prevented image Jacobian singularities. However,
the method had some drawbacks such as 1) the requirement to consider 8 image points
from the observed object (to construct the homography matrix) and 2) sensitivity to im-
age noise. Another recently developed hybrid approach is the partitioned visual servoing
in [10] in which the rotational and translation motions around the Z-axis were decoupled
and controlled separately from the motions around the X and Y axes. The technique has
been most often implemented for applications involving large rotations about the Z-axis
which cannot be completed by classical IBVS schemes. The partitioned visual servoing
solved a problem known as the Chaumette conundrum. The authors of [10] also
proposed a method to help maintain the image features of the observed object inside
the field of view of the camera by incorporating a repulsive potential function in the
control law design. Another variation of the IBVS method, based on nonlinear predictive
control and suggested in [14] in 2010, formulated the IBVS scheme as an optimization
problem which also took into account visibility and workspace constraints. Finally,
Reference [13] studied the topic of switching control for visual servoing which could be
implemented to choose the most appropriate among several lower-level visual servoing
controllers. The different visual servoing controllers described above have been designed
for fully actuated six-degree of freedom systems. In this thesis, we propose both IBVS
and PBVS schemes that are specifically designed for underactuated quadrotor UAVs.
The next section outlines the recent developments in visual servoing in relation to UAVs
and discusses how we extend the existing approaches.
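As background for the schemes reviewed above, the core IBVS computation — mapping a stacked image-feature error to a camera velocity command through the pseudoinverse of the interaction matrix — can be sketched as follows. This is the textbook point-feature law for a fully actuated camera, not any controller developed in this thesis; the depth value and gain are illustrative placeholders.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix for one normalized image
    # point (x, y) at depth Z, for a freely moving camera.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, Z, gain=0.5):
    # Classical IBVS law: v = -gain * pinv(L) @ e, where e stacks the
    # feature errors and L stacks the per-point interaction matrices.
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in features])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

When the current features coincide with the desired ones, the error is zero and the commanded camera velocity vanishes, as expected.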
1.1.2 Visual Servoing and UAVs
The advantages of using vision sensors have attracted significant interest in the aerospace
sector. In particular, the resurgent interest in unmanned aerial vehicles during the last
decade has led to their improved performance and newly acquired capabilities, making
them suitable for a wide range of applications. UAVs primarily depend on the Global
Positioning System (GPS), which can cause problems in environments where satellite
signals may be interrupted or unreliable. This has motivated researchers to investigate the use
of vision sensors as an alternative for obtaining UAV position coordinates. Some of the
visual servoing schemes described in Section 1.1.1 have been adopted for different UAV
control applications. In fact, since vision sensors are readily available in many UAV plat-
forms, the collected visual information can be easily implemented in the control loop in
place of or in addition to IMU or GPS measurements. Reference [15], for example, proposed
an image-based control strategy for visual servoing that can be applied for the stabiliza-
tion of an autonomous helicopter over a marked landing pad. Similar applications were
considered in [29], where a real-time vision-based landing algorithm was developed for
an autonomous helicopter, and in [12], where vision-based control was implemented for
autonomous road following. In this thesis visual controllers are implemented for quadro-
tor UAVs.
The design of visual servoing for quadrotor UAVs has been a challenging task due
to the underactuated properties of the quadrotor system. The implementation of visual
servoing typically consists of two control loops: the outer loop (vision-based loop) creates
a command for the desired translational and rotational velocity components based on vi-
sual measurements and the inner loop forces the quadrotor to track the desired references.
Since the system is underactuated, the horizontal velocity components are coupled with
the roll and pitch angles of the vehicle. This dependence makes standard visual servo
controllers (which assume that translational velocities are controlled separately from the
angular velocities) difficult to apply. The most recent developments in visual servoing
control for quadrotors have been discussed in [7], [22], and [26]. Reference [7] compared
a hybrid visual servoing scheme to a classical IBVS scheme but did not address the neg-
ative effects of the underactuated property such as misinterpretation of image error due
to tilting and loss of field of view. Reference [22] suggested several modifications of the
classical visual servoing scheme that overcame the underactuated property, such as the
introduction of a virtual camera frame and adaptive gains of the visual servoing controller.
However, this reference did not provide a comparison of the results with the classical
approach and considered only the case when the observed target was static. Another
solution to counteract the negative effects of the underactuated property was given in
[26] where the authors proposed a new approach based on positive image feature feedback
with virtual spring to control the horizontal motion of the quadrotor. In contrast to the
existing reports, in this thesis we consider the case when the quadrotor has to track a
moving target. When implementing the IBVS method we extend the ideas in [22] to
accommodate target motion with the assumption that the quadrotor has to track only
the position of the target but not the orientation. We compare the moving target results
with the classical IBVS approach and propose a method to restore the field of view of
the target when it is lost.
Since the application of visual servoing controllers requires successful inner loop ve-
locity control of the quadrotor, we discuss some of the commonly used quadrotor control
strategies in the next section and justify our choice of PID control.
1.1.3 Quadrotor Control
Different control techniques for quadrotor stabilization have been studied extensively
in a variety of applications, but more rarely in combination with visual
servoing. One of the methods to achieve stable flight is PID control, which has been
implemented in [23]. In this paper the authors demonstrated through simulations and
experiments that PID can successfully and robustly regulate the quadrotor pose (posi-
tion and orientation). Other control strategies for the quadrotor are based on nonlinear
sliding mode control [22] and adaptive backstepping [21], [25]. The advantage of the
sliding mode control is that it is robust to internal and external uncertainties. However,
an undesirable effect associated with this method is the characteristic chattering behav-
ior. Reference [21] showed that a backstepping algorithm may be successfully adopted
in combination with IBVS control for a quadrotor. Still other researchers proposed con-
trol methods such as feedback linearization [34] and switching mode predictive control
[1]. The latter has been designed to ensure accurate navigation in harsh environmental
conditions where the quadrotor is subject to forcible wind disturbances. In this thesis,
we propose the PID algorithm to control the translational velocity components of the
quadrotor and the yaw angle. The PID algorithm is chosen because maintaining the field of
view (FOV) is a key aspect of vision-based navigation, and we want to keep the quadrotor
from performing aggressive maneuvers requiring drastic changes in pitch and roll angles.
Linearized controllers such as the PID controller provide good performance around hover
conditions and allow us to put constraints on the allowed values for the pitch and roll
angles.
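The inner-loop controller argued for above can be sketched as a textbook PID with output clamping, which is one way to bound the commanded pitch and roll references near hover. The gains and saturation limit below are illustrative placeholders, not the tuned values used in this work.

```python
class PID:
    # Textbook PID with output clamping. A sketch of the kind of
    # linearized inner-loop controller discussed above; the gains and
    # the saturation limit are illustrative, not the thesis's values.
    def __init__(self, kp, ki, kd, limit):
        self.kp, self.ki, self.kd, self.limit = kp, ki, kd, limit
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Clamp the output, e.g. to bound the commanded pitch/roll angle.
        return max(-self.limit, min(self.limit, u))
```

The clamp is what enforces the "no aggressive maneuvers" constraint: a large tracking error saturates at the limit instead of commanding a drastic attitude change.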
1.2 Problem Statement and Approach
We now turn to a careful description of the problem we are trying to solve and our
methodology. As mentioned earlier, we are interested in developing a control algorithm
for a quadrotor UAV for vision-aided target tracking.
In this thesis, we assume that the target is located on the ground and never changes
its altitude, without any restrictions on its horizontal movement. Information about the
target’s position is obtained from an on-board camera, whose orientation with respect to
the quadrotor is fixed (physically, this means that the camera is mounted at the bottom
of the quadrotor and cannot move). In addition, we assume that we can measure the
pose of the quadrotor (its position, orientation, and their first derivatives). Finally, we
assume we are given some desired height and yaw angle (both constant), which we would
like the quadrotor to reach. The goal, then, is to define a control algorithm which forces
the quadrotor’s horizontal position to converge to that of the target, its altitude to the
desired height and its yaw to the desired value.
In order to solve this problem we propose a novel closed-loop, nested PID control
algorithm. The structure of our controller is reminiscent of the one considered in [36],
however, there are several crucial differences.
1. We greatly reduce the number of PID controllers in [36], which makes the analysis of
the algorithm more straightforward and the choice of parameters more manageable.
2. We include several transformations of the outputs for our PID controllers, which
lead in the end to significantly different values for the control inputs in our system.
3. We introduce a non-linear error function h into the algorithms, which aims to
remove the need for the quadrotor to make aggressive maneuvers.
Typically in the literature, PID controllers are combined with linear error functions. One
concern with using linear error functions is that when the error is large, the control al-
gorithm produces an input requiring aggressive action to compensate the error, which
may lead to instability. This issue is typically handled by picking a small
coefficient to rescale the error, but that slows convergence. Our
approach aims to bypass this problem by replacing the error function with
one that is better behaved for large error values but still ensures fast convergence. To
our knowledge, this thesis provides the first example of a non-linear error function used
in conjunction with PID controllers.
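The exact error function h is developed later in the thesis. Purely to illustrate the qualitative behavior described above — near-linear for small errors, bounded for large ones — a hyperbolic-tangent saturation is one common choice:

```python
import math

def h(error, slope=1.0, bound=1.0):
    # Illustrative saturating error function: behaves like slope*error
    # near zero but is bounded by +/- bound for large errors. The
    # thesis's actual function h is not reproduced here; this only
    # demonstrates the qualitative behavior described in the text.
    return bound * math.tanh(slope * error / bound)
```

Near zero, h(e) ≈ e, so the local linear analysis still applies; for large errors the output is clipped instead of commanding an aggressive correction.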
In order to validate our control algorithm, we perform a stability analysis of the
system. The complex nature of the equations of motion for the quadrotor make the
full system difficult to analyze, and so instead we consider a linearization of the system
around the stable point near hovering. We demonstrate that the full system behaves very
similarly to its linear approximation, and prove that with the control inputs we develop
one can obtain global asymptotic stability for the approximation model. In our stability
analysis we use ideas from the theory of cascade control and the Lyapunov method. We
remark that cascade control seems to be a very good framework for proving stability for
quadrotor systems, and to our knowledge this is the first time it has been applied to UAV
models. We also mention that we believe our stability proof can be generalized to
incorporate the full system, or systems similar to it, although currently this appears to
be out of reach.
The stability analysis demonstrates that our choice of control inputs drives a system
closely resembling ours to the desired behavior. We further support our choice by
conducting a wide variety of simulations, which indicate that even the full system performs the
task of target tracking very well with our model. Specifically, we implement our control
method for both a PBVS and an IBVS tracking algorithm and show that their perfor-
mance is very good.
Finally, we investigate the cases when the target leaves the FOV. Since the information
for the target is obtained from visual data, it is important to come up with ways to handle
situations when the data is lost. Losing vision of the target can be attributed to various
issues from camera malfunction to aggressive target motion. We consider different causes
for losing FOV and propose different solutions of how to restore vision once it is lost.
1.3 Thesis Contributions and Outline
The literature review provided in Section 1.1 describes the recent developments in the ar-
eas of visual servoing, quadrotor UAV control methods, and target tracking. In summary,
this thesis investigates the intersection of these fields and addresses the question of devel-
oping reliable visual servoing target tracking algorithms in an underactuated quadrotor
system. The main contributions of this thesis include:
• development of a novel closed-loop PID control algorithm and verification of its
validity;
• an extension of the IBVS approach to take into account both the underactuated
dynamics and the target motion;
• development of methods for keeping the target in the FOV and restoring vision
once it is lost.
This thesis consists of seven chapters and one appendix. Chapter 1 is an intro-
duction describing the problem this project attempts to solve and summarizes previous
research done in related fields. Chapter 2 presents the nonlinear equations of motion
for the quadrotor and a simplified model of the quadrotor dynamics based on small
angle approximation. In Chapter 3 we introduce a new control strategy for tracking
moving targets and provide a closed loop stability analysis of the proposed quadrotor
control method and tracking law. Chapter 4 is a discussion of the camera model and
the position-based visual servoing scheme which is constructed using the tracking law
outlined in the previous chapter. Chapter 4 also includes numerical simulations illustrat-
ing the performance of the control law in tracking both stationary and moving targets.
A classical image-based visual servoing technique implemented to the quadrotor UAV is
presented in Chapter 5. This chapter also describes a modification of the classical ap-
proach based on a virtual camera model that aims to resolve some problems associated
with the underactuated property of the quadrotor. The performance of the suggested
IBVS methods is compared through numerical simulations in the case of tracking static
and moving targets. Chapter 6 addresses the issue of the target leaving the field of view
(FOV) of the camera. Several possible solutions are provided depending on the factors
causing the FOV loss. Chapter 7 provides a summary of the research project and pro-
poses future research directions. Finally, in Appendix A we supply the proofs for some of
the statements in Chapter 3.
Chapter 2
Quadrotor Model and Control
In this chapter we write down the drag free equations of motion for a quadrotor. Sub-
sequently, we linearize those equations around the stable point near hovering. This
produces a new approximation model, which is demonstrated to behave similarly to the
full model through numerical simulations.
2.1 Quadrotor Dynamics
In this section the drag free equations of motion for a quadrotor are developed in the
body-fixed frame and the inertial frame (see Figure 2.1). The main sources we use are [5]
and [6]. The origin of the body-fixed frame coincides with the center of mass of the vehicle,
and the orientation of the coordinate axes is shown in Figure 2.1. The x and y axes
are chosen in the plane of vertical symmetry, while the z axis is directed upwards. Let
u, v, w denote the components of the quadrotor velocity v in the body frame and p, q, r
the components of the angular velocity vector ω.
To develop the translational equations of motion, we use Newton’s second law:
$$\mathbf{f} = m\mathbf{a} = m\dot{\mathbf{v}} + m\,\boldsymbol{\omega} \times \mathbf{v}$$
If the unit vectors of the body-fixed frame are represented by $\{\mathbf{x}_B, \mathbf{y}_B, \mathbf{z}_B\}$ and those in
the inertial frame by $\{\mathbf{x}_I, \mathbf{y}_I, \mathbf{z}_I\}$, the force vector becomes $\mathbf{f} = T\mathbf{z}_B - mg\,\mathbf{z}_I$, where $T$ is
the total thrust of the four motors of the quadrotor. To express $\mathbf{f}$ in the body frame,
we use the rotation matrix between the inertial and the body frame, given by:
Figure 2.1: Configuration of the quadrotor UAV [5]
$$C_{BI} = \begin{bmatrix} \cos\psi\cos\theta & \sin\psi\cos\theta & -\sin\theta \\ \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \cos\theta\sin\phi \\ \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi & \cos\theta\cos\phi \end{bmatrix}$$
where ψ, θ, φ represent the Euler angles shown in Figure 2.1.
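For reference, the rotation matrix above can be evaluated numerically as follows — a direct transcription of the matrix written out in the text, with angles in radians:

```python
import numpy as np

def C_BI(phi, theta, psi):
    # Inertial-to-body rotation matrix for the Euler angles above,
    # transcribed term by term from the text (angles in radians).
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [cp * ct,                sp * ct,                -st],
        [cp * st * sf - sp * cf, sp * st * sf + cp * cf, ct * sf],
        [cp * st * cf + sp * sf, sp * st * cf - cp * sf, ct * cf],
    ])
```

At zero attitude the matrix reduces to the identity, and for any angles it is orthogonal with unit determinant, as a rotation matrix must be.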
The translational equations of motion are therefore:
$$\begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix} + C_{BI}\begin{bmatrix} 0 \\ 0 \\ -mg \end{bmatrix} = m\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} + m\begin{bmatrix} qw - vr \\ ur - pw \\ pv - uq \end{bmatrix}$$
After rearranging, we obtain:
$$\begin{aligned} \dot{u} &= g\sin\theta + vr - qw \\ \dot{v} &= -g\cos\theta\sin\phi + pw - ur \\ \dot{w} &= T/m - g\cos\theta\cos\phi + uq - pv \end{aligned}$$
The rotational (moment) equations in the body-fixed frame are derived using Euler's rotation equation:

$$\mathbf{I}\cdot\dot{\boldsymbol{\omega}} + \boldsymbol{\omega}\times(\mathbf{I}\cdot\boldsymbol{\omega}) = \mathbf{M}$$

Expanding the result above leads to the following set of equations:

$$\begin{aligned} \dot{p} &= \left(M_x - (I_{zz} - I_{yy})\,qr\right)/I_{xx} \\ \dot{q} &= \left(M_y - (I_{xx} - I_{zz})\,pr\right)/I_{yy} \\ \dot{r} &= M_z/I_{zz}. \end{aligned} \qquad (2.1)$$
In the moment equations, Ixx, Iyy, Izz represent the quadrotor's moments of inertia, while Mx, My, and Mz are the moments about the corresponding axes, generated by thrust differences between opposing motors. The total thrust T and the moments Mx, My, and Mz are derived from the thrust forces Fi, i = 1, 2, 3, 4, generated by each of the four motors by
the following relation:

$$\begin{bmatrix} T \\ M_x \\ M_y \\ M_z \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & -l & 0 & l \\ -l & 0 & l & 0 \\ -\mu & \mu & -\mu & \mu \end{bmatrix}\begin{bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{bmatrix}$$
where l is the distance between a motor and the center of the quadrotor, and µ is a torque coefficient.
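For illustration, the relation above can be coded as a 4×4 mixing matrix; the numerical values of l, µ and the thrusts below are placeholders, not the vehicle's actual parameters:

```python
import numpy as np

# Mixing matrix from the relation above; l is the arm length (m) and
# mu the torque coefficient (both values are illustrative only).
l, mu = 0.2, 0.01
MIX = np.array([
    [ 1.0,  1.0,  1.0,  1.0],
    [ 0.0,  -l,   0.0,   l ],
    [ -l,   0.0,   l,   0.0],
    [-mu,   mu,  -mu,    mu],
])

F = np.array([2.0, 2.0, 2.0, 2.0])      # equal motor thrusts (N)
T, Mx, My, Mz = MIX @ F
assert np.isclose(T, 8.0)               # total thrust is the sum
assert np.allclose([Mx, My, Mz], 0.0)   # equal thrusts produce no moments

# Inverting the relation recovers the motor thrusts from (T, Mx, My, Mz).
F_rec = np.linalg.solve(MIX, np.array([T, Mx, My, Mz]))
assert np.allclose(F_rec, F)
```
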
We next derive the equations in the inertial frame. The state vector is given by $\mathbf{r} = [x, y, z, \dot{x}, \dot{y}, \dot{z}, \phi, \theta, \psi, \dot{\phi}, \dot{\theta}, \dot{\psi}]^T$. From Newton's second law we have

$$m\begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -mg \end{bmatrix} + C_{BI}^{-1}\begin{bmatrix} 0 \\ 0 \\ T \end{bmatrix},$$

from which we get the equations

$$\begin{aligned} \ddot{x} &= \frac{T}{m}\left(\sin\phi\sin\psi + \cos\phi\cos\psi\sin\theta\right) \\ \ddot{y} &= \frac{T}{m}\left(\cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi\right) \\ \ddot{z} &= \frac{T}{m}\cos\theta\cos\phi - g \end{aligned} \qquad (2.2)$$
The relation between the angular rates p, q, r and the rates of change of the Euler angles can be expressed as (see [5]):

$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = C_2\begin{bmatrix} p \\ q \\ r \end{bmatrix}, \qquad \text{where} \qquad C_2 = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi\sec\theta & \cos\phi\sec\theta \end{bmatrix}.$$
The above expression can be differentiated, and equation (2.1) used, to represent $\ddot{\phi}, \ddot{\theta}, \ddot{\psi}$ in terms of T, Mx, My, Mz and other relevant constants (m, l, etc.). However, such a representation is complicated, and a simplified one will be proposed in the next section, where we linearize the above system about its equilibrium near the hover position.
Finally, we remark that one has the following relationship between body-fixed frame and inertial frame velocities. Let $\mathbf{x} = [x, y, z]^T$, $\boldsymbol{\Theta} = [\phi, \theta, \psi]^T$, and let $\mathbf{v}$ and $\boldsymbol{\omega}$ denote the velocity vector and angular velocity vector in the body-fixed frame. Then the relationship is given explicitly as:

$$\begin{bmatrix} \dot{\mathbf{x}} \\ \dot{\boldsymbol{\Theta}} \end{bmatrix} = \begin{bmatrix} C_{BI}^{-1} & 0 \\ 0 & C_2 \end{bmatrix}\begin{bmatrix} \mathbf{v} \\ \boldsymbol{\omega} \end{bmatrix} \qquad (2.3)$$
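To make the model concrete, here is a minimal simulation sketch of the equations of this section: translational accelerations from (2.2), body-rate accelerations from (2.1), and Euler-angle rates through C2. NumPy and SciPy are assumed, the function name is ours, and the parameter values are the ones used later in this chapter:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g = 0.81, 9.81                          # values used later in the chapter
Ixx, Iyy, Izz = 0.00676, 0.00676, 0.00158  # kg*m^2

def full_model(t, s, T, Mx, My, Mz):
    """State s = [x, y, z, xd, yd, zd, phi, theta, psi, p, q, r]."""
    x, y, z, xd, yd, zd, phi, th, psi, p, q, r = s
    # Translational accelerations, equation (2.2)
    xdd = T / m * (np.sin(phi) * np.sin(psi) + np.cos(phi) * np.cos(psi) * np.sin(th))
    ydd = T / m * (np.cos(phi) * np.sin(th) * np.sin(psi) - np.sin(phi) * np.cos(psi))
    zdd = T / m * np.cos(th) * np.cos(phi) - g
    # Euler-angle rates: [phid, thd, psid]^T = C2 [p, q, r]^T
    phid = p + q * np.sin(phi) * np.tan(th) + r * np.cos(phi) * np.tan(th)
    thd = q * np.cos(phi) - r * np.sin(phi)
    psid = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(th)
    # Body-rate accelerations, equation (2.1)
    pd = (Mx - (Izz - Iyy) * q * r) / Ixx
    qd = (My - (Ixx - Izz) * p * r) / Iyy
    rd = Mz / Izz
    return [xd, yd, zd, xdd, ydd, zdd, phid, thd, psid, pd, qd, rd]

# Sanity check: at hover (T = mg, zero moments) the state should not move.
s0 = [0.0, 0.0, 10.0] + [0.0] * 9
sol = solve_ivp(full_model, (0, 5), s0, args=(m * g, 0, 0, 0), rtol=1e-9, atol=1e-9)
assert np.allclose(sol.y[:, -1], s0, atol=1e-6)
```
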
2.2 Approximation Model
In the previous section we presented the equations of motion for the quadrotor, which we now linearize about the equilibrium near hover. This corresponds to taking the angles φ and θ to be small and T close to mg. It will be convenient for us to change notation slightly and denote by $\mathbf{r}_A = [x_A, y_A, z_A, \dot{x}_A, \dot{y}_A, \dot{z}_A, \phi_A, \theta_A, \psi_A, \dot{\phi}_A, \dot{\theta}_A, \dot{\psi}_A]^T$ the pose of the quadrotor in the inertial frame (here A stands for "aircraft").
2.2.1 Equations of Motion
If the quadrotor is moving at constant velocity, the pitch and roll angles are both zero,
and the thrust is equal to the quadrotor’s weight. Consequently, if we assume that
the quadrotor does not perform overly aggressive maneuvers, the pitch and roll angles will both be very small. This allows us to use the small-angle approximations sin α ≈ α and cos α ≈ 1. We remark that these approximations are very good whenever α is less than ten degrees. Simulations performed in later chapters indicate that for a large variety of cases the pitch and roll remain within such a range. Substituting sin θA with θA, sin φA with φA, and cos θA and cos φA with 1 in equation (2.2), we get

$$\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} = g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} \iff \begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} = \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} \qquad (2.4)$$
Next, the small-angle approximation allows one to replace C2 with the identity matrix, so that the Euler-angle rates and accelerations $\dot{\phi}_A, \dot{\theta}_A, \dot{\psi}_A, \ddot{\phi}_A, \ddot{\theta}_A, \ddot{\psi}_A$ and the body angular rates and accelerations $p, q, r, \dot{p}, \dot{q}, \dot{r}$ become approximately the same. This, together with equation (2.1), allows one to write

$$\begin{aligned} \ddot{\phi}_A &= \left(M_x - (I_{zz} - I_{yy})\,\dot{\theta}_A\dot{\psi}_A\right)/I_{xx} \\ \ddot{\theta}_A &= \left(M_y - (I_{xx} - I_{zz})\,\dot{\phi}_A\dot{\psi}_A\right)/I_{yy} \\ \ddot{\psi}_A &= M_z/I_{zz}. \end{aligned} \qquad (2.5)$$
The next task is to designate our control inputs. Quadrotors are underactuated systems, in which four inputs are used to control motion in six degrees of freedom. In the literature several different control inputs have been chosen, but typically they are linear transformations of the four motor thrusts. For example, in [6], [32] the torques and total thrust were controlled, while in [17] the control inputs were the thrusts themselves. In our case we choose U1 = T, U2 = My, U3 = Mx, U4 = Mz. It should be noted that, in view of equations (2.5), by choosing to control Mx, My and Mz we can assign desired values for
$\ddot{\theta}_A$, $\ddot{\phi}_A$ and $\ddot{\psi}_A$. With the specified control inputs, the system of equations of motion is now given by:

$$\frac{d}{dt}\begin{bmatrix} x_A \\ y_A \\ z_A \\ \dot{x}_A \\ \dot{y}_A \\ \dot{z}_A \end{bmatrix} = \begin{bmatrix} \dot{x}_A \\ \dot{y}_A \\ \dot{z}_A \\ g\cos\psi_A\,\theta_A + g\sin\psi_A\,\phi_A \\ g\sin\psi_A\,\theta_A - g\cos\psi_A\,\phi_A \\ \frac{U_1}{m}\cos\theta_A\cos\phi_A - g \end{bmatrix}, \qquad \frac{d}{dt}\begin{bmatrix} \theta_A \\ \phi_A \\ \psi_A \\ \dot{\theta}_A \\ \dot{\phi}_A \\ \dot{\psi}_A \end{bmatrix} = \begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \\ \dot{\psi}_A \\ U_2' \\ U_3' \\ U_4' \end{bmatrix}, \qquad (2.6)$$

where

$$U_2' = \left(U_2 - (I_{xx} - I_{zz})\,\dot{\phi}_A\dot{\psi}_A\right)/I_{yy}, \qquad U_3' = \left(U_3 - (I_{zz} - I_{yy})\,\dot{\theta}_A\dot{\psi}_A\right)/I_{xx}, \qquad U_4' = U_4/I_{zz}.$$
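The approximation model (2.6) can be sketched in the same style as the full model (NumPy/SciPy assumed; function and variable names are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g = 0.81, 9.81
Ixx, Iyy, Izz = 0.00676, 0.00676, 0.00158

def approx_model(t, s, U1, U2, U3, U4):
    """Right-hand side of equation (2.6).
    State s = [xA, yA, zA, xAd, yAd, zAd, thA, phA, psA, thAd, phAd, psAd]."""
    xA, yA, zA, xAd, yAd, zAd, thA, phA, psA, thAd, phAd, psAd = s
    U2p = (U2 - (Ixx - Izz) * phAd * psAd) / Iyy
    U3p = (U3 - (Izz - Iyy) * thAd * psAd) / Ixx
    U4p = U4 / Izz
    xAdd = g * np.cos(psA) * thA + g * np.sin(psA) * phA
    yAdd = g * np.sin(psA) * thA - g * np.cos(psA) * phA
    zAdd = U1 / m * np.cos(thA) * np.cos(phA) - g
    return [xAd, yAd, zAd, xAdd, yAdd, zAdd, thAd, phAd, psAd, U2p, U3p, U4p]

# At hover inputs the linearized model also stays at rest, like the full model.
s0 = [0.0, 0.0, 10.0] + [0.0] * 9
sol = solve_ivp(approx_model, (0, 5), s0, args=(m * g, 0, 0, 0), rtol=1e-9, atol=1e-9)
assert np.allclose(sol.y[:, -1], s0, atol=1e-6)
```

Feeding both this function and the full model the same input signals reproduces the kind of comparison performed in the next subsection.
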
2.2.2 Model Validation
In order to assess the validity of the approximation model, its response to various inputs was compared to the response of the full model. In this simulation, signals for T, Mx, My, and Mz were chosen and passed through both models, which have the same initial condition rA = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0]. The parameters m = 0.81 kg, Ixx = Iyy = 0.00676 kg·m², Izz = 0.00158 kg·m² in all simulations are based on the quadrotor vehicle from the Flight Systems and Controls Laboratory. For T we chose a sine signal centered at mg (in our case m = 0.81 kg and g = 9.81 m/s², so mg = 7.9461 N) with amplitude 0.02 N and frequency 1 rad/s. The signals for Mx and My were chosen to be sines as well, with amplitude 0.0001 N·m and frequencies 1 rad/s and 2 rad/s respectively. The input for Mz was a step signal, with step time at 1 s and step size 0.0001 N·m. Each model has twelve outputs: $x, y, z, \theta, \phi, \psi, \dot{x}, \dot{y}, \dot{z}, \dot{\theta}, \dot{\phi}, \dot{\psi}$, and they are compared in the remainder of this subsection. We remark that the following is a representative result; simulations were run for a variety of initial conditions and input signals. In general, provided that the roll and pitch angles are small, the two models agree very well; however, if they become large the models begin to deviate from each other, as is expected.
In Figure 2.2 the positions x, y, z for the two models are compared, and in Figure 2.4 the velocities $\dot{x}, \dot{y}, \dot{z}$ are presented. Figure 2.3 compares the Euler angles, and Figure 2.5 their rates. From these figures we make the following observations:
1. While the values for θ and φ (i.e. pitch and roll) are small, the two models agree
very well on all counts.
2. As the pitch and roll become large (from Figure 2.3, for example, the roll reaches ≈ 10 degrees), the values for $\dot{\theta}$ and $\dot{\phi}$ begin to differ significantly between the two models. The other outputs do not exhibit such a big difference, because the changes in the angle rates have not had enough time to influence them. Thus the roll and pitch rates are the most sensitive to the approximation.
3. The yaw and its rate are influenced very little by the approximation, even when the yaw is large. This is of course expected, as we did not impose a small-angle approximation on the yaw in the model.
4. The inertial-frame position and velocity match better than the angles and their rates. This makes the approximation model suitable for target tracking, as opposed to landing, for example, where orientation is more important. Since the problem we consider is that of target tracking, we find that the approximation model is suitable for our purposes.
(a) Horizontal position response. (b) Vertical position response.
Figure 2.2: Response of full and approximation models (Position)
(a) Pitch and roll response. (b) Yaw response.
Figure 2.3: Response of full and approximation models (Angle)
(a) Horizontal velocity response. (b) Vertical velocity response.
Figure 2.4: Response of full and approximation models (Velocity)
(a) Pitch and roll rate response. (b) Yaw rate response.
Figure 2.5: Response of full and approximation models (Angle rate)
Chapter 3
Stability Analysis of Approximation Model
In this chapter we describe our proposed control algorithm. Then we show that for the
approximation model, the control inputs lead to a certain system of differential equations.
The stability of the obtained system is shown using the theory of cascade control.
3.1 Tracking Control and Simulink Model
In this section we assume that we have a ground moving target (GMT), whose position
is given by $(x_T, y_T)$ (here T stands for "target"). We assume that the position is four times differentiable (as a function of time) and that the values of $x_T, \dot{x}_T, \ddot{x}_T, \dddot{x}_T, \ddddot{x}_T$ and $y_T, \dot{y}_T, \ddot{y}_T, \dddot{y}_T, \ddddot{y}_T$ are all known. In addition, we assume we are given a target yaw ψT and
target height zT , which are constant. Finally, we assume that we can measure the state
of the quadrotor, i.e. we know the value of rA. The goal is to develop a control so that
the quadrotor can successfully track the GMT and at the same time, reach the desired
height and yaw angle. In order to achieve this task we need to design particular values
for the control inputs U1, U2, U3 and U4.
Remark: The high differentiability of the target position is a very mild condition; most reasonable types of motion, like constant acceleration, constant velocity, and circular motion, are all infinitely differentiable. We also mention that the assumption that we know the higher-order derivatives of the target's motion is not practically feasible; however, it is necessary for the stability analysis. In later chapters, where we perform simulations, we will only measure the target's position and velocity from visual data and set all other derivatives to zero. What this means mathematically is that in the interval
of time between measurements of the position, we assume that the target is moving with constant velocity equal to the average velocity during that time interval. As will be shown, the proposed control algorithm still successfully tracks the GMT, even if only information about the position and velocity is used.
The quadrotor is entirely controlled by PID controllers, which we split into two groups. The first group controls the yaw angle and height, and is independent of the second group, which controls the horizontal position of the quadrotor through the pitch and roll accelerations. In view of equation (2.6), the equations for zA and ψA are decoupled from those for the other variables, so they can be treated separately. For the
altitude we construct a desired velocity in the z direction, denoted by $(\dot{z})_d$, chosen so that the quadrotor reaches zT. One then passes $(\dot{z})_d - \dot{z}_A$ through a PD controller, whose output is denoted by $U_1'$. The thrust control input is then defined to be

$$U_1 = \frac{m\left(U_1' + g\right)}{\cos\theta_A\cos\phi_A}.$$

Similarly, we construct $(\dot{\psi})_d$ and pass $(\dot{\psi})_d - \dot{\psi}_A$ through a PD controller, whose output is denoted by $U_4'$. We set

$$U_4 = I_{zz}U_4'.$$
The control of the roll and pitch accelerations is more subtle and will be further explained in the next section. We begin by constructing desired horizontal velocities $(\dot{x})_d$ and $(\dot{y})_d$. Using these values we define $In_{\dot{x}}$ and $In_{\dot{y}}$ as

$$In_{\dot{x}} = (\dot{x})_d - \dot{x}_A, \qquad In_{\dot{y}} = (\dot{y})_d - \dot{y}_A.$$

These are passed through PD controllers, and the outputs are called $Out_{\dot{x}}$ and $Out_{\dot{y}}$. One then defines

$$\begin{bmatrix} In_\theta \\ In_\phi \end{bmatrix} = \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T + Out_{\dot{x}} \\ \ddot{y}_T + Out_{\dot{y}} \end{bmatrix} - \begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}.$$

These values are then passed through PD controllers, and the outputs are called $Out_\theta$ and $Out_\phi$. One then defines

$$\begin{bmatrix} In_{\dot{\theta}} \\ In_{\dot{\phi}} \end{bmatrix} = \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \frac{\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} - \begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix}.$$
The values $In_{\dot{\theta}}$ and $In_{\dot{\phi}}$ are passed through the last pair of PI controllers, and the outputs are called $Out_{\dot{\theta}}$ and $Out_{\dot{\phi}}$. One defines

$$\begin{aligned} \begin{bmatrix} U_2' \\ U_3' \end{bmatrix} ={}& \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddddot{x}_T \\ \ddddot{y}_T \end{bmatrix} + \frac{2\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} \\ &+ \frac{U_4}{gI_{zz}}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \frac{\dot{\psi}_A^2}{g}\begin{bmatrix} -\cos\psi_A & -\sin\psi_A \\ -\sin\psi_A & \cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix}. \end{aligned}$$

The inputs U2 and U3 for the pitch and roll are then given by

$$U_2 = I_{yy}U_2' + (I_{xx} - I_{zz})\,\dot{\phi}_A\dot{\psi}_A, \qquad U_3 = I_{xx}U_3' + (I_{zz} - I_{yy})\,\dot{\theta}_A\dot{\psi}_A.$$

We remark that the control inputs U2, U3, U4 are designed to ensure that $\ddot{\theta}_A = U_2'$, $\ddot{\phi}_A = U_3'$ and $\ddot{\psi}_A = U_4'$, which is the main reason behind their design.
Before we proceed to the next section, where the meaning of the above controller is explained more carefully, we give an intuitive description of what the controller does and also discuss the initial desired translational velocities and yaw rate fed into the system. We have the following desired velocities and rate:

$$\begin{aligned} (\dot{x})_d &= \dot{x}_T + \lambda_x h(x_T - x_A) \\ (\dot{y})_d &= \dot{y}_T + \lambda_y h(y_T - y_A) \\ (\dot{z})_d &= \lambda_z h(z_T - z_A) \\ (\dot{\psi})_d &= \lambda_\psi(\psi_T - \psi_A), \end{aligned} \qquad (3.1)$$
where $h(x) = \frac{x}{1+|x|}$. The first two equations should be understood as follows: the first term is necessary to match the target velocity, while the second one drives the displacement to 0. In fact, we observe that h(x)x ≥ 0, with equality if and only if x = 0; thus the second term increases the velocity precisely when the target is ahead of the UAV and decreases it when it is behind. The reason we choose the function h(xT − xA), as opposed to what is more common in the literature, just xT − xA, is that h is a bounded function. This ensures that the value we pass is not too large, preventing aggressive maneuvers of the quadrotor. The latter is especially important, since low velocity commands lead to lower values of the pitch and roll, which is consistent with the assumptions we made for the linearized system. Similarly, in the z direction we do not want to change the thrust too much, as that breaks the near-hover condition we assume. We remark that near 0 the function h(x) looks like x; however, its boundedness means that it acts like a saturation function for the input.
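As an illustration, the desired velocities and rate of (3.1) are cheap to compute. The sketch below (function names are ours; the λ gains are the values used later in Chapter 4) also checks the saturation property of h:

```python
def h(x):
    """Bounded, odd, saturation-like function used in (3.1)."""
    return x / (1 + abs(x))

def desired_rates(target, quad, lam_x=2.0, lam_y=2.0, lam_z=1.0, lam_psi=1.0):
    """Desired velocities and yaw rate from equation (3.1).
    target = (xT, yT, xTd, yTd, zT, psiT); quad = (xA, yA, zA, psiA)."""
    xT, yT, xTd, yTd, zT, psiT = target
    xA, yA, zA, psiA = quad
    xd_des = xTd + lam_x * h(xT - xA)
    yd_des = yTd + lam_y * h(yT - yA)
    zd_des = lam_z * h(zT - zA)
    psid_des = lam_psi * (psiT - psiA)
    return xd_des, yd_des, zd_des, psid_des

# h saturates: even a huge position error commands at most lam_x extra speed.
assert abs(h(1000.0)) < 1.0 and abs(h(-1000.0)) < 1.0
# Near zero, h(x) behaves like x.
assert abs(h(0.01) - 0.01) < 1e-3
```
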
3.2 Induced DEs
The key to understanding the proposed desired values is to see that they induce particular differential equations for the horizontal differences xA − xT and yA − yT. We begin by recalling equations (2.4), differentiating the second one once and the first one twice:

$$\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} = g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} \iff \begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} = \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} \qquad (3.2)$$
$$\begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix} = \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{x}_A \\ \dddot{y}_A \end{bmatrix} + \frac{\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} \qquad (3.3)$$
$$\begin{aligned} \begin{bmatrix} \ddddot{x}_A \\ \ddddot{y}_A \end{bmatrix} ={}& g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{\theta}_A \\ \ddot{\phi}_A \end{bmatrix} + 2\dot{\psi}_A g\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix} \\ &+ \ddot{\psi}_A g\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} + \dot{\psi}_A^2 g\begin{bmatrix} -\cos\psi_A & -\sin\psi_A \\ -\sin\psi_A & \cos\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}. \end{aligned} \qquad (3.4)$$
Equation (3.4) shows that, in a sense, by controlling $\ddot{\theta}_A$ and $\ddot{\phi}_A$ one controls $\ddddot{x}_A$ and $\ddddot{y}_A$. The latter is especially clear when ψA ≡ 0, in which case the dependence is explicitly given by $\ddddot{x}_A = g\ddot{\theta}_A$ and $\ddddot{y}_A = g\ddot{\phi}_A$. Since our problem is that of target tracking, we want to design the controllers so that the induced differential equations for xA and yA force xA − xT and yA − yT to both converge to 0 as time goes to infinity. As will be seen in the next chapter, this will indeed happen if we define U1, U2, U3 and U4 as in the previous section. Recall that picking these control values forces $\ddot{\theta}_A = U_2'$, $\ddot{\phi}_A = U_3'$ and $\ddot{\psi}_A = U_4'$, with $U_2'$, $U_3'$ and $U_4'$ defined as in the previous section. We recall

$$\begin{aligned} \begin{bmatrix} U_2' \\ U_3' \end{bmatrix} ={}& \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddddot{x}_T \\ \ddddot{y}_T \end{bmatrix} + \frac{2\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} \\ &+ \frac{U_4}{gI_{zz}}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \frac{\dot{\psi}_A^2}{g}\begin{bmatrix} -\cos\psi_A & -\sin\psi_A \\ -\sin\psi_A & \cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix}. \end{aligned}$$
Substituting this equation into equation (3.4), and also putting $U_4 = I_{zz}U_4' = I_{zz}\ddot{\psi}_A$, we get

$$\begin{aligned} \begin{bmatrix} \ddddot{x}_A \\ \ddddot{y}_A \end{bmatrix} ={}& \begin{bmatrix} \ddddot{x}_T \\ \ddddot{y}_T \end{bmatrix} + 2\dot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \ddot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \dot{\psi}_A^2\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} \\ &+ g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} + 2\dot{\psi}_A g\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \dot{\theta}_A \\ \dot{\phi}_A \end{bmatrix} \\ &+ \ddot{\psi}_A g\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} + \dot{\psi}_A^2 g\begin{bmatrix} -\cos\psi_A & -\sin\psi_A \\ -\sin\psi_A & \cos\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}. \end{aligned}$$
We may now substitute $\dot{\theta}_A$ and $\dot{\phi}_A$ from equation (3.3) in the above expression to get

$$\begin{aligned} \begin{bmatrix} \ddddot{x}_A \\ \ddddot{y}_A \end{bmatrix} ={}& \begin{bmatrix} \ddddot{x}_T \\ \ddddot{y}_T \end{bmatrix} + 2\dot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \ddot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \dot{\psi}_A^2\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} \\ &+ g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} + 2\dot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{x}_A \\ \dddot{y}_A \end{bmatrix} + 2\dot{\psi}_A^2\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} \\ &+ \ddot{\psi}_A g\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix} + \dot{\psi}_A^2 g\begin{bmatrix} -\cos\psi_A & -\sin\psi_A \\ -\sin\psi_A & \cos\psi_A \end{bmatrix}\begin{bmatrix} \theta_A \\ \phi_A \end{bmatrix}. \end{aligned}$$
Substituting θA and φA from equation (3.2) in the above, we get:

$$\begin{aligned} \begin{bmatrix} \ddddot{x}_A \\ \ddddot{y}_A \end{bmatrix} ={}& \begin{bmatrix} \ddddot{x}_T \\ \ddddot{y}_T \end{bmatrix} + 2\dot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \ddot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + \dot{\psi}_A^2\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} \\ &+ g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} + 2\dot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{x}_A \\ \dddot{y}_A \end{bmatrix} + 2\dot{\psi}_A^2\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} \\ &+ \ddot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} - \dot{\psi}_A^2\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix}. \end{aligned}$$
If we set Δx = xA − xT and Δy = yA − yT, then the above becomes

$$\begin{bmatrix} \ddddot{\Delta x} \\ \ddddot{\Delta y} \end{bmatrix} = 2\dot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} + \ddot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} + \dot{\psi}_A^2\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} + g\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix}. \qquad (3.5)$$
Next we assume that the PI controllers whose outputs are $Out_{\dot{\theta}}$ and $Out_{\dot{\phi}}$ are just proportional controllers, with parameter $P_{\dot{\theta}} = P_{\dot{\phi}} = A$. This means that

$$\begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} = A\begin{bmatrix} In_{\dot{\theta}} \\ In_{\dot{\phi}} \end{bmatrix}.$$
Substituting the formula for $In_{\dot{\theta}}$ and $In_{\dot{\phi}}$, and $\dot{\theta}_A$ and $\dot{\phi}_A$ from equation (3.3), we get

$$\begin{aligned} \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} ={}& \frac{A}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{x}_T \\ \dddot{y}_T \end{bmatrix} + \frac{A\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T \\ \ddot{y}_T \end{bmatrix} + A\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} \\ &- \frac{A}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{x}_A \\ \dddot{y}_A \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix} \\ ={}& -\frac{A}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} + A\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix}. \end{aligned}$$

Next we assume that the PD controllers whose outputs are $Out_\theta$ and $Out_\phi$ are just proportional controllers, with parameter $P_\theta = P_\phi = B$. This means that

$$\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} = B\begin{bmatrix} In_\theta \\ In_\phi \end{bmatrix}.$$
Substituting the formula for $In_\theta$ and $In_\phi$, and θA and φA from equation (3.2), we get

$$\begin{bmatrix} Out_\theta \\ Out_\phi \end{bmatrix} = B\left(\frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_T + Out_{\dot{x}} \\ \ddot{y}_T + Out_{\dot{y}} \end{bmatrix} - \frac{1}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{x}_A \\ \ddot{y}_A \end{bmatrix}\right).$$
We may now substitute this above to get

$$\begin{aligned} \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} ={}& -\frac{A}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} \\ &- \frac{AB}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} + \frac{AB}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} Out_{\dot{x}} \\ Out_{\dot{y}} \end{bmatrix}. \end{aligned}$$
Next we assume that the PD controllers whose outputs are $Out_{\dot{x}}$ and $Out_{\dot{y}}$ are just proportional controllers, with parameter $P_{\dot{x}} = P_{\dot{y}} = C$. This means that

$$\begin{bmatrix} Out_{\dot{x}} \\ Out_{\dot{y}} \end{bmatrix} = C\left(\begin{bmatrix} (\dot{x})_d \\ (\dot{y})_d \end{bmatrix} - \begin{bmatrix} \dot{x}_A \\ \dot{y}_A \end{bmatrix}\right).$$
Substituting the formula for $(\dot{x})_d$ and $(\dot{y})_d$, we get

$$\begin{bmatrix} Out_{\dot{x}} \\ Out_{\dot{y}} \end{bmatrix} = C\left(\begin{bmatrix} -\lambda_x h(\Delta x) + \dot{x}_T \\ -\lambda_y h(\Delta y) + \dot{y}_T \end{bmatrix} - \begin{bmatrix} \dot{x}_A \\ \dot{y}_A \end{bmatrix}\right).$$
Substituting the latter above, we get

$$\begin{aligned} \begin{bmatrix} Out_{\dot{\theta}} \\ Out_{\dot{\phi}} \end{bmatrix} ={}& -\frac{A}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} - \frac{A\dot{\psi}_A}{g}\begin{bmatrix} -\sin\psi_A & \cos\psi_A \\ \cos\psi_A & \sin\psi_A \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} \\ &- \frac{AB}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} - \frac{ABC}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \dot{\Delta x} \\ \dot{\Delta y} \end{bmatrix} \\ &- \frac{ABC}{g}\begin{bmatrix} \cos\psi_A & \sin\psi_A \\ \sin\psi_A & -\cos\psi_A \end{bmatrix}\begin{bmatrix} \lambda_x h(\Delta x) \\ \lambda_y h(\Delta y) \end{bmatrix}. \end{aligned}$$
We can finally substitute the above into equation (3.5) and get

$$\begin{aligned} \begin{bmatrix} \ddddot{\Delta x} \\ \ddddot{\Delta y} \end{bmatrix} ={}& 2\dot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} + \ddot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} + \dot{\psi}_A^2\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} - A\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} - A\dot{\psi}_A\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} \\ &- AB\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} - ABC\begin{bmatrix} \dot{\Delta x} \\ \dot{\Delta y} \end{bmatrix} - ABC\begin{bmatrix} \lambda_x h(\Delta x) \\ \lambda_y h(\Delta y) \end{bmatrix}. \end{aligned}$$

Setting D = λx = λy and rearranging the above, we get:

$$\begin{aligned} \begin{bmatrix} \ddddot{\Delta x} \\ \ddddot{\Delta y} \end{bmatrix} ={}& -A\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} - AB\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} - ABC\begin{bmatrix} \dot{\Delta x} \\ \dot{\Delta y} \end{bmatrix} - ABCD\begin{bmatrix} h(\Delta x) \\ h(\Delta y) \end{bmatrix} \\ &+ \left(\dot{\psi}_A^2 I + \left(-\ddot{\psi}_A - A\dot{\psi}_A\right)\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\right)\begin{bmatrix} \ddot{\Delta x} \\ \ddot{\Delta y} \end{bmatrix} + 2\dot{\psi}_A\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \dddot{\Delta x} \\ \dddot{\Delta y} \end{bmatrix} \end{aligned} \qquad (3.6)$$
The remaining differential systems, for Δz = zA − zT and Δψ = ψA − ψT, are given as follows (we recall that zT and ψT are assumed constant, so their derivatives vanish):

$$\frac{d}{dt}\begin{bmatrix} \Delta\psi \\ \dot{\Delta\psi} \end{bmatrix} = \begin{bmatrix} \dot{\Delta\psi} \\ -a_\psi\Delta\psi - b_\psi\dot{\Delta\psi} \end{bmatrix}, \qquad (3.7)$$

where $a_\psi = \frac{\lambda_\psi P_\psi}{1+D_\psi}$ and $b_\psi = \frac{P_\psi + \lambda_\psi D_\psi}{1+D_\psi}$ are positive constants, and $P_\psi$, $D_\psi$ denote the PD constants for the yaw controller.
$$\frac{d}{dt}\begin{bmatrix} \Delta z \\ \dot{\Delta z} \end{bmatrix} = \begin{bmatrix} \dot{\Delta z} \\ -a_z\frac{\Delta z}{1+|\Delta z|} - b_z\dot{\Delta z} - c_z\frac{\dot{\Delta z}}{(1+|\Delta z|)^2} \end{bmatrix}, \qquad (3.8)$$

where $a_z = \frac{\lambda_z P_z}{1+D_z}$, $b_z = \frac{P_z}{1+D_z}$ and $c_z = \frac{\lambda_z D_z}{1+D_z}$ are positive constants, and $P_z$, $D_z$ denote
the PD constants for the altitude controller. A block diagram of the control method is provided in Figure 3.1. The stability of the system presented in (3.8) can be treated separately from the other equations, because of our assumptions. However, the systems (3.6) and (3.7) are coupled and must be treated simultaneously. Nevertheless, we will show that the system is stable for certain values of the proportional control constants A, B, C, D in (3.6). The precise statements and proofs are given in the next sections.
Figure 3.1: Quadrotor Tracking Control Structure
3.3 Stability of Height Control
In this section we analyze the stability of the altitude control. From Section 3.2, equation (3.8), we have that the differential equation governing the height is given by

$$\frac{d}{dt}\begin{bmatrix} \Delta z \\ \dot{\Delta z} \end{bmatrix} = \begin{bmatrix} \dot{\Delta z} \\ -a_z\frac{\Delta z}{1+|\Delta z|} - b_z\dot{\Delta z} - c_z\frac{\dot{\Delta z}}{(1+|\Delta z|)^2} \end{bmatrix}, \qquad (3.9)$$

where we recall that az, bz, cz are positive constants and Δz = zA − zT is the difference between the quadrotor height and the target height that must be reached. We will show that the above is globally asymptotically stable, converging to 0 as time goes to infinity. Consider the following Lyapunov function candidate

$$V(x, y) = \frac{1}{2}y^2 + (a_z + A)\left(|x| - \log(1+|x|)\right) + B\frac{xy}{1+|x|},$$
where the constants A, B are to be chosen later. In order to demonstrate that the solutions of the differential equation are globally asymptotically stable, it is sufficient to establish the following properties for V (see Theorem 3.2 in [20]):

1. V(x, y) ≥ 0, with equality if and only if x = y = 0.

2. $\dot{V}(\Delta z, \dot{\Delta z}) \le 0$, with equality if and only if $\Delta z = \dot{\Delta z} = 0$.

3. V(x, y) → ∞ as $\sqrt{x^2 + y^2} \to \infty$.
We first start with Property 2 and observe that

$$\begin{aligned} \dot{V}(\Delta z, \dot{\Delta z}) ={}& -a_z\frac{\Delta z\,\dot{\Delta z}}{1+|\Delta z|} - b_z\dot{\Delta z}^2 - c_z\frac{\dot{\Delta z}^2}{(1+|\Delta z|)^2} + (a_z + A)\frac{\Delta z\,\dot{\Delta z}}{1+|\Delta z|} \\ &+ B\frac{\dot{\Delta z}^2}{(1+|\Delta z|)^2} - Ba_z\frac{\Delta z^2}{(1+|\Delta z|)^2} - Bb_z\frac{\Delta z\,\dot{\Delta z}}{1+|\Delta z|} - Bc_z\frac{\Delta z\,\dot{\Delta z}}{(1+|\Delta z|)^3}. \end{aligned}$$

We now choose A = Bbz, and upon cancellation and regrouping of terms the above becomes

$$\dot{V}(\Delta z, \dot{\Delta z}) = -b_z\dot{\Delta z}^2 - c_z\frac{\dot{\Delta z}^2}{(1+|\Delta z|)^2} + B\frac{\dot{\Delta z}^2}{(1+|\Delta z|)^2} - Ba_z\frac{\Delta z^2}{(1+|\Delta z|)^2} - Bc_z\frac{\Delta z\,\dot{\Delta z}}{(1+|\Delta z|)^3}.$$
The goal will be to choose B small enough and positive. From $s^2 + t^2 \ge 2|st|$ we know that

$$\left|Bc_z\frac{\Delta z\,\dot{\Delta z}}{(1+|\Delta z|)^3}\right| \le \frac{Ba_z}{2}\frac{\Delta z^2}{(1+|\Delta z|)^4} + \frac{Bc_z^2}{2a_z}\frac{\dot{\Delta z}^2}{(1+|\Delta z|)^2}.$$

Consequently, we see that

$$\dot{V}(\Delta z, \dot{\Delta z}) \le -\left(b_z + \frac{c_z - B}{(1+|\Delta z|)^2} - \frac{Bc_z^2}{2a_z}\frac{1}{(1+|\Delta z|)^2}\right)\dot{\Delta z}^2 - \left(Ba_z - \frac{Ba_z}{2}\frac{1}{(1+|\Delta z|)^2}\right)\frac{\Delta z^2}{(1+|\Delta z|)^2}.$$

So if we choose B small enough and positive, we will have

$$\dot{V}(\Delta z, \dot{\Delta z}) \le -c_1\dot{\Delta z}^2 - c_2\frac{\Delta z^2}{(1+|\Delta z|)^2},$$
for some positive constants c1, c2. This proves Property 2.
Property 3 is easy to see. Indeed, since by definition A > 0 and |x| − log(1+|x|) ≥ 0, we have

$$V(x, y) \ge \frac{1}{2}y^2 - B|y|,$$

which goes to ∞ along any sequence (xn, yn) with |yn| → ∞. Thus we only need to consider sequences (xn, yn) along which |yn| remains bounded, say by M. In the latter case, |xn| is forced to go to infinity, and so

$$V(x_n, y_n) \ge (a_z + A)\left(|x_n| - \log(1+|x_n|)\right) - BM \to \infty,$$

where we used that |x| − log(1+|x|) → ∞ as |x| → ∞. This proves Property 3. Finally, Property 1 follows from Lemma 2. Indeed, if we choose B sufficiently small so that 2(az + A) ≥ B², then the conclusion of Lemma 2 is precisely Property 1. Hence Properties 1, 2, 3 are all satisfied, and by Theorem 3.2 in [20] the system is globally asymptotically stable, as claimed.
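Although the proof above is analytic, the conclusion is easy to probe numerically. The sketch below integrates (3.9) for illustrative positive constants chosen by us (not values from the thesis) and checks that the altitude error decays, assuming SciPy is available:

```python
from scipy.integrate import solve_ivp

az, bz, cz = 1.0, 0.5, 0.05   # illustrative positive constants

def height_error(t, s):
    """Closed-loop altitude-error dynamics, equation (3.9)."""
    dz, dzd = s
    return [dzd,
            -az * dz / (1 + abs(dz)) - bz * dzd - cz * dzd / (1 + abs(dz)) ** 2]

# Start 8 m away from the desired height, at rest.
sol = solve_ivp(height_error, (0, 200), [8.0, 0.0], rtol=1e-8, atol=1e-10)
dz_end, dzd_end = sol.y[0, -1], sol.y[1, -1]
# Global asymptotic stability: both the error and its rate decay toward zero.
assert abs(dz_end) < 0.02 and abs(dzd_end) < 0.02
```
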
3.4 Cascade Systems
In Section 3.2 we derived a system of differential equations, whose stability we now analyze. We phrase our problem in the language of cascade systems, which allows us to reduce the question of global asymptotic stability to verifying three conditions. These conditions are subsequently proved in the appendix at the end of the thesis. We start with the definition of cascade systems.
Suppose we have the following nonlinear cascade system

$$\begin{aligned} \dot{x} &= f(x, \omega) \\ \dot{\omega} &= s(\omega), \end{aligned} \qquad (3.10)$$

where $x(t) \in \mathbb{R}^n$ and $\omega(t) \in \mathbb{R}^m$. We assume that $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ and $s : \mathbb{R}^m \to \mathbb{R}^m$ are $C^1$ vector fields. We assume that f(0, 0) = 0 and s(0) = 0, so that (x, ω) = (0, 0) is an equilibrium of the cascade system. In [31] the following result was proved:
Theorem 1. Suppose the following assumptions are satisfied:

1. x = 0 is a globally asymptotically stable solution of the system $\dot{x} = f(x, 0)$;

2. ω = 0 is a globally asymptotically stable solution of the system $\dot{\omega} = s(\omega)$;

3. for any initial condition (x(0), ω(0)), the trajectories (x(t), ω(t)) remain bounded for t > 0.

Then (0, 0) is a globally asymptotically stable solution of the cascade system (3.10).
What the above result roughly says is that if the control law s forces ω to converge
to 0, and with ω ≡ 0 the control law f forces x to converge to 0, then the whole system
(x, ω) converges to 0 for any initial condition. The reason we are interested in the above
result is that equations (3.6) and (3.7) can be described as a cascade system. Utilizing
this structure will allow us to establish global asymptotic stability for the entire system,
by verifying the conditions in the above theorem.
The first task is to phrase our problem in the language of cascade systems. Let $\omega = (\omega_1, \omega_2) \in \mathbb{R}^2$ and $x = (x_1, x_2, \ldots, x_8) \in \mathbb{R}^8$. We let $s : \mathbb{R}^2 \to \mathbb{R}^2$ be given by $s(\omega_1, \omega_2) = [\omega_2,\; -a_\psi\omega_1 - b_\psi\omega_2]^T$.
From equation (3.7) and its derivative we know that

$$\frac{d}{dt}\begin{bmatrix} \Delta\psi \\ \dot{\Delta\psi} \end{bmatrix} = \begin{bmatrix} \dot{\Delta\psi} \\ -a_\psi\Delta\psi - b_\psi\dot{\Delta\psi} \end{bmatrix}.$$

We thus see that $\omega = (\Delta\psi, \dot{\Delta\psi})$ is a solution to the differential equation

$$\dot{\omega} = s(\omega).$$
Next, let $f : \mathbb{R}^8 \times \mathbb{R}^2 \to \mathbb{R}^8$ be given by

$$f(x_1, x_2, \ldots, x_8, \omega_1, \omega_2)_i = x_{i+1} \quad \text{for } i = 1, 2, 3, 5, 6, 7,$$

and

$$\begin{aligned} \begin{bmatrix} f(x_1, \ldots, x_8, \omega_1, \omega_2)_4 \\ f(x_1, \ldots, x_8, \omega_1, \omega_2)_8 \end{bmatrix} ={}& -A\begin{bmatrix} x_4 \\ x_8 \end{bmatrix} - AB\begin{bmatrix} x_3 \\ x_7 \end{bmatrix} - ABC\begin{bmatrix} x_2 \\ x_6 \end{bmatrix} - ABCD\begin{bmatrix} h(x_1) \\ h(x_5) \end{bmatrix} \\ &+ \left(\omega_2^2 I + \left(a_\psi\omega_1 + b_\psi\omega_2 - A\omega_2\right)\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\right)\begin{bmatrix} x_3 \\ x_7 \end{bmatrix} + 2\omega_2\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_4 \\ x_8 \end{bmatrix}. \end{aligned}$$
Then, in view of equations (3.6) and (3.7), we see that $x = (\Delta x, \dot{\Delta x}, \ddot{\Delta x}, \dddot{\Delta x}, \Delta y, \dot{\Delta y}, \ddot{\Delta y}, \dddot{\Delta y})$, $\omega = (\Delta\psi, \dot{\Delta\psi})$ is a solution to the cascade system

$$\dot{x} = f(x, \omega), \qquad \dot{\omega} = s(\omega).$$

In reconciling the above system with equations (3.6) and (3.7), we used that $\dot{\Delta\psi} = \dot{\psi}_A$ and $\ddot{\Delta\psi} = \ddot{\psi}_A$, since ψT is assumed constant. Since h is $C^1$, we conclude that f and s are $C^1$ vector fields. In addition, it is clear that f(0, 0) = 0 and s(0) = 0. Thus the above is indeed of the form described in (3.10). It then follows from Theorem 1 that, in order to prove the global asymptotic stability of our system, it is sufficient to verify conditions 1 through 3. This will be done in the appendix.
Chapter 4
Position Based Tracking Law
In this chapter we develop a position-based visual servoing (PBVS) model for target tracking. The proposed model is implemented for both the full and approximation models, and their performance is analyzed through several simulations.
4.1 PBVS Navigation
We will consider the following problem. Suppose that a camera is mounted on the quadrotor and is fixed, so that its orientation changes with the orientation of the UAV. We assume that all relevant camera parameters are known. There is a GMT, observed with the camera, which appears as a point in the image frame, and measurements of its coordinates are available for all time. In addition, we assume that the quadrotor has knowledge of its position and velocity in the inertial frame, as well as its orientation (expressed through the Euler angles) and the rate at which the orientation changes (expressed through the rates of pitch, roll and yaw). Suppose that a constant desired height zT and orientation ψT are given. Based on this information, we wish to construct a PBVS model for the quadrotor for tracking the GMT and reaching the desired height and yaw. We understand the above problem as forcing the values |xA − xT|, |yA − yT|, |zA − zT| and |ψA − ψT| to decrease to 0 as time becomes large, starting from any initial condition.
Based on the above formulation, the PBVS navigation algorithm can be split into two parts. In the first part, visual data is analyzed to obtain information about the target's pose. Subsequently, this information is used to create inputs for the control design we developed in the previous chapter.
4.1.1 Estimation of GMT’s Pose
We will begin with the first part, where we use [8] as a basic reference. Since the camera is assumed to be fixed to the quadrotor, its frame coincides with the body-fixed frame of the quadrotor, except that its vertical axis points downwards. We let X = (X, Y, Z) denote the position of the target in the camera frame, and x = (x, y) its projection in the image as a 2-D point. We have

$$x = X/Z = (u - c_u)/(f\alpha), \qquad y = Y/Z = (v - c_v)/f,$$

where m = (u, v) are the coordinates of the image point in pixel units and a = (cu, cv, f, α) is the set of intrinsic camera parameters; cu and cv are the coordinates of the principal point, f is the focal length, and α is the ratio of the pixel dimensions. The camera geometry is shown in Figure 4.1.
Figure 4.1: Camera Perspective Projection Model
In our problem we assumed that the camera parameters are all known, as are u and v.
Thus the quantities X/Z and Y/Z can be readily obtained from the visual measurements.
The next task is to calculate the depth Z. In order to achieve this, we use the information
that our target is constrained to the ground (the plane corresponding to z = 0 in inertial
coordinates). Let CBI denote the rotation matrix from the inertial to the body-fixed
frame (it was given in Section 2.1). Let XI = (XI , YI , ZI) coordinates of the GMT in a
frame, centered at (xA, yA, zA) and with axes parallel to the inertial frame axes. Then
Chapter 4. Position Based Tracking Law 31
we have that
ZI = −zA and [X, Y,−Z]T = CBI ·XI,
where we used the fact that the camera and quadrotor have opposite vertical orientation.
Dividing both sides by −Z and multiplying by C−1BI we see that
C−1BI [X/Z, Y/Z, 1]T =
1
−Z[XI , YI ,−zA]T .
The left-hand side of the above equation is known from the visual measurements and the orientation of the quadrotor, and hence so is the right-hand side. In particular, we know the value zA/Z, and since zA is also known, we conclude that the depth Z can be measured in our problem. Finally, since X/Z, Y/Z and Z are all known, we can reconstruct the camera-frame position of the target (X, Y, Z) from visual data and knowledge of the camera orientation and position. The latter can now be transformed via $C_{BI}^{-1}$ to obtain the relative position of the target with respect to the quadrotor in the inertial frame. As the quadrotor's position is also known, we conclude that the target's inertial position can be reconstructed.
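The reconstruction procedure can be summarized in a short sketch (NumPy assumed; the function name, the camera values used in the check, and the sign handling are ours, following the ground-plane relation above):

```python
import numpy as np

def target_inertial_position(u, v, cam, quad_pos, C_BI):
    """Recover the GMT's inertial position from one pixel measurement,
    using the ground-plane (z = 0) constraint.
    cam = (cu, cv, f, alpha); quad_pos = (xA, yA, zA);
    C_BI is the inertial-to-body rotation matrix."""
    cu, cv, f, alpha = cam
    xA, yA, zA = quad_pos
    x_im = (u - cu) / (f * alpha)                 # X/Z from the projection model
    y_im = (v - cv) / f                           # Y/Z
    # C_BI^{-1} [X/Z, Y/Z, -1]^T = (1/Z) [XI, YI, -zA]^T
    w = C_BI.T @ np.array([x_im, y_im, -1.0])
    inv_Z = w[2] / (-zA)                          # third component yields the depth
    XI, YI = w[0] / inv_Z, w[1] / inv_Z           # relative inertial position
    return xA + XI, yA + YI

# Level hover at 10 m with a pinhole camera (principal point (320, 240),
# focal length 500 px, square pixels): a target 3 m ahead and 4 m to the
# side projects to pixel (470, 440); the reconstruction recovers it.
cam = (320.0, 240.0, 500.0, 1.0)
xT, yT = target_inertial_position(470.0, 440.0, cam, (0.0, 0.0, 10.0), np.eye(3))
assert np.allclose((xT, yT), (3.0, 4.0))
```
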
As will be seen later, the above procedure can be readily implemented to accurately measure the target's position when there is no noise in the image. This thesis will not consider the problem of denoising the image, although it could be addressed using various filtering techniques, such as the Kalman filter. Instead, we will try to obtain further information about the target's pose from the image data. In particular, we would like to obtain estimates of its velocity. We do the latter by directly using the measured positions. Specifically, if at time t1 the position of the target in the inertial frame was calculated to be (xT(t1), yT(t1)) and at time t2 > t1 it is (xT(t2), yT(t2)), then an approximation for the target's velocity is given by

$$v_x \approx \frac{x_T(t_2) - x_T(t_1)}{t_2 - t_1}, \qquad v_y \approx \frac{y_T(t_2) - y_T(t_1)}{t_2 - t_1}.$$
If the next measurement is made at time t3 then the target is assumed to move with
constant velocity, given by (vx, vy) in the interval (t2, t3). We will see that the latter pro-
vides a good estimate on the target’s velocity so long as it does not change too quickly.
Chapter 4. Position Based Tracking Law 32
We could try to iterate the above procedure to calculate higher-order derivatives of
the GMT’s motion; however, we run into some issues. In particular, each
measurement (even if there is no noise) comes with a rounding error, which arises from
the picture being pixelized. This error is small and does not lead to large errors when one
calculates the position or velocity. If, however, one tries to calculate higher derivatives,
this rounding error becomes sizable and leads to considerable differences between
the estimated and real values, especially when the time steps of the algorithm are small.
We will thus restrict our pose estimation for the target to finding its horizontal position
and velocities. The latter problem will be further discussed in Chapter 6.
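A rough error-propagation sketch (with assumed, illustrative numbers) shows the problem: if pixelization bounds each position measurement’s error by ε, a first difference inherits an error of up to 2ε/∆t, while a second difference inherits up to 4ε/∆t², which blows up as the time step shrinks.

```python
# Worst-case quantization error propagated through finite differences.
# eps and dt are assumed values for illustration, not taken from the thesis.
eps = 0.005   # position error bound caused by pixel rounding [m]
dt = 0.01     # algorithm time step [s]

vel_err_bound = 2 * eps / dt        # first difference (velocity estimate)
acc_err_bound = 4 * eps / dt ** 2   # second difference (acceleration estimate)
```

With these numbers the velocity estimate is off by at most 1 m/s, while the acceleration estimate can be off by as much as 200 m/s², which is why the pose estimation stops at velocities.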
4.1.2 Control Law
Once we have approximations for the target’s position and velocity, we can feed those
values into the controller described in Section 2. We remark that in that section we
assumed that we knew not only xT , yT , ẋT , ẏT , but also three higher derivatives of the
target’s motion. As already explained, finding estimates for the latter is difficult and
practically infeasible. Thus we will simply set those values to 0 whenever they are not
available.
The particular values we choose for the PID controllers are as follows (see Section 2
for notation):
Table 4.1: PID controller gains

C.C.   P(x)d  I(x)d  D(x)d  P(y)d  I(y)d  D(y)d  P(θ̇)d  I(θ̇)d  D(θ̇)d  P(φ̇)d  I(φ̇)d  D(φ̇)d
Gain     2      0      0      2      0      0     30       0       5      30       0       5

C.C.   P(θ)d  I(θ)d  D(θ)d  P(φ)d  I(φ)d  D(φ)d  P(z)d  I(z)d  D(z)d  P(ψ)d  I(ψ)d  D(ψ)d
Gain     1      0      0      1      0      0     0.5      0    0.05     10     0.1     0.5
Also, we have λz = λψ = 1 and λx = λy = 2. We remark that not all of the PID controllers
are purely proportional. Specifically, we have a derivative term for controlling the roll and
pitch rates, which was introduced to reduce the overshoot of those particular loops.
In addition, we added a small integral term to the ψ controller to drive the
steady-state error to 0. The proposed controller is analyzed in the next section in a
variety of cases.
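A minimal discrete-time PID update consistent with these loops might look as follows (a sketch; the class and its structure are ours, not the thesis’s implementation):

```python
class PID:
    """Textbook discrete PID: u = P*e + I*(integral of e) + D*(de/dt)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        # accumulate the integral and back-difference the derivative
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# e.g. the ψ (yaw) loop from Table 4.1: P = 10, I = 0.1, D = 0.5
yaw_pid = PID(10.0, 0.1, 0.5)
u = yaw_pid.update(err=0.2, dt=0.01)
```

The full controller nests several such loops, feeding the output of the outer (position) loops as references to the inner (attitude) loops.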
4.2 Simulations and Results
We now examine the proposed model in four different situations. In each, we consider
four implementations: the full and the approximation model when the target pose is
estimated from visual data as described in Section 4.1 above, and when all relevant
derivatives of the target’s motion (up to and including the fourth) are directly given
(i.e. no visual measurement is performed). We have several goals.
1. Demonstrate that the approximation and full model behave similarly.
2. Demonstrate that the approximation and full model are capable of target tracking
when all information on the GMT motion is provided.
3. Demonstrate that even if partial information (i.e. only horizontal position and ve-
locity) for the GMT is known the control law still ensures successful target tracking.
For brevity we denote the four models as: PBC - the full model with all derivatives given,
PBCL - the approximation model with all derivatives given, PBVS - the full model with
visual measurements, PBVSL - the approximation model with visual measurements.
4.2.1 Stationary Target
We start with the case when the target is stationary, with position (10, 15) and we
want to reach the desired height of 20m and yaw angle π/4 rad, from the initial state
rA = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. The results are presented in Figures 4.2 and 4.3.
(a) Position in X. (b) Position in Y.
Figure 4.2: Horizontal position of the four models (Stationary target).
As can be seen from the above figures, there is hardly any difference between the behav-
ior of the four models. Moreover, they adequately satisfy the conditions of the problem,
(a) Position in Z. (b) Yaw.
Figure 4.3: Height and yaw of the four models (Stationary target).
reaching the desired horizontal position in about 10 seconds and the desired height in
about 20. The desired yaw is reached in about 5 seconds, and we see that there is very
little overshoot for any of the four measured quantities.
Finally, we present a measure of the thrust for the four models. We recall that an
important assumption about our approximation model is that the thrust should be close
to mg. As Figure 4.4a shows the latter is indeed true. Essentially, the thrust changes
initially to create upward velocity to reach the desired height, but very quickly goes
back to mg. After the quadrotor reaches the desired height around the 15th second, the
velocity is decreased (the thrust becomes less than mg) and afterwards the thrust quickly
stabilizes. This behavior is a result of the saturation properties of the function
h mentioned near the end of Section 2.3. Indeed, in Figure 4.4b we show a
simulation where h(x) = x/(1 + |x|) in the z-control is replaced by x, and as we can see the
change in thrust is much more dramatic.
(a) Thrust with function h. (b) Thrust with x.
Figure 4.4: Comparison of thrust.
4.2.2 Constant Velocity
We consider the case when the target moves with constant velocity, starting from posi-
tion (0, 0). The horizontal velocity is given by (2, 3) and we also want to reach the desired
height of 20 m and yaw angle π/4 rad, from the initial state rA = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
The results are presented in Figures 4.5 and 4.6.
(a) Position in X. (b) Position in Y.
Figure 4.5: Horizontal position of the four models (Constant velocity).
(a) Position in Z. (b) Yaw.
Figure 4.6: Height and yaw of the four models (Constant velocity).
As can be seen from the above figures, there is hardly any difference between the
behavior of the four models. Moreover, they adequately satisfy the conditions of the
problem, overcoming the target’s velocity in about 2 seconds and reaching the desired
height in about 20. The desired yaw is reached in about 5 seconds, and we see that there
is very little overshoot for any of the four measured quantities.
4.2.3 Constant Acceleration
We consider the case when the target moves with constant acceleration, starting from
rest in position (0, 0) m. The horizontal acceleration is given by (1, 2) m/s2 and we also
want to reach the desired height of 10 m and yaw angle π rad, from the initial state
rA = [−10,−10, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. The results are presented in Figures 4.7 and
4.8.
(a) Position in X. (b) Position in Y.
Figure 4.7: Horizontal position of the four models (Constant acceleration).
As can be seen from the above figures, there is hardly any difference between the
behavior of the four models. However, one can see that the models where the GMT’s
acceleration is directly fed into the system converge faster. The latter is expected; however,
it is important to observe that even the models where only the GMT’s position and
velocity are measured manage to successfully track the target. All the models adequately
satisfy the conditions of the problem, overcoming the target’s displacement, velocity, and
acceleration in about 10 seconds and reaching the desired height in about 20. The desired
yaw is reached in about 5 seconds, and we see that there is very little overshoot for any
(a) Position in Z. (b) Yaw.
Figure 4.8: Height and yaw of the four models (Constant acceleration).
of the four measured quantities.
4.2.4 Circular Motion
We finally consider the case when the target moves with constant velocity of 4 m/s around
a circle, centered at (0, 0) m and with radius 4 m. We also want to reach the desired
height of 10 m and yaw angle π/3 rad, from the initial state
rA = [−10,−10, 20, 0.3, 0.2, 0.8, 0, 0, 0, 0, 0, 0, 0]. The results are presented in Figures 4.9
and 4.10.
(a) Position in X. (b) Position in Y.
Figure 4.9: Horizontal position of the four models (Circular motion).
As can be seen from the above figures, there is a significant difference between the
behavior of the four models. One can see that the models where the GMT’s higher or-
der position derivatives are directly fed into the system converge to the target, making
displacement eventually 0. The models where only velocity and position are measured
fail to make the displacement go to 0, however they still remain at a finite distance from
the target. All the models manage to reach the desired height and yaw angle, although
we see that the full model’s yaw oscillates around the target yaw. The latter can be
explained with the fact that in the full model, the yaw rate depends on the pitch, roll
and their derivatives. We no longer have the small angle approximation, in fact θ and
φ both oscillate between −20 and 20 degrees, and their rates between −20 deg/s and
20 deg/s. The latter are of course not small, which leads to this behavior. Finally, we
observe that there is a significant difference between the full and approximation models,
which is again due to the small angle condition failing in this case. We present a bird’s
eye view of the horizontal positions for the four models in Figure 4.2.4, where the differ-
ences are especially evident.
(a) Position in Z. (b) Yaw.
Figure 4.10: Height and yaw of the four models (Circular motion).
Figure 4.11: Bird’s eye view of horizontal position.
4.3 Summary
This section presented a new position based model for target tracking, based on our
earlier control algorithm. The model enables a quadrotor to achieve a target height and
yaw, while successfully following a GMT. The position based model was implemented in
the setting of visual servoing, when only the target’s position and velocity are measured
from visual data. The performance of the tracking algorithm was validated through
numerical simulations for a large variety of situations.
Chapter 5
Image-Based Visual Servoing
5.1 Introduction
This chapter presents several image-based visual servoing (IBVS) tracking algorithms
specifically developed for quadrotor UAVs. These algorithms are inspired by the classical
visual servo control theory originally designed for fully actuated six degree of freedom
robotic systems [8], [9], [18]. The proposed algorithms use visual measurements provided
by an onboard camera to guide the quadrotor UAV to complete a desired task which
includes tracking both stationary and moving targets. Two of the tracking laws use
only image coordinates in the feedback loop without reconstructing any information in
Cartesian space about the target motion. The third law uses a combination of visual
measurements and an estimation of the target velocity in the x and y direction from
known image features. The performance of all IBVS tracking algorithms is tested through
numerical simulations.
5.2 Camera Model and Image Plane Dynamics
The camera model considered in this problem is based on the perspective projection
model described in Chapter 4, Section 4.1. When the quadrotor is moving with
translational velocity vtc = [vx vy vz]ᵀ and rotational velocity vrc = [ωx ωy ωz]ᵀ, the
dynamics of Pc with respect to the camera frame are given by:

Ẋc = −vx − ωy Zc + ωz Yc
Ẏc = −vy − ωz Xc + ωx Zc
Żc = −vz − ωx Yc + ωy Xc
To obtain the dynamics in the image plane, we first take the time derivative of the
projection equations:

ẋ = Ẋc/Zc − Xc Żc/Zc² = (Ẋc − x Żc)/Zc
ẏ = Ẏc/Zc − Yc Żc/Zc² = (Ẏc − y Żc)/Zc

Substituting the relations for Ẋc, Ẏc, and Żc into the time derivative of the projection
equations gives:

ẋ = −vx/Zc + x vz/Zc + xy ωx − (1 + x²) ωy + y ωz
ẏ = −vy/Zc + y vz/Zc + (1 + y²) ωx − xy ωy − x ωz
Let s = [x y]ᵀ and vc = [vx vy vz ωx ωy ωz]ᵀ. Then the relationship between
the time variation of the image features s and the camera velocity can be expressed in
vector form:

ṡ = Le vc

where Le is the interaction matrix:

Le = [ −1/Zc    0       x/Zc    xy        −(1 + x²)    y  ]
     [   0     −1/Zc    y/Zc    1 + y²    −xy         −x ]
The relationship between the dynamics of the point in the image plane and the cam-
era velocity is a key component in the construction of the tracking control law for the
quadrotor UAV which is described in detail in the next section.
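As a quick plain-Python sketch (ours, not thesis code), the interaction matrix for a single point and the resulting feature velocity ṡ = Le vc can be computed as:

```python
def interaction_matrix(x, y, Zc):
    # rows give (x_dot, y_dot); columns pair with (vx, vy, vz, wx, wy, wz)
    return [
        [-1.0 / Zc, 0.0, x / Zc, x * y, -(1 + x * x), y],
        [0.0, -1.0 / Zc, y / Zc, 1 + y * y, -x * y, -x],
    ]

def matvec(L, v):
    # plain matrix-vector product, row by row
    return [sum(a * b for a, b in zip(row, v)) for row in L]

# sanity check: a pure vz motion scales the feature radially in the image
s_dot = matvec(interaction_matrix(0.1, -0.2, 5.0),
               [0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
```

Here s_dot reduces to [x·vz/Zc, y·vz/Zc]: with only depth-axis motion the point drifts radially, as expected from the third column of Le.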
5.3 Classical IBVS Control Design for the Quadrotor
5.3.1 Control Law in Image Space
The main objective of the classical visual servoing control scheme is to minimize the
error between the current image coordinates of the observed target and the desired image
coordinates:
e(t) = s(t)− s∗
In this problem the target is modeled as a square in the image plane, and four points
of interest are considered when constructing the image feature vector, so that s =
[x1 y1 . . . x4 y4]ᵀ contains the set of image coordinates for each vertex of the square.
Similarly, the vector s∗ = [x∗1 y∗1 . . . x∗4 y∗4]ᵀ contains the desired image coordinates
of the square vertices. An example of the image plane view of the target at two different
quadrotor poses is shown in Figures 5.1 and 5.2.
Figure 5.1: Desired Target View in the Image Plane
Figure 5.2: Initial Target View in the Image Plane
In order to ensure that the error decays to zero exponentially fast, we require
that the error vector satisfies the following differential equation:

ė = −λe
Using the relationship between the image feature dynamics and the camera spatial velocity
derived earlier, ṡ = Le vc, where Le ∈ R8×6 is formed by stacking the interaction
matrices associated with each of the four points of interest of the target, we obtain:

Le vc = −λe

vc = [vcx vcy vcz p q r]ᵀ = −λ Le⁺ e

where Le⁺ = (Leᵀ Le)⁻¹ Leᵀ is the Moore-Penrose pseudo-inverse of Le. This method
provides a way to calculate the reference velocity expressed in the camera frame that
would drive the system to the desired state if the observed object is stationary. The
classical IBVS method assumes that we can control both the translational and rotational
motions of the camera. However, for the underactuated quadrotor UAV only four of these
states are directly controlled. In particular, as outlined in the quadrotor control section,
we can provide reference signals for the inertial velocities in the X, Y, Z directions and the yaw
rate, ψ̇. The quadrotor states θ, φ are not directly controlled; the reference signals
for these states are derived from the reference signals for the x and y inertial velocity
components by passing the desired signal through several PID feedback loops. Since the
IBVS generates the desired velocity in the body-fixed frame, the translational components
are transformed to the inertial frame and the angular velocities are transformed to Euler
angular rates as follows:

[Ẋref Ẏref Żref]ᵀ = CBI [vcx vcy vcz]ᵀ

[φ̇ref θ̇ref ψ̇ref]ᵀ = C2 [p q r]ᵀ

The final reference signal for the controlled quadrotor states is [Ẋref Ẏref Żref ψ̇ref]ᵀ.
A block diagram of the closed loop dynamics is presented in Figure 5.3.
Figure 5.3: IBVS Block Diagram
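Putting the pieces together, the classical IBVS velocity computation can be sketched as follows (using NumPy’s pseudo-inverse; the square-vertex values are illustrative, not from the thesis):

```python
import numpy as np

def interaction_matrix(x, y, Zc):
    # 2x6 interaction matrix for one image point, as derived in Section 5.2
    return np.array([
        [-1.0 / Zc, 0.0, x / Zc, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Zc, y / Zc, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(pts, pts_des, Zc, lam):
    """vc = -lam * Le+ (s - s*), with Le (8x6) stacked over the 4 vertices."""
    Le = np.vstack([interaction_matrix(x, y, Zc) for x, y in pts])
    e = (np.asarray(pts, float) - np.asarray(pts_des, float)).ravel()
    return Le, e, -lam * np.linalg.pinv(Le) @ e

# hypothetical square: the current view is larger and offset vs. the desired one
cur = [(-0.08, -0.12), (0.12, -0.12), (0.12, 0.08), (-0.08, 0.08)]
des = [(-0.05, -0.05), (0.05, -0.05), (0.05, 0.05), (-0.05, 0.05)]
Le, e, vc = ibvs_velocity(cur, des, Zc=10.0, lam=0.5)  # vc = [vx vy vz p q r]
```

Since Le vc = −λ Le Le⁺ e and Le Le⁺ projects onto the range of Le, the image error shrinks along the directions the camera motion can actually affect.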
5.3.2 Moving Target
When the observed target is moving, the relationship between the time variation of the
error (between current and desired image coordinates) and the camera velocity is
modified to take into account the unknown target motion:

ė = ṡ = Le vc + ∂e/∂t

Setting ė = −λe for an exponential decay of the error, the new control law becomes:

vc = −λ Le⁺ e − Le⁺ ∂ê/∂t

where ∂ê/∂t is an estimate of ∂e/∂t, which can be obtained using the error and velocity
components from the previous time step:

∂ê/∂t = (e(t) − e(t − ∆t))/∆t − Le vc(t − ∆t)

In general, the motion compensation term might be omitted if a larger control gain λ
is used. However, for the quadrotor system a large gain leads to an undesirable system
response with large oscillations, so adding the motion compensation term is preferred.
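The estimate above amounts to subtracting the camera-induced part of the error change from the observed change over the last step; as a small sketch (function name and numbers assumed):

```python
def estimate_de_dt(e_now, e_prev, Le_vc_prev, dt):
    """Per component: (e(t) - e(t - dt))/dt - [Le vc](t - dt)."""
    return [(en - ep) / dt - lv
            for en, ep, lv in zip(e_now, e_prev, Le_vc_prev)]

# if the observed error change exactly matches the camera-induced change,
# the estimated target-motion term comes out (numerically) zero
de_dt = estimate_de_dt([0.10, -0.05], [0.08, -0.04], [1.0, -0.5], dt=0.02)
```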
5.4 IBVS with Virtual Camera
In this section, we describe an algorithm based on applying ideas from the previous section
to a virtual camera. The latter modification aims to compensate the underactuated
property of the UAV, and improve the tracking capabilities of the algorithm.
5.4.1 Classical IBVS with Virtual Camera
The classical IBVS approach is designed for controlling 6 degrees of freedom and for tasks
in which the controlled system must adjust both its position and orientation with respect
to an observed object. Since moving target tracking applications are the primary focus
of this project, the quadrotor UAV is only required to adjust its position to complete the
desired task. Ideally, in target tracking tasks the same orientation must be maintained
and the generated reference signals for the angular velocities of the camera are zero.
However, for a quadrotor UAV the motion in the x and y direction is associated with
changes in the pitch and roll angle. The tilting of the quadrotor leads to several problems
which have not been addressed for moving target tracking. The first problem is that the
tilting might cause the target to disappear from the field of view of the camera. The
proposed solution is discussed in detail in Chapter 6. The second problem is that tilting
changes the orientation of the target with respect to the quadrotor which is interpreted
as error in image space. The IBVS control method would correct this error by generating
a counteracting signal for the angular velocity components of the camera. However, due
to the underactuated property the quadrotor is unable to match these signals and only
responds to the changes in the translational velocity. Since the uncorrected roll and pitch
angles continue to increase, the error in the image becomes larger. The IBVS control
compensates for this error by generating larger reference velocities. In the stationary tar-
get case these undesirable effects become significant when the quadrotor’s initial position
is far from the target. In the moving target case, the mismatch between the actual and
desired angular rates leads to oscillations in the quadrotor position.
To avoid the undesirable effects associated with the quadrotor tilt in the classical
IBVS control, a modification of the method is proposed that is based on virtual image
measurements. The control law for the three velocity components of the quadrotor and
the yaw rate is obtained using virtual image measurements taken from a virtual camera
that is aligned with the real camera but is not free to rotate. In particular, the virtual
camera has the same position as the real camera, but is oriented to always point down-
wards (so that its roll and pitch are both 0 for all time and its yaw is the same as that
of the quadrotor). The stable point of the IBVS algorithm corresponds to the quadrotor
hovering at some desired height directly above the target. However, if the target is
moving (especially with some non-zero acceleration), the quadrotor is unable to hover above
the target, since maintaining its horizontal speed requires non-zero roll and pitch. The
virtual camera, however, is capable of doing this, since its orientation is decoupled from
that of the quadrotor. In addition, obtaining the stable point for the virtual camera
means that the real camera, and hence the quadrotor, is directly above the target, which
agrees with our target tracking task. The controller equation is given by:

vc = −λ Lev⁺ ev − Lev⁺ ∂êv/∂t

where the error ev is the difference between the virtual and the desired image
coordinates, and the image Jacobian Lev contains the virtual image coordinates of the
target. The virtual image coordinates [xv yv]ᵀ are derived from the real coordinates
[x y]ᵀ as follows:

Xc = x Zc
Yc = y Zc

where [Xc Yc Zc]ᵀ denotes the 3D coordinates of the target in the camera frame. Then

[Xcv]   [ cosψ cosθ   cosψ sinθ sinφ − sinψ cosφ   cosψ sinθ cosφ + sinψ sinφ ] [Xc]
[Ycv] = [ sinψ cosθ   sinψ sinθ sinφ + cosψ cosφ   sinψ sinθ cosφ − cosψ sinφ ] [Yc]
[Zcv]   [ −sinθ       cosθ sinφ                    cosθ cosφ                  ] [Zc]

[xv]   [Xcv/Zcv]
[yv] = [Ycv/Zcv]
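A direct implementation of this change of coordinates (a sketch; the function name is ours):

```python
import math

def virtual_coords(x, y, Zc, phi, theta, psi):
    """Map real image coordinates (x, y) at depth Zc to the virtual camera's
    coordinates via R = Rz(psi) Ry(theta) Rx(phi), as in the text."""
    Xc, Yc = x * Zc, y * Zc
    cps, sps = math.cos(psi), math.sin(psi)
    cth, sth = math.cos(theta), math.sin(theta)
    cph, sph = math.cos(phi), math.sin(phi)
    R = [
        [cps * cth, cps * sth * sph - sps * cph, cps * sth * cph + sps * sph],
        [sps * cth, sps * sth * sph + cps * cph, sps * sth * cph - cps * sph],
        [-sth, cth * sph, cth * cph],
    ]
    Xv, Yv, Zv = (sum(r * p for r, p in zip(row, (Xc, Yc, Zc))) for row in R)
    return Xv / Zv, Yv / Zv

# with zero attitude the virtual and real coordinates coincide
xv, yv = virtual_coords(0.1, -0.2, 5.0, phi=0.0, theta=0.0, psi=0.0)
```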
5.4.2 IBVS with GMT Velocity Estimation
In this subsection, we propose a hybrid model for the virtual camera, which uses some
of the features of the IBVS method described above and the PBVS method developed
in Chapter 4. The latter model is used to improve the tracking capabilities of the IBVS
model for a moving target. In particular, the decay of the error between the position of
the target and the position of the quadrotor is realized in image space using the IBVS
scheme while the motion compensation term is based on an estimation of the real tar-
get velocity reconstructed from 3D inertial coordinates. This control law is also based
on the virtual camera model and is a combination between the PBVS and IBVS. The
estimation of the GMT inertial velocity components vx, vy is described in Section 4.1.
The process of obtaining the final reference signals for the quadrotor velocity is as follows:

1. Obtain the velocity reference in the camera frame from the IBVS virtual camera model:
vc = [vcx vcy vcz p q r]ᵀ = −λ Lev⁺ ev, where Lev and ev are the virtual interaction
matrix and the error between the virtual and desired image coordinates, respectively.

2. Transform the translational velocities to the inertial frame:
[Ẋref Ẏref Żref]ᵀ = CBI [vcx vcy vcz]ᵀ.

3. Transform the camera angular velocities to Euler angular rates:
[φ̇ref θ̇ref ψ̇ref]ᵀ = C2 [p q r]ᵀ.

4. The final quadrotor velocity references are Ẋref + vx, Ẏref + vy, Żref , and ψ̇ref .
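Step 4, the only part that differs from the plain virtual-camera IBVS, is simply a feedforward addition of the estimated GMT velocity (a sketch with assumed names):

```python
def hybrid_references(x_ref, y_ref, z_ref, psi_ref, v_target):
    """Add the estimated GMT inertial velocity (vx, vy) to the horizontal
    velocity references produced by the IBVS virtual-camera law."""
    vx, vy = v_target
    return x_ref + vx, y_ref + vy, z_ref, psi_ref

refs = hybrid_references(0.5, -0.2, 0.1, 0.05, v_target=(1.0, 0.5))
```

The feedforward term keeps the quadrotor matched to the target’s speed, so the IBVS part only has to close the remaining displacement error.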
5.5 Simulations and Results
In this section we perform numerical experiments for the proposed models. In particular,
we first compare the Classical IBVS model with our proposed model, to demonstrate the
effect of replacing the real camera measurements with virtual ones. We then examine
the performance of the hybrid model for a moving target. For brevity we denote the
three models as: CIBVS - the Classical IBVS model, VIBVS - the virtual camera model,
Hybrid - the hybrid model. The particular values we choose for the PID controllers are as
described in Section 4.1.2.
5.5.1 Stationary Target
We start with the case when the target is stationary, with position (0, 0) and the quadro-
tor starts from the initial position rA = [−1,−2, 20, π/12,−π/24,−π/6, 0, 0, 0, 0, 0, 0, 0].
The initial and desired image for the target are given in Figure 5.4. We remark that
the desired image is taken from height 10 m directly above the target with 0 pitch, yaw
and roll. We will compare the CIBVS and VIBVS models. The results are presented in
Figures 5.5, 5.6 and 5.7. From the figures we can see that both algorithms converge to
(a) Initial image. (b) Desired image.
Figure 5.4: Initial and desired images (Stationary target).
(a) Position in X. (b) Position in Y.
Figure 5.5: Horizontal position of the models (Stationary target).
(a) Position in Z. (b) Yaw.
Figure 5.6: Height and yaw of the models (Stationary target).
the desired value; however, the VIBVS model is much better behaved than the CIBVS. To
understand this difference we can look at the error function for one of the points, given
in Figure 5.8. As can be seen there, the error for the CIBVS model (in red)
is much more erratic, the reason being that the model interprets changes in pitch and
roll as errors in the image. On the other hand, since pitch and roll are discounted
in the VIBVS model, the error is much better behaved. This leads to a
smoother signal for the velocities, which in turn improves the performance of the algorithm.
(a) Pitch. (b) Roll.
Figure 5.7: Pitch and roll of the models (Stationary target).
Figure 5.8: Error for the models (Stationary target).
Nevertheless, we remark that both algorithms reach the desired image, although
in both cases the target leaves the field of vision. In the VIBVS case it only does so for a
brief period, while for the CIBVS the target is not in the FOV for a considerable portion
of the simulation. The trajectory of the target in the FOV for the two models is given
in Figure 5.9.
(a) Trajectory for VIBVS. (b) Trajectory for CIBVS.
Figure 5.9: Trajectory of the target for the models (Stationary target).
5.5.2 Moving Target
We consider the case when the target starts from position (0, 0) and the quadrotor starts
from the initial position rA = [0, 0, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]. The target moves with
constant velocity (1, 0.5) m/s for the first 15 seconds and then makes a sharp turn,
moving with the constant velocity (2,−0.5) m/s for the remainder of the simulation.
The initial and desired image for the target are given in Figure 5.10. We remark that
the desired image is taken from height 10 m directly above the target with 0 pitch, yaw
and roll. We will compare the CIBVS and Hybrid models. The results are presented in
Figures 5.11, 5.12 and 5.13. As can be seen from the figures the behavior of the two
models is very similar and they both achieve tracking of the target. The Hybrid model
appears to react slightly better to changes in the velocity and also converges slightly
faster to the target’s position.
(a) Initial image. (b) Desired image.
Figure 5.10: Initial and desired images (Moving target).
(a) Position in X. (b) Position in Y.
Figure 5.11: Horizontal position of the models (Moving target).
(a) Position in Z. (b) Yaw.
Figure 5.12: Height and yaw of the models (Moving target).
(a) Pitch. (b) Roll.
Figure 5.13: Pitch and roll of the models (Moving target).
5.6 Summary
This chapter presented several new image based visual servoing models for target track-
ing, based on our earlier control algorithm. The models enable a quadrotor to successfully
follow a GMT. The new models are inspired by the Classical IBVS model, but some are
modified to use a virtual camera, which improves the behavior. The tracking capa-
bilities can be further improved by using visual measurements to measure higher order
derivatives of the target’s motion. In particular, a Hybrid model was proposed, which ad-
ditionally measures the target’s horizontal velocity using visual data. The performance
of the different IBVS models was validated through numerical simulations in different
situations.
Chapter 6
Field of View Challenges
6.1 Introduction
In this chapter we consider the problem of keeping the target inside the field of view
(FOV) of the on-board camera, and handling cases when the target is not inside the
FOV. Since all of the image based models that we have developed depend on visual mea-
surements, it is imperative to consider ways to handle this problem. The loss of vision
of the target can come from a variety of sources in the problem we are considering. We
split the latter into three categories: outside sources, model sources and target sources.
Outside sources refers to problems that could arise with the camera (malfunctions) or
with sudden changes of the environment (sudden winds). Both of these can result in a
temporary loss of the field of vision, even though the quadrotor was doing everything to
adequately track the GMT. Model sources refers to the particular setup we have considered
for our problem. In particular, in our setting the camera is fixed to the quadrotor
and cannot change its relative orientation. Since the quadrotor changes its pitch and
roll angles to change its velocity, if those quantities become large the camera will expe-
rience a significant tilt, which might result in the target leaving the FOV. Finally, target
sources refers to the case when the target leaves the FOV because of its movement and
the inability of the UAV to keep up with the GMT’s motion. Depending on the source,
we propose different strategies for keeping the target inside the FOV.
If the target manages to leave the FOV, we propose a way for restoring vision of the
GMT. In particular, we develop an approach that relies on dead reckoning, in which case
the quadrotor uses the last available information on the target to calculate its position
and uses that information to continue the tracking. More complicated approaches involve
increasing the quadrotor’s height, which results in an increased FOV. The latter is a better
strategy when the target’s motion is the main reason vision was lost. Further discussion
of all the methods is presented in the following sections. In what follows, our description
of how the methods work will focus on the case of the PBVS model, described in Chapter
4; however the approaches can be readily adapted to the IBVS models of Chapter 5.
6.2 Keeping Target in FOV
In this section, we consider the problem of maintaining vision of the target. We will
consider the cases when the UAV starts to lose sight of the target due to the model we
have or the target’s motion. In the former case, we develop a strategy for making the
changes in attitude smaller in magnitude, while in the latter case we propose an approach
to increase the FOV and thus keep the GMT in sight.
6.2.1 Managing Attitude of UAV
As described earlier, the horizontal movement of the UAV is connected with changes in
the pitch and roll, which in turn tilt the camera. Even if the target is not moving with
a high velocity, the control that we have developed can lead to short periods when the
pitch and roll are very high (say more than 20 degrees). These short bursts in pitch
and roll are necessary for the UAV to increase its speed to match that of the target and
to adequately respond to sudden changes in the motion of the target. If the on-board
camera has a small angle of view, say 30 degrees, then even if the UAV is directly above
the target, a tilt of more than 15 degrees results in loss of vision.
Signal Saturation
In order to address the above situation, we would like to force the pitch and roll of the
quadrotor to not exceed a certain threshold. We should remark that the pitch and roll of
the quadrotor are related to its acceleration, thus setting upper and lower bounds would
not affect the maximal velocity, but rather the maximal acceleration. In this sense, it
might take longer for the UAV to reach the GMT, but the maneuver is made less aggres-
sive and vision might be maintained.
Unfortunately, in our model we do not control the quadrotor’s pitch and roll directly,
but their second derivatives. Nevertheless, the particular structure of our control law
readily suggests a way to change our system and achieve the desired behavior. Specifically,
we can set thresholds for the desired pitch and roll: θd and φd, and pass those signals
Figure 6.1: Saturation Function.
through a saturation function, before passing them through the next PID controller in
the closed loop nested system. In order to reduce chattering we use a smoothed out
saturation function, which is given by
f(x) = { x                                           if x ∈ [−a, a]
       { A1(x − b)³ + B1(x − b)² + C1(x − b) + D1    if x ∈ (a, b)
       { A2(x + b)³ + B2(x + b)² + C2(x + b) + D2    if x ∈ (−b, −a)       (6.1)
       { b                                           if x ≥ b
       { −b                                          if x ≤ −b,

where 0 < a < b are constants and A1 = A2 = −1/(a − b)², B1 = 2/(a − b) = −B2,
C1 = C2 = 0, D1 = b = −D2. A plot of this function when b = 1 and a = 0.8 is given
in Figure 6.1. We remark that the function is essentially linear on (−a, a), equal to
−b on (−∞, −b] and to b on [b, ∞), and in between these intervals it is polynomially
splined so that f is C¹ on all of R. The effect of the proposed saturation on our system is
difficult to analyze directly from the equations. In general, we believe that if the target’s
acceleration is below a certain threshold, one can still successfully track it even if θd and
φd are saturated, however the latter is difficult to prove. We thus verify the performance
of this method only through simulations, given below.
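For reference, a direct implementation of the smoothed saturation (6.1) is given below (assuming, for C¹ continuity, that the negative spline branch is centered at x + b):

```python
def smooth_sat(x, a, b):
    """Smoothed saturation: identity on [-a, a], constant +/-b outside
    [-b, b], and C1 cubic splines in between (requires 0 < a < b)."""
    A1 = -1.0 / (a - b) ** 2   # = A2
    B1 = 2.0 / (a - b)         # = -B2
    if -a <= x <= a:
        return x
    if a < x < b:
        d = x - b
        return A1 * d ** 3 + B1 * d ** 2 + b          # C1 = 0, D1 = b
    if -b < x < -a:
        d = x + b
        return A1 * d ** 3 - B1 * d ** 2 - b          # B2 = -B1, D2 = -b
    return b if x >= b else -b

# with a = 0.8, b = 1.0 (the values plotted in Figure 6.1), the function
# passes through (0.8, 0.8) with unit slope and flattens out at (1.0, 1.0)
```

In the controller, the desired pitch and roll signals are passed through smooth_sat before the next PID loop, which bounds the attitude references without introducing chattering.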
Simulations
We will use the PBVS model for the analysis of the signal saturation method, with the
same PID coefficients as given in Chapter 4. We denote the model with no saturation by
‘PBVS’ and that with saturation by ‘PBVS Sat’. The particular values we choose for the
saturation function are a = 6π/180 and b = 8π/180. We first consider the case when the
target starts from (0, 0) and starts moving with a constant speed of 1 m/s in the positive
x direction. The quadrotor has initial conditions rA = [0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
The results are presented in Figures 6.2 and 6.3.
(a) X Position for models. (b) X velocity for models.
Figure 6.2: Position and velocity for models (Signal saturation).
As can be seen in Figure 6.2, the saturation does not prevent the UAV from adequately
tracking the GMT. We observe that the velocity increases more slowly because the saturation
function prevents the model from attaining high pitch and thus high accelera-
tions. In addition, the slower increase causes the UAV to fall behind the GMT, and so a
higher velocity is subsequently needed to overcome the displacement. In Figure 6.3 we
can see that the pitch for the non-saturated model increases to above 15 degrees, which
causes the GMT to leave the FOV (in our case the FOV is given by the box between 0
and 500 pixels in x and y). The latter is mitigated with the saturation, which keeps the
pitch below 10 degrees and consistently keeps the target in the camera’s FOV.
We now explain the meaning of the quantities a and b, which we have chosen for
our model. The value of b represents the maximum pitch (or roll) that we allow for our
model, while a represents the magnitude below which the angle is considered acceptable
and thus not altered. To illustrate the latter we perform a small experiment with the
same setup as above, except that the GMT’s velocity is changed to 3 m/s. The result is
given in Figure 6.4.
(a) Pitch for models. (b) Camera trajectories for models.
Figure 6.3: Trajectory of the target for the models (Signal saturation).
As can be seen from Figure 6.4, the pitch of PBVS Sat initially overshoots to 10 degrees and
then quickly drops to 8 degrees, the value we set for b. It stays there until the quadrotor
reaches the desired velocity, at which point the pitch decreases and stabilizes to 0. This is
the desired behavior. We remark that the proposed method only manages
to keep the target in the FOV if the changes in velocity are small. In addition, if the
target moves with a high acceleration, which necessitates high pitch and roll, saturating
those values would prevent the quadrotor from successfully tracking the GMT.
Figure 6.4: Pitch for two models (Signal saturation).
6.2.2 Increasing FOV of UAV
As described earlier, since we are dealing with a moving target, losing vision can result
from aggressive movements from the GMT and the inability of the UAV to keep up. The
latter is especially true if the pitch and roll of the UAV have been saturated as then
the quadrotor’s movements have been constrained. In order to handle this situation we
consider a method of increasing the FOV of the UAV by increasing its height. We explain
the algorithm below.
Increasing Height
We will consider two heights: zref , the desired height, and zH , the height we wish to
reach if the GMT starts to leave the FOV. We assume that zH > zref . Increas-
ing the height enables the camera to observe a larger area on the ground. In particular,
if we assume that the camera is at position (x0, y0, z0) pointing downwards and observes a
rectangular area of the form [x0 − a, x0 + a]× [y0 − b, y0 + b], then increasing the height
by a factor of α makes the observed rectangle become [x0−αa, x0 +αa]×[y0−αb, y0 +αb].
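The α-scaling follows from similar triangles: for a downward-looking pinhole camera, the half-extents of the ground footprint are proportional to the height. A small illustrative sketch (the half-view angles γx, γy and the function name are our own notation, not from the text above):

```python
import math

def ground_footprint(z, gamma_x, gamma_y):
    """Half-extents (a, b) of the ground rectangle seen by a pinhole camera
    at height z pointing straight down, with half-view angles in radians."""
    return z * math.tan(gamma_x), z * math.tan(gamma_y)
```

Doubling z (i.e. α = 2) doubles both half-extents, so the observed rectangle [x0 − αa, x0 + αa] × [y0 − αb, y0 + αb] follows directly.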
In the PBVS model we have as an input a desired height zT , which is the height we
wish to reach while still following the GMT. The algorithm we propose changes zT to zref if
the target is well inside the FOV, and to zH if the target starts to leave the FOV. Changing
zT to zH causes the thrust of the motors to increase to create upward velocity to reach the
new height. The latter increase has two positive effects. Firstly, it increases the height
and hence the observable ground area as described above. Secondly, it increases the ve-
locity in the horizontal direction of the quadrotor without changing the angles, since for
non-zero pitch and roll the thrust has non-trivial projections on the (x, y) plane. Both
of these effects improve the tracking capabilities of the UAV and improve the chances of
keeping the target in the FOV.
We now describe the algorithm precisely. Let us assume that the camera’s FOV is
given by a box with coordinates Box = [0, A] × [0, B] (in our simulations A = B = 500
pixels) and we have fixed two constants µ, λ ∈ (0, 1) with λ > µ. We also consider two
boxes
Boxλ = [(1 − λ)A/2, (1 + λ)A/2] × [(1 − λ)B/2, (1 + λ)B/2],
Boxµ = [(1 − µ)A/2, (1 + µ)A/2] × [(1 − µ)B/2, (1 + µ)B/2].
The above are boxes with centers coinciding with that of the original box and sides
parallel to the original sides, but with side-lengths multiplied by λ and µ respectively.
Since 0 < µ < λ < 1 we have the inclusion Boxµ ⊂ Boxλ ⊂ Box. We let the target’s
image coordinates be (u, v); we interpret the target as leaving the FOV if (u, v) ∉ Boxλ,
and we consider the target as safely inside the FOV if (u, v) ∈ Boxµ. The algorithm is
now given as follows:
1. Initialize zT := zref (this indicates that the target is safely in the FOV).
2. If (u, v) ∉ Boxλ set zT := zH (this indicates that the target is leaving the FOV and
so we increase the height).
3. If (u, v) ∈ Boxµ set zT := zref (this indicates that the target has safely returned to
the FOV and so we can resume the usual tracking).
Numerical simulations of the algorithm are given below.
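The three steps above amount to a hysteresis rule on zT. A minimal Python sketch (function names and default values are ours; Boxλ and Boxµ are realized through the scale parameter):

```python
def in_centered_box(u, v, A, B, scale):
    """True if (u, v) lies in the box with the same center as [0,A]x[0,B]
    but with side-lengths scaled by `scale` (Box_lambda or Box_mu)."""
    return ((1 - scale) * A / 2 <= u <= (1 + scale) * A / 2
            and (1 - scale) * B / 2 <= v <= (1 + scale) * B / 2)

def update_height_command(u, v, z_t, z_ref, z_h, A=500, B=500, lam=0.4, mu=0.2):
    """One step of the algorithm: climb to z_h when the target leaves
    Box_lambda; return to z_ref once it is safely back inside Box_mu."""
    if not in_centered_box(u, v, A, B, lam):
        return z_h      # step 2: target leaving the FOV
    if in_centered_box(u, v, A, B, mu):
        return z_ref    # step 3: target safely back inside
    return z_t          # in between: keep current command (hysteresis)
```

Between Boxµ and Boxλ the command is left unchanged, which is what prevents rapid switching between zref and zH.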
Simulations
In what follows we will compare the PBVS Sat method from before with the above
method, which we denote by ‘PBVS SatZ’. The parameters of the two models are as
in Section 6.2.1, except that we set λz = 5 (see Chapter 4). The quadrotor has initial
conditions rA = [0, 0, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] and the target starts from (0, 0) and ac-
celerates in the positive x direction for 2 seconds with acceleration 2 m/sec2. After that
the target continues to move with constant velocity of 4 m/sec. zref = 20 m and zH = 40
m, λ = 0.4 and µ = 0.2. The results are presented in Figures 6.5 and 6.6.
(a) X Position for models. (b) X Camera for models.
Figure 6.5: X Position and camera position for models (Changing height algorithm).
As can be seen from Figure 6.5, both algorithms manage to track the target; however, by
changing the height the target manages to stay within the FOV. The particular changes
in the height are given in Figure 6.6 and we see that as the target accelerates, it leaves
Boxλ, which causes the target height to increase. The quadrotor starts to move upwards
until it gets the target back in Boxµ. After the quadrotor reaches the desired velocity
it reduces its acceleration, by reducing the pitch. This leads to a tilt, which forces the
target outside of Boxλ again, which is the second spike in the target height. However,
the quadrotor quickly stabilizes its pitch and so the target again enters Boxµ, and stays
there for the remainder of the simulation, while the quadrotor reduces its height to the
target zref . This is the behavior we wanted.
Figure 6.6: Height for models (Changing height algorithm).
6.3 Target Leaving FOV
In this section, we consider the problem of restoring vision of the target. We will consider
the cases when the UAV starts to lose sight of the target due to limitations of our model
or problems with the camera. In both cases we develop a strategy based on dead reckoning,
but also describe how to combine this idea with the method of increasing the FOV
developed before.
6.3.1 Dead Reckoning
We first describe the algorithm. We suppose that the UAV initially observes the GMT
and can estimate its position and velocity. Suppose that at time t0 the UAV stops
observing the GMT, and that at that time the position and velocity of the target are
estimated to be (x0, y0) and (vx, vy). Then at a future time t1, if vision is not restored in
the meantime, our best estimate for the target’s position is (x0+vx(t1−t0), y0+vy(t1−t0))
and for its velocity (vx, vy). That is, we assume that the target’s velocity does not change
by much and based on this information calculate the position where we expect the target to be.
This idea is called dead reckoning. It is especially useful in image based models, since
various algorithms can fail to detect the target even if it is in fact in the FOV. If vision
is quickly restored, this approach smooths out potentially noisy data that the camera
algorithm would pass when it fails to detect the target.
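A minimal sketch of such an estimator (the class and method names are ours): it stores the last visual fix and extrapolates under the constant-velocity assumption.

```python
class DeadReckoner:
    """Constant-velocity dead reckoning for the target's planar state."""

    def __init__(self):
        self.fix = None  # (t0, x0, y0, vx, vy) from the last valid detection

    def update(self, t, x, y, vx, vy):
        """Record a valid estimate produced by the vision pipeline."""
        self.fix = (t, x, y, vx, vy)

    def predict(self, t1):
        """Best estimate of (x, y, vx, vy) at time t1 when vision is lost."""
        t0, x0, y0, vx, vy = self.fix
        return (x0 + vx * (t1 - t0), y0 + vy * (t1 - t0), vx, vy)
```

In the simulations below, `update` would be called whenever the camera produces a valid estimate and `predict` whenever it does not.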
The above approach can actually be used to remedy the inherent problem in our
model, which arises from the rapid attitude changes in the UAV. In particular as discussed
before our model produces high pitch and roll angles for short periods of time, which force
the GMT out of the FOV. These periods of time are about one-two seconds long and
are required for the UAV to accelerate. It is reasonable to assume that the target’s
velocity would not significantly change in that short time and so dead reckoning can be
used to continue passing a signal for the target’s position, even if no such information
is available. This can effectively replace the attitude control method from the previous
section. Finally, this idea can be combined with increasing the height of the UAV, which
would improve its chances of locating the GMT. The proposed methods are validated in
the following subsection through numerical simulations.
6.3.2 Simulations
We first consider the case when visual data is cut from the algorithm at various times.
The GMT starts at position (0, 0) and has constant acceleration in the y direction of 0.5
m/sec2. The quadrotor starts from initial condition rA = [0, 0, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
and has 20 m as desired height. During even seconds, the quadrotor loses vision of the
GMT, so in the intervals (1, 2), (3, 4), (5, 6) etc. the quadrotor uses dead reckoning to es-
timate the position of the target. The latter model is called ‘PBVS DR’ and is compared
with the usual PBVS model, when the signal is not lost. The constants of the algorithms
are the same as those in Chapter 4. The results are presented in Figures 6.7 and 6.8.
As can be seen from the Figures the UAV manages to successfully track the target
and keep it in its FOV. We observe that since dead reckoning assumes that the target
maintains its velocity in periods when visual data is lost, the algorithm underestimates
the velocity during even time intervals. However, once vision is restored the quadrotor
quickly compensates the difference in velocity and position. This is the desired behavior.
(a) Y Position for models. (b) Y velocity for models.
Figure 6.7: Y position and velocity for models (Dead Reckoning).
Figure 6.8: Camera Y position for models (Dead Reckoning).
We next consider the case when visual data is lost because of the target’s motion.
We assume that the GMT starts at position (0, 0) and has constant acceleration in the y
direction of 1 m/sec2 and constant velocity of 3 m/sec in the x direction. The quadrotor
starts from initial condition rA = [0, 0, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] and has 20 m as desired
height. We implement this setup for the PBVS DR model and the PBVS DRZ model,
where the latter is the same as PBVS DR except that we also increase the height according
to the same setup as in Section 6.2. The constants for the algorithms are the same as in
Section 6.2 above. The results are presented in Figures 6.9, 6.10 and 6.11.
(a) X Position for models. (b) Y Position for models.
Figure 6.9: Horizontal position for models (Dead reckoning and changing height algo-rithm).
(a) X Camera Position for models. (b) Y Camera Position for models.
Figure 6.10: Camera positions for models (Dead reckoning and changing height algo-rithm).
Figure 6.11: Height for models (Dead reckoning and changing height algorithm).
As can be seen from the Figures both algorithms manage to successfully track the
GMT. Even though vision is lost because of the motion of the GMT, it is ultimately
restored through the use of dead reckoning. We remark that the increase in z for the
PBVS DRZ model is only slight and modestly improves the ability to keep the target in
the FOV. The above is the desired behavior.
6.4 Summary
This chapter presented several approaches for keeping the GMT in the FOV and restoring
vision once it is lost. The models enable a quadrotor to successfully follow a moving GMT
while improving its ability to maintain sight of it. A dead reckoning based model successfully
enables the quadrotor to restore vision even when it is lost due to camera problems or
aggressive target motion. The performance of the different approaches was validated
through numerical simulations in different situations.
Chapter 7
Conclusions
The primary objective of this project is to develop visual servoing algorithms for quadro-
tor UAVs. Both an image-based (IBVS) and a position-based approach (PBVS) are
designed in combination with PID control and tested through numerical simulations in
different target tracking scenarios. The quadrotor is able to track both stationary and
moving targets successfully using the proposed algorithms. The position-based approach
is designed to maintain the roll and pitch angles small to minimize the chances of the
target leaving the FOV of the camera which is fixed to the UAV body. In addition,
the stability of the full closed-loop system is analyzed based on cascade systems theory
and Lyapunov stability theory. When implementing the IBVS method, we compare the
results obtained using the classical visual servoing theory with those from a modified ap-
proach aiming to discount some of the negative effects associated with the underactuated
system dynamics.
The development of the visual servoing algorithms in this thesis is also complemented
with an investigation of various scenarios in which the observed target leaves the FOV
of the camera. Several solutions of this problem are proposed depending on the cause
of losing vision. These methods are based on dead reckoning (where the last available
information of the target state is used to predict future motion) and increasing the al-
titude of the UAV. The computational simulations indicate that the suggested methods
are successful in restoring the target to the field of view of the camera. Possible future work
includes applying the algorithms to target tracking using multiple UAVs or combining
the visual servoing approach with obstacle avoidance techniques. In addition, the
proposed tracking methods can be implemented and studied experimentally.
Appendix A
In this appendix we verify that the cascade system from Chapter 3 satisfies the conditions
of Theorem 1, provided we pick appropriate coefficients for the proportional controllers.
For the reader’s convenience we recall the exact formulation below.
Let ω = (ω1, ω2) ∈ R2, and x = (x1, x2, · · · , x8) ∈ R8. We let s : R2 → R2 be given
by
s(ω1, ω2) = [ω2,−aψω1 − bψω2]T .
Next let f : R10 → R8 be given by
f(x1, x2, · · · , x8, ω1, ω2)i = xi+1 for i = 1, 2, 3, 5, 6, 7, and
[f(x1, . . . , x8, ω1, ω2)4, f(x1, . . . , x8, ω1, ω2)8]T = −A[x4, x8]T − AB[x3, x7]T − ABC[x2, x6]T
− ABCD[h(x1), h(x5)]T + (ω2²I + (aψω1 + bψω2 − Aω2)[0, 1; −1, 0])[x2, x6]T
+ 2ω2[0, −1; 1, 0][x4, x8]T,
where I is the 2 × 2 identity matrix and [p, q; r, s] denotes the 2 × 2 matrix with rows
(p, q) and (r, s).
Recall that x = (∆x, ∆ẋ, ∆ẍ, ∆x⃛, ∆y, ∆ẏ, ∆ÿ, ∆y⃛) and ω = (∆ψ, ∆ψ̇) is a solution to the
cascade system
ẋ = f(x, ω)
ω̇ = s(ω).
Then we claim that for certain parameters A, B, C, D the above cascade system satisfies
the three conditions in Theorem 1.
A.1 Stability of Yaw Control
In this subsection we verify the second condition in Theorem 1, i.e. that (0, 0) is a
globally asymptotically stable equilibrium of the differential equation
ω̇ = s(ω).
We begin with a key Lemma.
Lemma 1. Let a and b be positive constants and consider the system
d/dt [x, y]T = [y, −ax − by]T.
Then for any initial condition the solution of the above system converges exponentially
fast to 0. In particular, there exist positive constants D1, D2, r, depending on a and b,
with D1, D2 also depending in addition on the initial conditions (x(0), y(0)), such that
|x(t)| ≤ D1 e^(−rt) and |y(t)| ≤ D2 e^(−rt).
Proof. The system in the lemma is a well-known second order differential equation,
with explicit solutions given as follows (see for example Chapter 2 in [4]). Let D = a² − 4b.
1. If D > 0 the solutions to the system are of the form
x(t) = Ae^((√D−a)t/2) + Be^((−√D−a)t/2), and y(t) = ẋ(t),
where A and B are constants depending on the initial conditions of the system. Since
b > 0 gives √D < a, both exponents are negative, so the solution converges exponentially
fast to 0 as t → ∞. Moreover, differentiating we see that
|x(t)| ≤ D1 e^(−rt) and |y(t)| ≤ D2 e^(−rt)
with r = (a − √D)/2 and D1, D2 taken sufficiently large (for example D1 ≥
max(|A|, |B|) and D2 ≥ 2 max(|A|r, |B|(√D + a)/2) work).
2. If D < 0 the solutions of the system are of the form
x(t) = Ae^(−at/2) cos(√(−D)t/2) + Be^(−at/2) sin(√(−D)t/2), and y(t) = ẋ(t),
where A and B are constants depending on the initial conditions of the system. This
solution clearly converges exponentially fast to 0 as t → ∞. Moreover, differentiating
we see that
|x(t)| ≤ D1 e^(−rt) and |y(t)| ≤ D2 e^(−rt),
with r = a/2 and D1, D2 taken sufficiently large.
3. If D = 0 the solutions of the system are of the form
x(t) = Ae^(−at/2) + Bte^(−at/2), and y(t) = ẋ(t),
where A and B are constants depending on the initial conditions of the system. This
solution clearly converges exponentially fast to 0 as t → ∞. Moreover, differentiating
we see that
|x(t)| ≤ D1 e^(−rt) and |y(t)| ≤ D2 e^(−rt),
with r = a/4 and D1, D2 taken sufficiently large.
We now turn back to our differential equation. We have that
d/dt [ω1, ω2]T = [ω2, −aψω1 − bψω2]T.
From the above we see that [x, y]T = [ω1, ω2]T is a solution to the differential equation
in Lemma 1 with a = aψ and b = bψ. We thus conclude that ω1 and ω2 converge
exponentially fast to 0. This shows that (0, 0) is indeed a globally asymptotically stable
equilibrium. We conclude that
|ω1(t)| ≤ D1 e^(−rt), |ω2(t)| ≤ D2 e^(−rt) (A.1)
for all t ≥ 0 with some positive constants D1, D2 and r depending on the initial
conditions and aψ, bψ.
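The exponential decay in (A.1) is easy to confirm numerically. Below is a small forward-Euler sketch (our own; the gains aψ = bψ = 2 used in the example are arbitrary test values, not the tuned gains of the thesis):

```python
def simulate_yaw(a_psi, b_psi, w1, w2, dt=1e-3, t_end=10.0):
    """Forward-Euler integration of w1' = w2, w2' = -a_psi*w1 - b_psi*w2."""
    for _ in range(int(round(t_end / dt))):
        # both right-hand sides are evaluated at the old state
        w1, w2 = w1 + dt * w2, w2 + dt * (-a_psi * w1 - b_psi * w2)
    return w1, w2
```

Starting from (ω1, ω2) = (1, 0), the state is driven to (numerically) zero, consistent with Lemma 1.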
A.2 Stability of Tracking Control
In this subsection we verify the first condition of Theorem 1. When (ω1, ω2) = (0, 0) the
equation ẋ = f(x, ω) becomes
f(x1, x2, · · · , x8, 0, 0)i = xi+1 for i = 1, 2, 3, 5, 6, 7, and
[f(x1, . . . , x8, 0, 0)4, f(x1, . . . , x8, 0, 0)8]T = −A[x4, x8]T − AB[x3, x7]T − ABC[x2, x6]T − ABCD[h(x1), h(x5)]T.
The above shows that the differential system splits into two (equivalent) fourth order
differential equations of the form
x⃜ = −a x⃛ − b ẍ − c ẋ − d h(x),
where h(x) = x/(1 + |x|) and a = A, b = AB, c = ABC, d = ABCD are all positive
constants. The latter is a fourth order non-homogeneous differential equation, whose
global asymptotic stability is established, under suitable conditions on the coefficients, in
the following proposition.
Proposition 1. Suppose we have the following fourth order differential equation:
x⃜ = −a x⃛ − b ẍ − c ẋ − d h(x),
where h(x) = x/(1 + |x|). Then the above differential equation admits a unique solution for any
initial condition. If in addition the constants a, b, c, d satisfy the following conditions:
1. a, b, c, d are all positive,
2. ab − c > 0 and (ab − c)c/a² > d,
3. 4c(ab − c)/(4a² + ac) > d,
4. db/c − d/a − ad²/c² − c² > 0 and b/a − d/c − (c + 1)/a² > 0,
then the solutions to the differential equation are globally asymptotically stable, and
(x(t), ẋ(t), ẍ(t), x⃛(t)) → (0, 0, 0, 0) as t → ∞ for any initial condition (x(0), ẋ(0), ẍ(0),
x⃛(0)).
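The conditions of the proposition are easy to check numerically, and the claimed convergence can be observed by integrating the equation in its first-order form (ẋ = y, ẏ = z, ż = w, ẇ = −aw − bz − cy − dh(x)). The sketch below is ours; the sample coefficients (a, b, c, d) = (10, 5, 0.5, 0.1) are simply one choice that satisfies all four conditions, not values used elsewhere in this work:

```python
def satisfies_prop1(a, b, c, d):
    """Check the four sufficient conditions of Proposition 1."""
    return (a > 0 and b > 0 and c > 0 and d > 0
            and a * b - c > 0
            and (a * b - c) * c / a**2 > d
            and 4 * c * (a * b - c) / (4 * a**2 + a * c) > d
            and d * b / c - d / a - a * d**2 / c**2 - c**2 > 0
            and b / a - d / c - (c + 1) / a**2 > 0)

def integrate(a, b, c, d, x, y, z, w, dt=1e-3, t_end=300.0):
    """Forward-Euler integration of x'''' = -a x''' - b x'' - c x' - d h(x)
    written as a first-order system in (x, y, z, w)."""
    for _ in range(int(round(t_end / dt))):
        h = x / (1.0 + abs(x))
        x, y, z, w = (x + dt * y, y + dt * z, z + dt * w,
                      w + dt * (-a * w - b * z - c * y - d * h))
    return x, y, z, w
```

For the sample coefficients, integrating from the initial condition (1, 0, 0, 0) drives the state toward the origin, in line with the proposition (the slow modes here decay on a time scale of tens of seconds, hence the long horizon).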
Remark 1: We mention here that, although there is a sizable literature on the
stability of non-linear differential equations, we were unable to find a result
encompassing the function h we have chosen. In particular, the methodologies for
proving stability for a non-linear fourth order differential equation of the form
x⃜ = −a x⃛ − b ẍ − c ẋ − h(x)
appear to rely on h being unbounded [33] or behaving like dx on all of R [16], [19]. The
function that we consider fails to satisfy either of these conditions, although near 0 it
does behave like dx. As general results do not apply, we have to develop a particular
proof for the case we are considering, which is presented below. We hope that the
following solution can be generalized and used to prove stability for a much larger class of
non-homogeneous differential equations than the ones we are considering.
Proof. For brevity we let y = ẋ, z = ẍ, w = x⃛. Then the fourth order differential
equation becomes equivalent to the system
ẋ = y, ẏ = z, ż = w, ẇ = −aw − bz − cy − dh(x).
The first part of the proposition is immediate. Indeed, one has that h is a differentiable
function, with derivative h′(x) = 1/(1 + |x|)², which is uniformly bounded by 1. Hence the
standard Lipschitz conditions hold and the system has a unique solution for any initial
condition by the Existence-Uniqueness Theorem (see [4]). The stability part of the
proposition is the main result to prove, to which we now turn.
We consider the following Lyapunov function candidate:
V(x, y, z, w) = 2βd(|x| − log(1 + |x|)) + (βb − αd + c)y² + (αb + a − β)z² + αw²
+ 2d·xy/(1 + |x|) + 2αd·xz/(1 + |x|) + (2αc + 2βa)yz + 2βyw + 2zw + δ·xw/(1 + |x|),
where α = 1/a + ε, β = d/c + ε, and ε, δ > 0 are small positive constants to be chosen later,
with δ much smaller than ε.
In order to demonstrate that the solutions of the differential equation are globally asymp-
totically stable it is sufficient to establish the following properties for V (see Theorem
3.2 in [20]):
1. V(x, y, z, w) ≥ 0 with equality if and only if x = y = z = w = 0.
2. V̇(x, y, z, w) ≤ 0 with equality if and only if x = y = z = w = 0.
3. V(x, y, z, w) → ∞ if and only if |x| + |y| + |z| + |w| → ∞ (i.e. V is radially
unbounded).
We first turn to proving the second statement. We observe that the function |x| − log(1 + |x|)
is differentiable with derivative x/(1 + |x|), and the latter is in turn differentiable with
derivative 1/(1 + |x|)². Using these facts together with the chain rule we get
V̇(x, y, z, w) = 2βd·(x/(1 + |x|))ẋ + 2(βb − αd + c)yẏ + 2(αb + a − β)zż + 2αwẇ
+ 2d·xẏ/(1 + |x|) + 2d·yẋ/(1 + |x|)² + 2αd·xż/(1 + |x|) + 2αd·zẋ/(1 + |x|)²
+ (2αc + 2βa)ẏz + (2αc + 2βa)yż + 2βẏw + 2βyẇ + 2żw + 2zẇ
+ δ·wẋ/(1 + |x|)² + δ·xẇ/(1 + |x|).
Using that ẋ = y, ẏ = z, ż = w and ẇ = −aw − bz − cy − dh(x), the above (after
significant cancellation) becomes
V̇(x, y, z, w) = y²(2d/(1 + |x|)² − 2βc) + z²(2αc + 2βa − 2b) + w²(2 − 2αa)
+ 2αd·yz(1/(1 + |x|)² − 1) − δd·x²/(1 + |x|)² − δc·xy/(1 + |x|)
− δb·xz/(1 + |x|) − δa·xw/(1 + |x|) + δ·yw/(1 + |x|)².
Using that α = 1/a + ε and β = d/c + ε, the above can be rewritten as
V̇(x, y, z, w) = −[(√(δd)/2)·x/(1 + |x|) + √(δ/d)·cy]² − [(√(δd)/2)·x/(1 + |x|) + √(δ/d)·bz]²
− [(√(δd)/2)·x/(1 + |x|) + √(δ/d)·aw]² − [S(x)y − (1/2)S(x)z]²
− y²[2cε − (δ/2)·1/(1 + |x|)² − (δ/d)c²] − z²[L − ε(a + c) − (δ/d)b² − (1/4)S(x)²]
− w²[2aε − (δ/d)a² − (δ/2)·1/(1 + |x|)²] − (δd/4)·x²/(1 + |x|)²,
where L = (2abc − c² − da²)/(ac) and S(x) = √(2d − 2d/(1 + |x|)²). The above will clearly show
that V̇ ≤ 0, provided that the expressions in front of −y², −z² and −w² are strictly
positive for all x, i.e. we want to pick ε, δ > 0 such that
A(x) = 2cε − (δ/2)·1/(1 + |x|)² − (δ/d)c² > 0,
B(x) = L − ε(a + c) − (δ/d)b² − (1/4)S(x)² > 0,
C(x) = 2aε − (δ/d)a² − (δ/2)·1/(1 + |x|)² > 0.
We observe that S(x)² ≤ 2d, and also L > 2d is equivalent to condition 3. in the
statement of the proposition. Thus if ε and δ are chosen sufficiently small we will have
B(x) ≥ (L − 2d)/2 for all x. Also it is clear that if δ is much smaller than ε then A(x) > cε
and C(x) > aε. Thus we conclude that for such chosen δ and ε one has
V̇ ≤ −c1y² − c2z² − c3w² − c4·x²/(1 + |x|)²,
for some positive ci, i = 1, 2, 3, 4. This proves the second statement.
We next turn to the first statement. We can rearrange the terms in V(x, y, z, w) to
obtain
V(x, y, z, w) = (2d²/c)(|x| − log(1 + |x|)) + 2εd(|x| − log(1 + |x|))
+ y²[db/c − d/a + c + ε(b − d) − aβ²] + z²[b/a − d/c + ε(b − 1)] + εw²
+ 2d·xy/(1 + |x|) + 2dα·xz/(1 + |x|) + 2αc·yz + δ·xw/(1 + |x|)
+ ((1/√a)w + √a·z + √a·βy)².
The above can now be rewritten as
V(x, y, z, w) = [(d²/c)(|x| − log(1 + |x|)) + 2d·xy/(1 + |x|) + cy²]
+ [(d²/c)(|x| − log(1 + |x|)) + 2dα·xz/(1 + |x|) + cα²z²]
+ [√δ(|x| − log(1 + |x|)) + δ·xw/(1 + |x|) + √δ·w²]
+ [cy + αz]² + [(1/√a)w + √a·z + √a·βy]²
+ (ε − δ)w² + (2εd − δ)(|x| − log(1 + |x|))
+ y²[db/c − d/a + c + ε(b − d) − aβ² − c − c²] + z²[b/a − d/c + ε(b − 1) − α²c − α²].
A key result, whose proof we isolate in Lemma 2 after the proposition, is the following.
Let A, B, C be constants with A, B > 0. If 4AB ≥ C² then we have that
A(|x| − log(1 + |x|)) + By² + C·xy/(1 + |x|) ≥ 0,
with equality if and only if x = y = 0. The latter implies that the first three summands
above are non-negative. Also the next two are clearly non-negative as well. We thus
obtain that
V(x, y, z, w) ≥ (ε − δ)w² + (2εd − δ)(|x| − log(1 + |x|))
+ y²[db/c − d/a + c + ε(b − d) − aβ² − c − c²] + z²[b/a − d/c + ε(b − 1) − α²c − α²].
Condition 4. in the proposition now implies that we can pick ε sufficiently small so that
the coefficients in front of y² and z² above are strictly positive. Then by picking δ much
smaller than ε we can ensure that for some positive constants c1, c2, c3, c4 one has
V(x, y, z, w) ≥ c1f(x) + c2y² + c3z² + c4w²,
where f(x) = |x| − log(1 + |x|). Observe that f ≥ 0 with equality if and only if x = 0,
and f(x) → ∞ if and only if |x| → ∞. This clearly shows that V ≥ 0, and if V = 0 one
concludes that x = y = z = w = 0. The converse is immediately verified by substituting
into the original formula. This proves the first statement. The third statement now follows
from the above inequality and the properties of f listed above. This suffices for the
proof.
The main consequence of the above work is that if we pick A, B, C, D so that a = A,
b = AB, c = ABC, d = ABCD satisfy the conditions of Proposition 1, then 0 is a
globally asymptotically stable equilibrium for the system ẋ = f(x, 0). This proves that
the first condition of Theorem 1 is indeed satisfied.
Remark 2: We remark that the first two conditions in Proposition 1 are known as the
Routh-Hurwitz criteria [2]. They are the necessary and sufficient conditions for stability
of solutions to a constant coefficient fourth order differential equation of the form
x⃜ = −a x⃛ − b ẍ − c ẋ − d x.
Consequently, their appearance in Proposition 1 is expected. The last two conditions are
more specific and come from the particular Lyapunov function we considered. We expect
that these conditions can be relaxed or even removed without changing the conclusion
of the proposition. The latter, however, is a hard question in the stability analysis of
non-linear fourth order differential equations. Exhibiting general conditions on h and
the constants is a rich subject in mathematics with many results like [33], [16], [19].
Unfortunately, there is no universal characterization which ensures that the solutions
of the system are stable. Instead, many different conditions have been
proposed that ensure different desired behavior of the solution curves.
Lemma 2. Let A, B, C be constants with A, B > 0. If 4AB ≥ 2C² then we have that
F(x, y) = A(|x| − log(1 + |x|)) + By² + C·xy/(1 + |x|) ≥ 0,
with equality if and only if x = y = 0.
Proof. We observe that F(x, y) is a quadratic polynomial in the y variable, with positive
leading coefficient. Thus F(x, y) ≥ 0 for all x if and only if the discriminant is non-
positive for all x. From the equation we have
D(x) = C²·x²/(1 + |x|)² − 4AB(|x| − log(1 + |x|)).
Since x² = |x|², setting |x| = u we have
D(u) = C²·u²/(1 + u)² − 4AB(u − log(1 + u)), u ≥ 0.
Differentiating we get
D′(u) = (u/(1 + u))·[2C²/(1 + u)² − 4AB].
Since by assumption 2C² ≤ 4AB and u ≥ 0, we see that D′(u) is negative on (0,∞).
Thus we conclude that D is strictly decreasing on [0,∞). Since D(0) = 0, we conclude
that D(x) ≤ 0 with equality if and only if x = 0. This proves that F(x, y) ≥ 0. Suppose
now that F(x, y) = 0. The quadratic polynomial in y must then have a root, so D ≥ 0,
which with our earlier result that D ≤ 0 shows D(x) = 0, hence x = 0. Substituting, we
see that
0 = F(0, y) = By², hence y = 0.
We conclude that F(x, y) = 0 implies x = y = 0. The converse is of course immediate
by substitution.
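Lemma 2 is simple to probe numerically. In the sketch below (ours, for illustration only), F is checked on a grid for a choice of constants satisfying 4AB ≥ 2C², and evaluated at a point where it goes negative for a choice violating the hypothesis:

```python
import math

def F(x, y, A, B, C):
    """The function F(x, y) from Lemma 2."""
    return (A * (abs(x) - math.log(1 + abs(x))) + B * y * y
            + C * x * y / (1 + abs(x)))

def grid_min(A, B, C, span=5.0, n=101):
    """Minimum of F over an n x n grid on [-span, span]^2."""
    pts = [-span + 2 * span * i / (n - 1) for i in range(n)]
    return min(F(x, y, A, B, C) for x in pts for y in pts)
```

With A = B = C = 1 (so 4AB = 4 ≥ 2C² = 2) the grid minimum is 0, attained at the origin; with A = B = 0.1, C = 1 the hypothesis fails and F takes negative values, e.g. at (x, y) = (1, −2.5).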
A.3 Stability of System
In this subsection we verify the third and final condition of Theorem 1. We assume that
A, B, C, D are such that a = A, b = AB, c = ABC, d = ABCD satisfy the conditions of
Proposition 1, and show that the trajectories (x(t), ω(t)) are bounded for all t > 0. Let
V be given by
V(x, y, z, w) = 2βd(|x| − log(1 + |x|)) + (βb − αd + c)y² + (αb + a − β)z² + αw²
+ 2d·xy/(1 + |x|) + 2αd·xz/(1 + |x|) + (2αc + 2βa)yz + 2βyw + 2zw + δ·xw/(1 + |x|),
with coefficients given as in the proof of Proposition 1. We consider the following function
W(x1, x2, x3, x4, x5, x6, x7, x8) = V(x1, x2, x3, x4) + V(x5, x6, x7, x8).
It was shown in Proposition 1 that V ≥ 0 and V(x, y, z, w) → ∞ if and only if
|x| + |y| + |z| + |w| → ∞. Thus we have that W ≥ 0 and W → ∞ if and only if
|x1| + · · · + |x8| → ∞.
Using the equation ẋ = f(x, ω) we get
Ẇ = x2²(2d/(1 + |x1|)² − 2βc) + x3²(2αc + 2βa − 2b) + x4²(2 − 2αa)
+ 2αd·x2x3(1/(1 + |x1|)² − 1) − δd·x1²/(1 + |x1|)² − δc·x1x2/(1 + |x1|)
− δb·x1x3/(1 + |x1|) − δa·x1x4/(1 + |x1|) + δ·x2x4/(1 + |x1|)²
+ x6²(2d/(1 + |x5|)² − 2βc) + x7²(2αc + 2βa − 2b) + x8²(2 − 2αa)
+ 2αd·x6x7(1/(1 + |x5|)² − 1) − δd·x5²/(1 + |x5|)² − δc·x5x6/(1 + |x5|)
− δb·x5x7/(1 + |x5|) − δa·x5x8/(1 + |x5|) + δ·x6x8/(1 + |x5|)² + G(t),
where
G(t) = [ω2²x3 + (aψω1 + bψω2 − Aω2)x7 − 2ω2x8][2αx4 + 2βx2 + 2x3 + δ·x1/(1 + |x1|)]
+ [ω2²x7 − (−aψω1 − bψω2 + Aω2)x3 + 2ω2x4][2αx8 + 2βx6 + 2x7 + δ·x5/(1 + |x5|)].
From the proof of Proposition 1 we know that there are positive constants c1, c2, c3, c4
such that
Ẇ ≤ −c1(x2² + x6²) − c2(x3² + x7²) − c3(x4² + x8²)
− c4[x1²/(1 + |x1|)² + x5²/(1 + |x5|)²] + G(t).
We now have by equation (A.1) that |ω1(t)| ≤ D1 e^(−rt) and |ω2(t)| ≤ D2 e^(−rt) for some
positive constants D1, D2, r. This together with the Cauchy-Schwarz inequality shows
that for some positive constants d1, d2, d3, d4 one has
G(t) ≤ e^(−rt)[d1(x2² + x6²) + d2(x3² + x7²) + d3(x4² + x8²)
+ d4(x1²/(1 + |x1|)² + x5²/(1 + |x5|)²)].
Hence we see that
Ẇ ≤ (d1e^(−rt) − c1)(x2² + x6²) + (d2e^(−rt) − c2)(x3² + x7²) + (d3e^(−rt) − c3)(x4² + x8²)
+ (d4e^(−rt) − c4)[x1²/(1 + |x1|)² + x5²/(1 + |x5|)²].
The latter is clearly non-positive for all large t. Thus W (t) is decreasing for all large
times, and so cannot become arbitrarily large. This implies that W (t) is bounded for
all time and hence so are x1(t), x2(t), x3(t), x4(t), x5(t), x6(t), x7(t), x8(t). On the other
hand, from equation (A.1) we have that ω1(t), ω2(t) are converging to 0 and so are also
bounded for all time. This shows that the trajectories (x(t), ω(t)) are bounded for any
initial condition, which is the third condition of Theorem 1.
From our work above we conclude that if A,B,C,D are such that a = A, b = AB,
c = ABC, d = ABCD satisfy the conditions of Proposition 1, then all the conditions of
Theorem 1 are satisfied and so (0, 0) is a globally asymptotically stable equilibrium for
the cascade system
ẋ = f(x, ω)
ω̇ = s(ω).
This concludes the proof of the stability of our system. We remark that the presented
proof only works if the PI and PD controllers of Chapter 3 for the horizontal position,
pitch and roll are assumed to be just proportional, although the yaw control is handled
for the general PD case. Providing a proof for general controllers appears to be a much
harder task, and will be left for future work.
Bibliography
[1] K. Alexis, G. Nikolakopoulos, and A. Tzes. Experimental model predictive attitude
tracking control of a quadrotor helicopter subject to wind-gusts. In Control Au-
tomation (MED), 2010 18th Mediterranean Conference on, pages 1461–1466, June
2010.
[2] J. Anagnost and C. Desoer. An elementary proof of the Routh-Hurwitz stability
criterion. Circuits, Systems and Signal Processing, 10(1):101–114, 1991.
[3] R. Austin. Unmanned aircraft systems: UAVS design, development and deployment.
Wiley, 2010.
[4] G. Birkhoff and G.-C. Rota. Ordinary Differential Equations, fourth edition. John
Wiley and Sons, Inc., 1989.
[5] D. Bohdanov. Quadrotor UAV control for vision-based moving target tracking task.
Master’s Thesis. University of Toronto, 2012.
[6] S. Bouabdallah, A. Noth, and R. Siegwart. PID vs LQ control techniques applied
to an indoor micro quadrotor. Proceedings of the IEEE International Conference on
Intelligent Robots and Systems, Sendai, Japan, 2004.
[7] Zehra Ceren and Erdinc Altug. Image based and hybrid visual servo control of an
unmanned aerial vehicle. Journal of Intelligent and Robotic Systems, 65(1-4):325–
344, 2012.
[8] F. Chaumette and S. Hutchinson. Visual servo control, part I: Basic approaches.
IEEE Robotics and Automation Magazine, 13(4):82–90, December 2006.
[9] F. Chaumette and S. Hutchinson. Visual servo control, part II: Advanced approaches
[tutorial]. IEEE Robotics and Automation Magazine, 14(1):109–118, 2007.
[10] P. Corke and S. Hutchinson. A new hybrid image-based visual servo control scheme.
In Proceedings of the 39th IEEE Conference on Decision and Control, volume 2,
2000.
[11] M. Dempsey. Eyes of the army: US Army roadmap for unmanned aircraft systems
2010–2035. Technical report, US Army UAS Center of Excellence, Ft. Rucker,
Alabama, 2010.
[12] E. Frew, T. McGee, ZuWhan Kim, Xiao Xiao, S. Jackson, M. Morimoto, S. Rathinam,
J. Padial, and Raja Sengupta. Vision-based road-following using a small autonomous
aircraft. In Aerospace Conference, 2004. Proceedings. 2004 IEEE, volume 5,
pages 3006–3015, March 2004.
[13] N.R. Gans and S.A. Hutchinson. A switching approach to visual servo control. In
Intelligent Control, 2002. Proceedings of the 2002 IEEE International Symposium
on, pages 770–776, 2002.
[14] Guillaume Allibert, Estelle Courtial, and Francois Chaumette. Visual servoing via
nonlinear predictive control. In Visual Servoing via Advanced Numerical Methods,
pages 375–393, 2010.
[15] T. Hamel and R. Mahony. Visual servoing of an under-actuated dynamic rigid-body
system: an image-based approach. Robotics and Automation, IEEE Transactions
on, 18(2):187–198, Apr 2002.
[16] M. Harrow. On the boundedness and the stability of solutions of some differential
equations of the fourth order. SIAM J. Math. Anal, 1(1):27–32, 1970.
[17] H. Huang, G. Hoffmann, S. Waslander, and C. Tomlin. Aerodynamics and control
of autonomous quadrotor helicopters in aggressive maneuvering. IEEE Int. Conf.
on Robotics and Automation, Kobe, Japan, pages 3277–3282, 2009.
[18] S. Hutchinson, G.D. Hager, and P.I. Corke. A tutorial on visual servo control.
Robotics and Automation, IEEE Transactions on, 12(5):651–670, 1996.
[19] K. Kaufman and M. Harrow. A stability result for solutions of certain fourth order
differential equations. SIAM J. Math. Anal, 20(2-3):186–194, 1971.
[20] H. Khalil. Nonlinear Systems, second edition. Prentice-Hall, Inc., 1996.
[21] S. Kim, D. Lee, and H.J. Kim. Visual servoing for an autonomous quadrotor with
adaptive backstepping control. In 11th International Conference on Control, Au-
tomation and Systems, pages 532–537, Oct 2011.
[22] D. Lee, H. Lim, H. Jin Kim, and Y. Kim. Adaptive image-based visual servoing for
an underactuated quadrotor system. Journal of Guidance, Control, and Dynamics,
35(4):1335–1353, 2012.
[23] Jun Li and Yuntang Li. Dynamic analysis and PID control for a quadrotor. In
Mechatronics and Automation (ICMA), 2011 International Conference on, pages
573–578, Aug 2011.
[24] Ezio Malis, Francois Chaumette, and Sylvie Boudet. 2 1/2 D visual servoing. IEEE
Transactions on Robotics and Automation, 15:238–250, 1999.
[25] M. Huang, B. Xian, C. Diao, K. Yang, and Y. Feng. Adaptive tracking control of
underactuated quadrotor unmanned aerial vehicle via backstepping. In American
Control Conference, pages 2076–2081, June 2010.
[26] R. Ozawa and F. Chaumette. Dynamic visual servoing with image moments for a
quadrotor using a virtual spring approach. In Robotics and Automation (ICRA),
2011 IEEE International Conference on, pages 5670–5676, May 2011.
[27] M. Popova and Hugh H.T. Liu. Image-based visual servoing for a quadrotor UAV in
tracking static targets. In 62nd CASI Aeronautics Conference and AGM, May 2015.
[28] M. Popova and Hugh H.T. Liu. Position-based visual servoing for target tracking
by a quadrotor UAV. In AIAA Guidance, Navigation and Control Conference, 2016
(to appear).
[29] S. Saripalli, J.F. Montgomery, and G. Sukhatme. Vision-based autonomous landing
of an unmanned aerial vehicle. In Robotics and Automation, 2002. Proceedings.
ICRA ’02. IEEE International Conference on, volume 3, pages 2799–2804, 2002.
[30] Z. Sarris. Survey of UAV applications in civil markets (June 2001). In 9th IEEE
Mediterranean Conference on Control and Automation (MED 01), Croatia, 2001.
[31] V. Sundarapandian. Global asymptotic stability of nonlinear cascade systems. Ap-
plied Mathematics Letters, 15(3):275–277, 2002.
[32] A. Tayebi and S. McGilvray. Attitude stabilization of a VTOL quadrotor aircraft.
Control Systems Technology, IEEE Transactions on, 14(3):562–571, 2006.
[33] C. Tunc. Stability and boundedness of solutions to certain fourth-order differential
equations. Electronic Journal of Differential Equations, 35:1–10, 2006.
[34] H. Voos. Nonlinear control of a quadrotor micro-UAV using feedback-linearization.
In Mechatronics, 2009. ICM 2009. IEEE International Conference on, pages 1–6,
April 2009.
[35] D. Weatherington. Unmanned aircraft systems roadmap 2005–2030. Technical
report, Office of the Secretary of Defense, December 2005.
[36] H. Zhu. Indoor simulation of wildfire detection and monitoring using UAV. Master’s
Thesis. University of Toronto, 2012.