
Autonomous Landing of an Unmanned Aerial Vehicle using Image-Based Fuzzy Control

Miguel A. Olivares-Mendez ∗, Ivan F. Mondragon ∗∗, Pascual Campoy ∗∗∗

∗ Automation Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, Luxembourg (e-mail: [email protected])
∗∗ Pontificia Universidad Javeriana, Industrial Engineering Department, Bogota, Colombia (e-mail: [email protected])
∗∗∗ Computer Vision Group (CVG), Centro de Automatica y Robotica (CAR), Universidad Politecnica de Madrid (UPM), Madrid, Spain (e-mail: [email protected])

Abstract: This paper presents a vision-based autonomous landing control approach for unmanned aerial vehicles (UAVs). The 3D position of an unmanned helicopter is estimated from homographies of a known landmark. The translation and altitude of the helicopter relative to the helipad are the only information used to control the longitudinal, lateral and descent speeds of the vehicle. The control system consists of three fuzzy controllers, one for the speed of each axis of the aircraft's coordinate system. The 3D position estimation was validated first by comparing it with GPS + IMU data, with very good results. The robustness of the vision algorithm against occlusions was also tested. The excellent behavior of the fuzzy control approach using the homography-based 3D position estimation was demonstrated in outdoor tests with a real unmanned helicopter.

Keywords: Fuzzy control, Computer vision, Aircraft control, Autonomous vehicle, Robot navigation, Extended Kalman filters, Position estimation, Velocity control

1. INTRODUCTION

Unmanned aerial vehicles (UAVs) have made their way quickly and decisively to the forefront of current aviation technology. Opportunities exist in a broadening number of fields for the application of UAV systems as the components of these systems become increasingly lighter and more powerful. Of particular interest are those occupations that require the execution of missions which depend heavily on dull, dirty, or dangerous work; for these, UAVs provide a cheap, safe alternative to manned systems and often provide a far greater magnitude of capability. The great potential of UAVs is exploited in a large number of civil applications, like surveillance, inspection, and autonomous navigation, among others. This work is focused on the specific task of autonomous landing. Some works address the theoretical control part of this problem and have been checked in simulation environments: Cesetti et al. (2010) present a classical PID control using the SIFT vision algorithm, proving the feasibility of this algorithm for this specific task and testing the controllers in a simulated environment. In De Wagter and Mulder (2005), the authors evaluate the use of visual information at different stages of a UAV control system, including a visual controller and a pose estimation for autonomous landing using a chessboard pattern. In Fucen et al. (2009) a visual system is used to detect and identify a landing zone (helipad) and confirm the landing direction of the vehicle. Saripalli and Sukhatme (2007) and Saripalli et al. (2003) proposed an experimental method for autonomous landing on a moving target, by tracking a known helipad and using it to complement the controller's IMU+GPS state estimation. Some works have also presented real tests with a VTOL aircraft: Saripalli et al. (2002) developed a sensor fusion control system using GPS to localize the landmark, vision to track it, and sonars for the last three meters of the autonomous landing task. Merz et al. (2004) and Merz et al. (2006) use a method that fuses visual and inertial information in order to control an autonomous helicopter landing on known landmarks. Hermansson (2010) presents excellent results of an autonomous landing using sensor fusion (GPS, compass and vision) with a PID controller to track the landing location and land on a landmark.

Vision-based landing for multi-rotor UAVs has been an actively studied field in recent years. Some examples are the work presented by Lange in Lange et al. (2008), where the visual system is used to estimate the vehicle position relative to a landing place. Voos (2009) and Voos and Bou-Ammar (2010) propose a decomposition of a quadrotor control system into an outer-loop velocity control and an inner-loop attitude control system, in which the landing controller consists of a linear altitude controller and a nonlinear 2D-tracking controller. Chitrakaran et al. (2005) present a deep theoretical work on a non-linear controller for a quadrotor that is built upon homography-based techniques and Lyapunov design methods. Recently, Nonami et al. (2010) and then Wenzel et al. (2011) have presented two different methods for small UAV autonomous takeoff, tracking and landing on a moving platform: the first is based on optical flow, the second uses visual tracking of IR landmarks to estimate the aircraft position. Venugopalan et al. (2012) present very good results of an autonomous landing of an AR.Drone on a landing pad on a kayak.

This work presents a vision-based fuzzy control approach for the autonomous landing task. The 3D position estimation of a VTOL aircraft is done using homographies of a known landmark or helipad. The fuzzy control approach works without any information about the model of the system, managing the longitudinal and lateral speeds, and the altitude of the helicopter. The homography estimation using Lucas-Kanade and RANSAC gives good results despite occlusions of the detected landmark, making this method ideal for this specific task. The presented fuzzy control approach copes with the low rate of the vision control loop (8 Hz) and the vibration of the camera, accomplishing successful real tests with a reduced RMSE value and without using any other sensor.

The outline of this paper is organized as follows: Section 2 introduces the 3D position estimation based on homographies. Section 3 presents the longitudinal and lateral speed controllers and the altitude controller for the autonomous landing task. Section 4 presents the RC helicopter used, the tests of the 3D position estimation using homographies, and a real test of an autonomous landing. Conclusions and future work are presented in Section 5.

2. 3D ESTIMATION BASED ON HOMOGRAPHIES

This section explains how the frame-to-frame homography is estimated using matched points and robust model fitting algorithms. The pyramidal Lucas-Kanade optical flow Bouguet (1999), applied to corners detected using the method of Shi and Tomasi (1994), is used to generate a set of corresponding points; then, a RANSAC algorithm Fischer and Bolles (1981) is used to robustly estimate the projective transformation between the reference object and the image. The next section explains how this frame-to-frame homography is used to obtain the 3D pose of the object with respect to the camera coordinate system.

On images with high motion, good matched features can be obtained using the well-known pyramidal modification of the Lucas-Kanade algorithm Bouguet (1999). It solves the problems that arise when large and non-coherent motions are present between consecutive frames, by first tracking features over large spatial scales on the image pyramid, obtaining an initial motion estimation, and then refining it down through the levels of the pyramid until it arrives at the original scale.

The set of corresponding or matched points between two consecutive images, (x_i, y_i) ↔ (x'_i, y'_i) for i = 1 ... n, obtained using the pyramidal Lucas-Kanade optical flow, is used to compute the 3x3 matrix H that takes each x_i to x'_i, i.e. x'_i = H x_i, the homography that relates both images. The matched points often have two error sources. The first one is the measurement of the point position, which follows a Gaussian distribution. The second one is the outliers to the Gaussian error distribution, which are the mismatched points given by the selected algorithm. These outliers can severely disturb the estimated homography, and consequently alter any measurement based on homographies. In order to select a set of inliers from the total set of correspondences, so that the homography can be estimated employing only the pairs considered as inliers, robust estimation using the Random Sample Consensus (RANSAC) algorithm Fischer and Bolles (1981) is used. It achieves its goal by iteratively selecting a random subset of the original data points, using it to obtain the model and evaluating the model consensus, which is the total number of original data points that best fit the model. In the case of a homography, four correspondences are enough to have an exact or minimal solution using the inhomogeneous method Criminisi et al. (1999). This procedure is repeated a fixed number of times, each time producing either a model which is rejected because too few points are classified as inliers, or a refined model. When the total number of trials is reached, the algorithm returns the homography with the largest number of inliers.
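As a rough illustration of this matching-and-fitting step, the sketch below chains OpenCV's Shi-Tomasi detector, pyramidal Lucas-Kanade tracker and RANSAC homography fit. The detector parameters, window size, pyramid depth and reprojection threshold are illustrative assumptions, not the values used on the real platform.

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, curr_gray):
    """Estimate the frame-to-frame homography H with LK optical flow + RANSAC.

    Sketch only: parameter values are illustrative.
    """
    # Shi-Tomasi corners on the previous frame
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None or len(pts_prev) < 4:
        return None

    # Pyramidal Lucas-Kanade tracking into the current frame
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts_prev, None,
        winSize=(21, 21), maxLevel=3)

    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    if len(good_prev) < 4:
        return None

    # Robust fit; RANSAC rejects the mismatched points (outliers)
    H, _ = cv2.findHomography(good_prev, good_curr,
                              cv2.RANSAC, ransacReprojThreshold=3.0)
    return H
```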

2.1 World Plane Projection onto The Image Plane

In order to align the planar object on the world space and the camera axis system, we consider the general pinhole camera model and the homogeneous camera projection matrix, that maps a world point x_w in P^3 to a point x_i on the i-th image in P^2, defined by equation 1:

s x_i = P_i x_w = K [R_i | t_i] x_w = K [r_{i1}  r_{i2}  r_{i3}  t_i] x_w    (1)

where the matrix K is the camera calibration matrix, R_i and t_i are the rotation and translation that relate the world coordinate system and the camera coordinate system, and s is an arbitrary scale factor. Figure 1 shows the relation between a world reference plane and two images taken by a moving camera, showing the homography induced by a plane between these two frames.

Fig. 1. Projection model on a moving camera and frame-to-frame homography induced by a plane.

If point x_w is restricted to lie on a plane Π, with a coordinate system selected in such a way that the plane equation of Π is Z = 0, the camera projection matrix can be written as equation 2:


s x_i = P_i x_Π = P_i [X  Y  0  1]^T = \hat{P}_i [X  Y  1]^T    (2)

where \hat{P}_i denotes that this matrix is deprived of its third column, or \hat{P}_i = K [r_{i1}  r_{i2}  t_i]. The deprived camera projection matrix is a 3 × 3 projection matrix, which transforms points on the world plane (now in P^2) to the i-th image plane (likewise in P^2), that is none other than a planar homography H^i_w defined up to a scale factor, as equation 3 shows:

H^i_w = K [r_{i1}  r_{i2}  t_i] = \hat{P}_i    (3)

Equation 3 defines the homography which transforms points on the world plane to the i-th image plane. Any point on the world plane x_Π = [x_Π, y_Π, 1]^T is projected on the image plane as x = [x, y, 1]^T. Because the world plane coordinate system is not known for the i-th image, H^i_w cannot be directly evaluated. However, if the position of the world plane for a reference image is known, a homography H^0_w can be defined. Then, the i-th image can be related with the reference image to obtain the homography H^i_0. This mapping is obtained using sequential frame-to-frame homographies H^i_{i-1}, calculated for any pair of frames (i-1, i) and used to relate the i-th frame to the first image H^i_0 using equation 4:

H^i_0 = H^i_{i-1} H^{i-1}_{i-2} ... H^1_0    (4)

This mapping and the alignment between the initial frame and the world plane reference is used to obtain the projection between the world plane and the i-th image, H^i_w = H^i_0 H^0_w. In order to relate the world plane and the i-th image, we must know the homography H^0_w. A simple method to obtain it requires matching four points on the image with the corresponding corners of the rectangle in the scene, forming the matched points (0, 0) ↔ (x_1, y_1), (0, Π_Width) ↔ (x_2, y_2), (Π_Length, 0) ↔ (x_3, y_3) and (Π_Length, Π_Width) ↔ (x_4, y_4). This process can be done either by a helipad frame and corners detector or by an operator through a ground station interface. The helipad points selection generates a world plane defined in a coordinate frame in which the plane equation of Π is Z = 0. With these four correspondences between the world plane and the image plane, the minimal solution for the homography H^0_w = [h^{0w}_1  h^{0w}_2  h^{0w}_3] is obtained.
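A minimal sketch of this chaining step, under the paper's definitions, assuming the frame-to-frame homography comes from the RANSAC fit above and H_0w is the reference alignment of the helipad with image I0:

```python
import numpy as np

def update_world_to_image_homography(H_i0_prev, H_frame, H_0w):
    """Chain frame-to-frame homographies (eq. 4) and map the world plane
    to the current image, H_iw = H_i0 * H_0w. Sketch only."""
    H_i0 = H_frame @ H_i0_prev          # H_i0 = H_{i,i-1} * H_{i-1,0}
    H_i0 /= H_i0[2, 2]                  # keep a consistent scale
    H_iw = H_i0 @ H_0w                  # world plane -> current image
    return H_i0, H_iw
```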

2.2 Translation Vector and Rotation Matrix

The rotation matrix and the translation vector are computed from the plane-to-image homography using the method described in Zhang (2000).

From equation 3 and defining the scale factor λ = 1/s, we have that

[r_1  r_2  t] = λ K^{-1} H^i_w = λ K^{-1} [h_1  h_2  h_3]

where

r_1 = λ K^{-1} h_1,   r_2 = λ K^{-1} h_2,   t = λ K^{-1} h_3    (5)

The scale factor is calculated as λ = 1 / ||K^{-1} h_1||.

Because the columns of the rotation matrix must be orthonormal, the third vector of the rotation matrix r_3 can be determined by the cross product r_1 × r_2. However, the noise in the homography estimation causes the resulting matrix R = [r_1  r_2  r_3] not to satisfy the orthonormality condition, and we must find a new rotation matrix R' that best approximates the given matrix R according to the smallest Frobenius norm for matrices (the root of the sum of squared matrix coefficients) Sturm (2000), Zhang (2000). As demonstrated by Zhang (2000), this problem can be solved by forming the rotation matrix R = [r_1  r_2  (r_1 × r_2)] = U S V^T and using singular value decomposition (SVD) to form the new optimal rotation matrix R' = U V^T.

The solution for the camera pose problem is defined as x_i = P_i X = K [R' | t] X.

The translational vector obtained is already scaled based on the dimensions defined for the reference plane during the alignment between the helipad and image I_0, so if the dimensions of the world rectangle are defined in mm, the resulting vector t^i_w = [x, y, z]^T is also in mm. In Mondragon et al. (2010), it is shown how the rotation matrix can be decomposed in order to obtain the Tait-Bryan or Cardan angles, which is one of the preferred rotation sequences in flight and vehicle dynamics. Specifically, these angles are formed by the sequence: (1) ψ about the z axis (yaw, R_{z,ψ}), (2) θ about the y_a axis (pitch, R_{y,θ}), and (3) φ about the final x_b axis (roll, R_{x,φ}), where a and b denote the second and third stage in a three-stage sequence of axes.
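The decomposition above can be sketched in a few lines. This is only an illustration of Zhang's method, assuming the world-to-image homography H_iw (from the homography chain) and the calibration matrix K are available:

```python
import numpy as np

def pose_from_homography(H_iw, K):
    """Recover camera pose (R, t) from the world-plane-to-image homography,
    following the steps of Section 2.2 (Zhang, 2000). Sketch only."""
    K_inv = np.linalg.inv(K)
    h1, h2, h3 = H_iw[:, 0], H_iw[:, 1], H_iw[:, 2]

    lam = 1.0 / np.linalg.norm(K_inv @ h1)      # scale factor lambda (eq. 5)
    r1 = lam * (K_inv @ h1)
    r2 = lam * (K_inv @ h2)
    t = lam * (K_inv @ h3)                      # scaled by the helipad dimensions

    # Enforce orthonormality: closest rotation in the Frobenius-norm sense
    R_approx = np.column_stack((r1, r2, np.cross(r1, r2)))
    U, _, Vt = np.linalg.svd(R_approx)
    R = U @ Vt
    return R, t
```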

2.3 Estimation Filtering.

An extended Kalman filter (EKF) has been incorporated into the 3D pose estimation algorithm in order to smooth the position and correct the errors caused by the homography drift over time. The state vector is defined as the position [x_k, y_k, z_k] and velocity [Δx_k, Δy_k, Δz_k] of the helipad at instant k, expressed in the onboard camera coordinate system. We consider the dynamic model as a linear system with constant velocity, as presented in the following equations:

x_k = F x_{k-1} + w_k    (6)

\begin{bmatrix} x_k \\ y_k \\ z_k \\ \Delta x_k \\ \Delta y_k \\ \Delta z_k \end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \Delta t & 0 & 0 \\
0 & 1 & 0 & 0 & \Delta t & 0 \\
0 & 0 & 1 & 0 & 0 & \Delta t \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_{k-1} \\ y_{k-1} \\ z_{k-1} \\ \Delta x_{k-1} \\ \Delta y_{k-1} \\ \Delta z_{k-1} \end{bmatrix} + w_{k-1}    (7)

where x_{k-1} is the state vector (position and velocity), F is the system matrix, w is the process noise, and Δt represents the time step.

Because the visual system only estimates the position of the helipad, the measurements are expressed as follows:

z_k = [x_k  y_k  z_k]^T + v_k    (8)

where z_k is the measurement vector, [x_k, y_k, z_k]^T is the position of the helipad with respect to the camera coordinate system, and v_k is the measurement noise. With the previous definitions, the two phases of the filter, prediction and correction, can be formulated as presented in Welch and Bishop (1995), assuming that the process noise w_k and the measurement noise v_k are white, zero-mean Gaussian noise with covariance matrices Q and R, respectively. The output of the filter is the smoothed position of the helipad, which will be used as input for the control system.

This method is similar to the one proposed by Simon et al. (2000) and Simon and Berger (2002), and is detailed in depth in Mondragon et al. (2010).
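Although the paper frames it as an EKF, the constant-velocity model of equations (6)-(8) is linear, so a plain Kalman filter sketch captures its structure. The covariances Q and R below are illustrative placeholders, not the tuned values:

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter for the constant-velocity model of eqs. (6)-(8). Sketch only."""
    def __init__(self, dt, q=1e-2, r=1e-1):
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
        self.H = np.hstack((np.eye(3), np.zeros((3, 3))))  # only position is measured
        self.Q = q * np.eye(6)
        self.R = r * np.eye(3)
        self.x = np.zeros(6)                               # [x, y, z, dx, dy, dz]
        self.P = np.eye(6)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def correct(self, z):
        # z: helipad position [x, y, z] from the homography-based estimator
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                                  # smoothed position for the controllers
```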

3. FUZZY CONTROL APPROACH FOR AUTONOMOUS LANDING

Three controllers were designed to control the aircraft for the autolanding task. All of these controllers were developed using the software MOFS (Miguel Olivares' Fuzzy Software). The three controllers have as inputs the homography-based estimations of the altitude and of the lateral and longitudinal errors, and they command the thrust and the lateral and longitudinal speeds of the UAV. The altitude controller was developed and tested first, independently Olivares-Mendez et al. (2010). After checking the correct behavior of this controller, we designed the lateral and longitudinal speed controllers for a complete control of the autonomous landing task. The three controllers were defined as PD-like. The membership functions of the controllers were designed as triangular membership functions, based on the good results obtained in previous works of the authors. The definition of the variables' sets and of the rule base is based on heuristic information. This data was acquired from different manual and hover flight tests over the helipad.

The control system is based on the camera configuration: since the camera is fixed on the UAV, the position of the camera with respect to the robot follows an eye-in-hand configuration. The architecture of the visual servo system is a dynamic look-and-move system that sends velocity commands.

The thrust controller was implemented to control the altitude of the UAV during the autolanding task (Figure 2). It was designed with two inputs and one output. The two inputs are the altitude estimation obtained from the homography (Figure 2(a)) and the derivative of this value (Figure 2(b)). The output of the controller is the velocity command, in meters per second, executed by the aircraft to descend to the helipad location (Figure 2(c)).

The lateral and longitudinal speed controllers are quite similar; the only thing that changes is the linguistic value of the membership functions' sets. Like the thrust controller, these controllers have a PD-like definition. The lateral speed controller is shown in Figure 3. The first input is the lateral error estimation using the 3D position estimation of the homography (Figure 3(a)). The second input is the derivative of this error (Figure 3(b)). The output of the controller is the lateral speed command in m/s to be sent to the UAV (Figure 3(c)).

The longitudinal speed controller is shown in Figure 4. The first input is the front/back error estimation using the 3D position estimation of the homography (Figure 4(a)). The second input is the derivative of this error (Figure 4(b)). The output of the controller is the longitudinal speed command in m/s to be sent to the UAV (Figure 4(c)).

(a) Estimation of the altitude (mm), based on the homography of the helipad.

(b) Derivative of the altitude estimation (mm/s).

(c) Output of the fuzzy controller: velocity commands for the UAV's thrust in m/s.

Fig. 2. Fuzzy controller for the UAV's altitude.

(a) Estimation of the lateral error (m), based on the homography of the helipad.

(b) Derivative of the lateral error (m/s).

(c) Output of the fuzzy controller: velocity commands for the UAV's lateral speed in m/s.

Fig. 3. Fuzzy controller for the UAV's lateral speed.


Page 5: Autonomous Landing of an Unmanned Aerial Vehicle using ...€¦ · Autonomous Landing of an Unmanned Aerial Vehicle using Image-Based Fuzzy Control MiguelA.Olivares-Mendez ∗ Iv´anF.Mondrag´on∗∗

(a) Estimation of the front/back error (m), based on the homography of the helipad.

(b) Derivative of the front/back error (m/s).

(c) Output of the fuzzy controller: velocity commands for the UAV's longitudinal speed in m/s.

Fig. 4. Fuzzy controller for the UAV's longitudinal speed.

The product t-norm is used for rule conjunction. Since the rule weights will be optimized with the CE method, the defuzzification method used in this approach is a modification of the height method, in which we introduce the value of the weight assigned to each rule in the defuzzification process. Equation 9 shows the defuzzification method.

y = \frac{\sum_{l=1}^{M} y^l \left( \prod_{i=1}^{N} \mu_{x_i^l}(x_i) \right)}{\sum_{l=1}^{M} \left( \prod_{i=1}^{N} \mu_{x_i^l}(x_i) \right)}    (9)

where N and M represent the number of input variables and the total number of rules, respectively, \mu_{x_i^l} denotes the membership function of the l-th rule for the i-th input variable, and y^l represents the output of the l-th rule.

4. EXPERIMENTS

4.1 UAV platform

To test the fuzzy control approach and the 3D position estimation for the autonomous landing task, a real RC helicopter has been used. This aircraft is an electric SR20 helicopter, shown in Figure 5. It is a modified Xcell Electric RC helicopter.

This aircraft is equipped with an Xscale-based flight computer augmented with sensors (GPS, IMU and magnetometer, fused with a Kalman filter for state estimation). Additionally, it has a VIA mini-ITX 1.5 GHz onboard computer with 2 GB RAM and a wireless interface. The system runs in a client-server architecture using TCP/UDP messages

Fig. 5. Autonomous electric helicopter SR20.

with Ubuntu Linux OS, working in a multi-client wireless 802.11g ad-hoc network, allowing the integration of vision systems and vision tasks with the flight control. This architecture allows embedded applications to run onboard the autonomous helicopter while it interacts with external processes through a high-level switching layer. The visual control system and additional external processes are also integrated with the flight control through this layer using TCP/UDP messages. The layer is based on a communications API where all messages and data types are defined.

The selected vision sensor is a monochrome CCD FireWire camera with a resolution of 640x480 pixels. The camera is calibrated before each test, so the intrinsic parameters are known. The camera is installed in such a way that it is looking downward with relation to the UAV. A rectangular helipad with known dimensions is used as the reference object to estimate the 3D position of the UAV. It is aligned in such a way that its axes are parallel to the local plane North-East axes. The helipad was designed in such a way that it produces many distinctive corners for the visual tracking. Figure 6 shows the helipad used and the coordinate systems involved in the pose estimation.

Fig. 6. Helipad, camera and UAV coordinate systems.

Figure 7 shows the control loop designed for the vision-based fuzzy control approach for this specific task. In this figure it can be seen that the UAV has internal control loops for the stability of the system, based on the IMU and GPS information. The presented control approach is an external control loop based on vision that works at 8 Hz, which means that the system processes 8 frames per second.

Page 6: Autonomous Landing of an Unmanned Aerial Vehicle using ...€¦ · Autonomous Landing of an Unmanned Aerial Vehicle using Image-Based Fuzzy Control MiguelA.Olivares-Mendez ∗ Iv´anF.Mondrag´on∗∗

Fig. 7. Control Loop for the autonomous landing task.
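To make the structure of this outer loop concrete, the sketch below strings together the helpers sketched in the previous sections. It is only an illustration: the grab_frame and send_velocity_command callables are hypothetical stand-ins for the camera driver and the UAV communication layer, and the rule bases are the hypothetical ones shown above.

```python
import time
import numpy as np

# Assumes the earlier sketches: frame_to_frame_homography,
# update_world_to_image_homography, pose_from_homography,
# ConstantVelocityKF and pd_fuzzy_output.

def vision_control_loop(K, H_0w, rules_lon, rules_lat, rules_alt,
                        grab_frame, send_velocity_command, dt=1.0 / 8.0):
    """Outer 8 Hz vision-based control loop (Figure 7), sketched under the
    assumptions above; the inner stability loops run on the flight computer."""
    kf = ConstantVelocityKF(dt)
    H_i0 = np.eye(3)
    prev = grab_frame()
    while True:
        curr = grab_frame()
        H = frame_to_frame_homography(prev, curr)
        if H is not None:
            H_i0, H_iw = update_world_to_image_homography(H_i0, H, H_0w)
            _, t = pose_from_homography(H_iw, K)
            kf.predict()
            x, y, z = kf.correct(t / 1000.0)               # helipad position, mm -> m
            vx = pd_fuzzy_output(x, kf.x[3], rules_lon)    # longitudinal speed command
            vy = pd_fuzzy_output(y, kf.x[4], rules_lat)    # lateral speed command
            vz = pd_fuzzy_output(z, kf.x[5], rules_alt)    # descent speed command
            send_velocity_command(vx, vy, vz)
        prev = curr
        time.sleep(dt)
```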

4.2 Tests of the 3D Position Estimation

The position estimation tests based on the homography are presented next. Each test begins when the UAV is hovering over the helipad, at which moment the helipad is detected, tracked and used to estimate the 3D position of the landmark w.r.t. the aircraft. One test begins at 4.2 meters of altitude and the other at 10 meters. The estimated 3D position is compared with the helicopter position estimated by the autopilot (IMU+GPS data) on the local plane with reference to the takeoff point (center of the helipad). Because the local tangent plane to the helicopter is defined in such a way that the X axis is the North position, the Y axis is the East position and the Z axis is the Down position (negative), the measured X and Y values must be rotated according to the helicopter heading or yaw angle, in order to be comparable with the estimated values obtained from the homographies. Figures 8, 9 and 10 show the landmark position with respect to the UAV.
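A minimal sketch of the comparison step described above, rotating the autopilot's North-East position by the heading; sign conventions on the real platform may differ:

```python
import numpy as np

def rotate_ned_by_yaw(x_north, y_east, yaw_rad):
    """Rotate the local-plane (North, East) position by the helicopter yaw so it
    can be compared with the homography-based estimates. Sketch only."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    x_body = c * x_north + s * y_east
    y_body = -s * x_north + c * y_east
    return x_body, y_body
```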

[Figure: X displacement (mm) vs. frame for Flight 1 (RMSE = 171) and Flight 2 (RMSE = 492.5), homography estimation vs. IMU data.]

Fig. 8. Measures of the X axis of the UAV (roll) based on the homography estimation.

Results show a good performance of the visual values compared with the IMU+GPS state estimation data. In general, the estimated and state estimation data have the same behavior for both test sequences. For X and Y there is a small error between the aircraft pose state and the values estimated using the visual system, giving a maximum root mean squared error (RMSE) of 0.42 m on the X axis and 0.16 m on the Y axis. The estimated altitude Z has a small error, with an RMSE of 0.16 m for flight 1 and 0.85 m for test 2. Although results are good for the height estimation, it is important to remember that the state altitude estimation has an accuracy of ±0.5 m, so the reference altitude used to validate our approach has a large uncertainty. Finally, the yaw angle is correctly estimated,

[Figure: Y displacement (mm) vs. frame for Flight 1 (RMSE = 82.7) and Flight 2 (RMSE = 163.5), homography estimation vs. IMU data.]

Fig. 9. Measures of the Y axis of the UAV (pitch) based on the homography estimation.

[Figure: Z displacement (mm) vs. frame for Flight 1 (RMSE = 161) and Flight 2 (RMSE = 857.3), homography estimation vs. IMU data.]

Fig. 10. Measures of the altitude of the UAV based on the homography estimation.

with an error of 2° between the IMU and the estimated data for the first flight, and 4° for the second test.

Results have also shown that the system correctly estimates the 3D position when up to 70% of the landmark is occluded or out of the camera field of view, as Figure 11 shows.

4.3 Real tests

For the autonomous landing test, the helicopter takes off and flies initially by remote control. During the whole test it is possible to see the onboard camera image in the ground station. When the aircraft is at an altitude of around four meters and the helipad is in the field of view of the camera, it is selected autonomously and the image processing starts. The longitudinal and lateral controllers work during all the time that the image processing is active, but the altitude controller works only when the lateral and longitudinal errors are lower than a fixed value, that is, when the helipad is centered in the image. Furthermore, the altitude controller stops when the UAV's altitude estimation is lower than 1.5 meters, reducing the power of the motor to finish the landing task.
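A minimal sketch of this gating logic; the centering tolerance below is an illustrative threshold, not the fixed value used on the real helicopter:

```python
def landing_velocity_commands(x_err, y_err, z_est, vx, vy, vz,
                              center_tol=0.25, cutoff_alt=1.5):
    """Gate the controller outputs as described above: lateral/longitudinal
    commands always applied while the helipad is tracked, descent only when
    the helipad is centered, and the descent stops below 1.5 m. Sketch only."""
    centered = abs(x_err) < center_tol and abs(y_err) < center_tol
    if z_est < cutoff_alt:
        vz_cmd = 0.0          # hand over to the final motor power reduction
    elif centered:
        vz_cmd = vz           # descend toward the helipad
    else:
        vz_cmd = 0.0          # hold altitude until centered
    return vx, vy, vz_cmd
```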


Fig. 11. 3D pose estimation occlusion robustness. The system correctly estimates the 3D position when up to 70% of the landmark is occluded or out of the camera field of view.

Figure 12 shows the 3D reconstruction of the autonomous landing test using the GPS data. Figure 13 shows the measurements done using the homography-based 3D positioning for the longitudinal and lateral errors, and the altitude estimation. In this flight the autonomous landing starts at 4 meters. The RMSE value for the longitudinal measurements in this experiment is 0.7344 meters and for the lateral measurements it is 0.7199 meters.

Fig. 12. 3D flight reconstruction of a fully autonomous landing test.

The autonomous landing task was accomplished successfully with the controller and vision approach developed in this work. The RMSE values of the lateral and longitudinal errors are within reasonable limits, taking into account the high vibration of the aircraft, the delay of the response of this type of system, and the high sensitivity of a VTOL to wind disturbances.

Some videos related to this work can be found in M. A. Olivares-Mendez (2013).

Fig. 13. Homography estimation for longitudinal and lateral error, and altitude estimation, for a fully autonomous landing.

5. CONCLUSIONS AND FUTURE WORK

This work presented a fuzzy control approach to manage the longitudinal and lateral speeds, and the altitude, of an unmanned aerial vehicle using vision for the autonomous landing task. An unmanned helicopter with a downward-looking camera was used for this specific task. The image processing is done on an onboard computer and estimates the position of the UAV based on the homography estimation of a known helipad. This information is filtered by an extended Kalman filter and then sent to the control system to keep the helipad centered in the image by the longitudinal and lateral speed controllers, and to land on it by the altitude controller. The high vibrations of the UAV, which are propagated to the fixed downward camera, affect the position estimation done by the image processing algorithm, but they are managed gently by the fuzzy control approach, accomplishing the task successfully with a reduced RMSE value.

After accomplishing the autonomous landing task with a static helipad, the next step is to land on a moving ground target, and then on a ship. To increase the accuracy of the estimation, the authors are working on fusing other sensors, such as a laser.

ACKNOWLEDGEMENTS

The work reported in this paper is the result of several research stages at the Computer Vision Group - Universidad Politecnica de Madrid. The authors would like to thank the Universidad Politecnica de Madrid, the Consejería de Educación de la Comunidad de Madrid and the Fondo Social Europeo (FSE) for some of the authors' PhD scholarships, the Australian Research Centre for Aerospace Automation and the European Commission CORDIS. This work has been sponsored by the Spanish Science and Technology Ministry under the grant CICYT DPI2010-20751-C02-01, and the China Scholarship Council (CSC).

REFERENCES

Bouguet, J.Y. (1999). Pyramidal implementation of the Lucas-Kanade feature tracker. Technical report, Intel Corporation, Microprocessor Research Labs, Santa Clara, CA 95052.

Cesetti, A., Frontoni, E., Mancini, A., Zingaretti, P., and Longhi, S. (2010). A vision-based guidance system for UAV navigation and safe landing using natural landmarks. Journal of Intelligent and Robotic Systems, 57(1-4), 233–257.


Chitrakaran, V., Dawson, D., Chen, J., and Feemster, M. (2005). Vision assisted autonomous landing of an unmanned aerial vehicle. In Decision and Control, 2005 and 2005 European Control Conference. CDC-ECC '05. 44th IEEE Conference on, 1465–1470. doi:10.1109/CDC.2005.1582365.

Criminisi, A., Reid, I.D., and Zisserman, A. (1999). A plane measuring device. Image Vision Comput., 17(8), 625–634.

De Wagter, C. and Mulder, J. (2005). Towards vision-based UAV situation awareness. AIAA Guidance, Navigation, and Control Conference and Exhibit.

Fischer, M.A. and Bolles, R.C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.

Fucen, Z., Haiqing, S., and Hong, W. (2009). The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle. In Information and Automation, 2009. ICIA '09. International Conference on, 117–122. doi:10.1109/ICINFA.2009.5204904.

Hermansson, J. (2010). Vision and GPS based autonomous landing of an unmanned aerial vehicle. Ph.D. thesis, Linkoping University, Department of Electrical Engineering, Automatic Control, Sweden.

Lange, S., Sunderhauf, N., and Protzel, P. (2008). Autonomous landing for a multirotor UAV using vision. In Workshop Proceedings of SIMPAR Intl. Conf. on Simulation, Modeling and Programming for Autonomous Robots, 482–491. Venice, Italy.

M. A. Olivares-Mendez, I. Mondragon, P. Campoy (2013). Autonomous landing of an unmanned aerial vehicle using image-based fuzzy control. Test videos. www.vision4uav.eu/?q=researchline/autonomousLanding_REDUAS13. Computer Vision Group, Polytechnic University of Madrid.

Merz, T., Duranti, S., and Conte, G. (2004). Autonomous landing of an unmanned helicopter based on vision and inertial sensing. In International Symposium on Experimental Robotics. Singapore.

Merz, T., Duranti, S., and Conte, G. (2006). Autonomous landing of an unmanned helicopter based on vision and inertial sensing. In M. Ang and O. Khatib (eds.), Experimental Robotics IX, volume 21 of Springer Tracts in Advanced Robotics, 343–352. Springer Berlin / Heidelberg.

Mondragon, I.F., Campoy, P., Martinez, C., and Olivares-Mendez, M. (2010). 3D pose estimation based on planar object tracking for UAVs control. In Robotics and Automation (ICRA), 2010 IEEE International Conference on, 35–41. doi:10.1109/ROBOT.2010.5509287.

Nonami, K., Kendoul, F., Suzuki, S., Wang, W., and Nakazawa, D. (2010). Guidance and navigation systems for small aerial robots. In Autonomous Flying Robots, 219–250. Springer Japan.

Olivares-Mendez, M., Mondragon, I., Campoy, P., and Martinez, C. (2010). Fuzzy controller for UAV-landing task using 3D-position visual estimation. In Fuzzy Systems (FUZZ), 2010 IEEE International Conference on, 1–8. doi:10.1109/FUZZY.2010.5584396.

Saripalli, S., Montgomery, J., and Sukhatme, G. (2002). Vision-based autonomous landing of an unmanned aerial vehicle. In Robotics and Automation, 2002. Proceedings. ICRA '02. IEEE International Conference on, volume 3, 2799–2804. doi:10.1109/ROBOT.2002.1013656.

Saripalli, S., Montgomery, J.F., and Sukhatme, G.S. (2003). Visually-guided landing of an unmanned aerial vehicle. IEEE Transactions on Robotics and Automation, 19(3), 371–381.

Saripalli, S. and Sukhatme, G.S. (2007). Landing a helicopter on a moving target. In Proceedings of IEEE International Conference on Robotics and Automation, 2030–2035. Rome, Italy.

Shi, J. and Tomasi, C. (1994). Good features to track. In 1994 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94), 593–600.

Simon, G., Fitzgibbon, A., and Zisserman, A. (2000). Markerless tracking using planar structures in the scene. In Augmented Reality, 2000. (ISAR 2000). Proceedings. IEEE and ACM International Symposium on, 120–128. doi:10.1109/ISAR.2000.880935.

Simon, G. and Berger, M.O. (2002). Pose estimation for planar structures. Computer Graphics and Applications, IEEE, 22(6), 46–53. doi:10.1109/MCG.2002.1046628.

Sturm, P. (2000). Algorithms for plane-based pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, South Carolina, USA, 1010–1017.

Venugopalan, T., Taher, T., and Barbastathis, G. (2012). Autonomous landing of an unmanned aerial vehicle on an autonomous marine vehicle. In Oceans, 2012, 1–9.

Voos, H. (2009). Nonlinear landing control for quadrotor UAVs. In R. Dillmann, J. Beyerer, C. Stiller, J. Zöllner, and T. Gindele (eds.), Autonome Mobile Systeme 2009, Informatik aktuell, 113–120. Springer Berlin Heidelberg.

Voos, H. and Bou-Ammar, H. (2010). Nonlinear tracking and landing controller for quadrotor aerial robots. In CCA, 2136–2141.

Welch, G. and Bishop, G. (1995). An introduction to the Kalman filter. Technical report, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.

Wenzel, K., Masselli, A., and Zell, A. (2011). Automatic take off, tracking and landing of a miniature UAV on a moving carrier vehicle. Journal of Intelligent and Robotic Systems, 61, 221–238. doi:10.1007/s10846-010-9473-0.

Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–1334.

