
Robotics and Autonomous Systems 43 (2003) 51–78

Adaptive servo visual robot control

Oscar Nasisi∗, Ricardo Carelli
Instituto de Automática, Universidad Nacional de San Juan, Av. San Martín (Oeste) 1109, 5400 San Juan, Argentina

Received 11 December 2001; received in revised form 27 November 2002

Abstract

Adaptive controllers for robot positioning and tracking using direct visual feedback with camera-in-hand configuration are proposed in this paper. The controllers are designed to compensate for full robot dynamics. Adaptation is introduced to reduce the design sensitivity due to robot and payload dynamics uncertainties. It is proved that the control system achieves the motion control objective in the image coordinate system. Simulations are carried out to evaluate the controller performance. Also, discretization and measurement effects are considered in simulations.
© 2003 Elsevier Science B.V. All rights reserved.

Keywords: Visual motion; Robots; Tracking systems; Non-linear control systems; Adaptive control

1. Introduction

The use of visual information in the feedback loop presents an attractive solution to motion control of autonomous manipulators evolving in unstructured environments. In this context, robot motion control uses direct visual sensory information to achieve a desired relative position between the robot and a possibly moving object in the robot environment, which is called visual servoing. The visual positioning problem arises when the object is static, whereas when the object is moving, the visual tracking problem is established instead. Visual servoing is treated in references such as [1–6]. Visual servoing can be achieved either with the so-called fixed-camera approach or with the camera-in-hand approach. With the former, cameras fixed in the world-coordinate frame capture images of both the robot and its environment. The objective of this approach is to move the robot in such a way that its end-effector reaches some desired object visually captured by the cameras in the working space [7–10]. With the camera-in-hand configuration, a camera mounted on the robot moves rigidly attached to the robot hand. The objective of this approach is that the manipulator moves in such a way that the projection of a static or moving object will be at a desired location in the image as captured by the camera [11–17]. Most of the above cited works, however, have not considered the non-linear robot dynamics in the controller design. These controllers may result in unsatisfactory control under high performance requirements, including high-speed tasks and direct-drive robot actuators. In such cases, the robot dynamics has to be considered in the controller design, as partially done in [18,19] or fully included in [10,20,21]. In visual servoing control, some uncertainties may arise in relation to camera parameters, kinematics and robot dynamics. Some authors have addressed the problem of camera uncertainties, e.g. in [22–25] for different camera

∗ Corresponding author. E-mail addresses: [email protected] (O. Nasisi), [email protected] (R. Carelli).

0921-8890/03/$ – see front matter © 2003 Elsevier Science B.V. All rights reserved. doi:10.1016/S0921-8890(02)00370-6


configurations. Kinematics uncertainty is treated in [26]. With the ever-growing power of visual processing and a consequent increase in the frequency bandwidth of visual controllers, the issues of compensating robot dynamics and designing controllers that reduce sensitivity to dynamic uncertainties are becoming more important. As regards uncertainties in robot dynamics, robust control solutions have been proposed in [10,27,28], and adaptive control solutions in [29,30] for the fixed-camera visual servoing configuration. This paper deals with the adaptive control of robot dynamics using the camera-in-hand visual servoing approach. In previous work [31,32], the authors have proposed adaptive controllers for the camera-in-hand configuration assuming uncertainties in robot dynamics. The present paper proposes a positioning and a tracking adaptive controller using visual feedback for robots with camera-in-hand configuration. Feedback signals come directly from internal position and velocity sensors and from visual information. It is proved that the positioning control errors converge asymptotically to zero, and that the tracking errors for moving objects are ultimately bounded. The controllers are based on the robot's inverse dynamics, the definition of a manifold in the error space [33], an update-law [34], and, for moving objects, on the estimation of the target velocity. As far as the authors know, these are the first direct visual adaptive stable controllers which include non-linear robot dynamics. Although the main contribution of the work is the development of these adaptive controllers with the corresponding stability proofs, the paper also includes some simulation studies to show the performance of the proposed controllers. The paper is organized as follows. Section 2 presents the robot and the camera models. In Section 3, the adaptive controllers for positioning and tracking control objectives are presented. Section 4 gives the stability analysis for both controllers. Section 5 describes the simulation studies for a two degree-of-freedom (DOF) direct-drive manipulator. Finally, Section 6 presents some concluding remarks.

2. Robot and camera models

2.1. Model of the robot

When neither friction nor any other disturbance is present, the joint-space dynamics of an $n$-link manipulator can be written as [35]:

$$H(q)\ddot q + C(q,\dot q)\dot q + g(q) = \tau, \qquad (1)$$

where $q$ is the $n\times 1$ vector of joint displacements, $\tau$ the $n\times 1$ vector of applied joint torques, $H(q)$ the $n\times n$ symmetric positive definite manipulator inertia matrix, $C(q,\dot q)\dot q$ the $n\times 1$ vector of centripetal and Coriolis torques, and $g(q)$ the $n\times 1$ vector of gravitational torques. The robot model, Eq. (1), has some fundamental properties that can be exploited in the controller design [36].

Skew-symmetry. Using a proper definition of matrix $C$ (only the vector $C(q,\dot q)\dot q$ is uniquely defined), matrices $H$ and $C$ in Eq. (1) satisfy

$$x^T\left[\frac{dH(q)}{dt} - 2C(q,\dot q)\right]x = 0 \quad \forall x \in \mathbb{R}^n. \qquad (2)$$

Linearity. A part of the dynamics structure in Eq. (1) is linear in terms of a suitably selected set of robot and payload parameters:

$$H(q)\ddot q + C(q,\dot q)\dot q + g(q) = \Phi(q,\dot q,\ddot q)\,\theta, \qquad (3)$$

where $\Phi(q,\dot q,\ddot q)$ is an $n\times m$ matrix and $\theta$ is an $m\times 1$ vector containing the selected set of robot and payload parameters.


2.2. Robot differential kinematics

The differential kinematics of a manipulator gives the relationship between the joint velocities $\dot q$, the corresponding end-effector translational velocity ${}^W v$, and angular velocity ${}^W\omega$. They are related through the geometric Jacobian $J_g(q)$ [37]:

$$\begin{bmatrix} {}^W v \\ {}^W \omega \end{bmatrix} = J_g(q)\,\dot q. \qquad (4)$$

If the end-effector pose (position and orientation) is expressed by regarding a minimal representation in the operational space, it is possible to compute the Jacobian matrix through differentiation of the direct kinematics with respect to joint positions. The resulting Jacobian, termed the analytical Jacobian $J_A(q)$, is related to the geometric Jacobian through [37]:

$$J_g(q) = \begin{bmatrix} I & 0 \\ 0 & T(q) \end{bmatrix} J_A(q), \qquad (5)$$

where $T(q)$ is a transformation matrix that depends on the parameterization of the end-effector orientation.
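As a concrete illustration, the sketch below (ours, not code from the paper) computes the translational geometric Jacobian of the planar two-link arm used later in Section 5, with the link lengths of Table 1. The joint angles are assumed measured from the downward vertical, consistent with the gravity terms given in Section 5; for a planar positioning task the analytical and geometric Jacobians of the end-effector position coincide, so $T(q)$ plays no role here.

```python
import numpy as np

def jacobian_planar(q, l1=0.45, l2=0.55):
    """Translational geometric Jacobian of a planar two-link arm.

    Assumes the end-effector position
        x = l1*sin(q1) + l2*sin(q1+q2),  y = -l1*cos(q1) - l2*cos(q1+q2),
    with joint angles measured from the downward vertical (an assumption
    consistent with the gravity vector g(q) of Section 5).
    """
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[l1 * c1 + l2 * c12, l2 * c12],
                     [l1 * s1 + l2 * s12, l2 * s12]])

# End-effector velocity from joint velocities, Eq. (4) restricted to translation:
q, dq = np.array([0.3, 0.5]), np.array([0.1, -0.2])
v = jacobian_planar(q) @ dq
```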

2.3. Camera model

A TV camera is supposed to be mounted at the robot end-effector. Let the origin of the camera coordinate frame (end-effector frame) with respect to the robot coordinate frame be ${}^W p_C = {}^W p_C(q) \in \mathbb{R}^{m_0}$ with $m_0 = 3$. The orientation of the camera frame with respect to the robot frame is denoted as ${}^W R_C = {}^W R_C(q) \in SO(3)$. The image captured by the camera supplies a two-dimensional array of brightness values from a three-dimensional scene. This image may undergo various types of computer processing to enhance image properties and extract image features. It is assumed here that the image features are the projections onto the 2D image plane of 3D points in the scene space.

A perspective projection with focal length $\lambda$ is also assumed, as depicted in Fig. 1. An object (feature) point ${}^C p_O$ with coordinates $[{}^C p_x\ \ {}^C p_y\ \ {}^C p_z]^T \in \mathbb{R}^3$ in the camera frame projects onto a point in the image plane with image coordinates $[u\ \ v]^T \in \mathbb{R}^2$. The position $\xi = [u\ \ v]^T \in \mathbb{R}^2$ of an object feature point in the image will be referred to as an image feature point [38]. In this paper, it is assumed that the object can be characterized by a set of feature points. For the sake of completeness, some preliminaries concerning single and multiple feature points are recalled below.

Fig. 1. Perspective projection.

2.3.1. Single feature point

Following the notation of [20], let ${}^W p_O \in \mathbb{R}^{m_0}$ be the position of an object feature point expressed in the robot coordinate frame. Therefore, the relative position of this object feature located in the robot workspace, with respect to the camera coordinate frame, is $[{}^C p_x\ \ {}^C p_y\ \ {}^C p_z]^T$. According to the perspective projection [4], the image feature point depends uniquely on the object feature position ${}^W p_O$ and the camera position and orientation, and is expressed as

$$\xi = \begin{bmatrix} u \\ v \end{bmatrix} = -\frac{\alpha\lambda}{{}^C p_z}\begin{bmatrix} {}^C p_x \\ {}^C p_y \end{bmatrix}, \qquad (6)$$

where $\alpha$ is the scaling factor in pixels/m due to camera sampling and ${}^C p_z < 0$. This model is also called the imaging model [20]. The time derivative yields

$$\dot\xi = -\frac{\alpha\lambda}{{}^C p_z}\begin{bmatrix} 1 & 0 & -{}^C p_x/{}^C p_z \\ 0 & 1 & -{}^C p_y/{}^C p_z \end{bmatrix}\begin{bmatrix} {}^C \dot p_x \\ {}^C \dot p_y \\ {}^C \dot p_z \end{bmatrix}. \qquad (7)$$

On the other hand, the position of the object feature point with respect to the camera frame is given by

$$\begin{bmatrix} {}^C p_x \\ {}^C p_y \\ {}^C p_z \end{bmatrix} = {}^C R_W(q)\,[{}^W p_O - {}^W p_C(q)]. \qquad (8)$$

By invoking the general formula for the velocity of a moving point in a moving frame with respect to a fixed frame [39], and considering a fixed object point, the time derivative of (8) can be expressed in terms of the camera translational and angular velocities as [13]

$$\begin{bmatrix} {}^C \dot p_x \\ {}^C \dot p_y \\ {}^C \dot p_z \end{bmatrix} = {}^C R_W\{-{}^W\omega_C \times ({}^W p_O - {}^W p_C(q)) - {}^W v_C\}. \qquad (9)$$

After operating, there results

$$\begin{bmatrix} {}^C \dot p_x \\ {}^C \dot p_y \\ {}^C \dot p_z \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 & 0 & -{}^C p_z & {}^C p_y \\ 0 & -1 & 0 & {}^C p_z & 0 & -{}^C p_x \\ 0 & 0 & -1 & -{}^C p_y & {}^C p_x & 0 \end{bmatrix}\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix}\begin{bmatrix} {}^W v_C \\ {}^W \omega_C \end{bmatrix}, \qquad (10)$$

where ${}^W v_C$ and ${}^W \omega_C$ stand for the camera's translational and angular velocities with respect to the robot frame, respectively.

The motion of the image feature point as a function of the camera velocity is obtained by substituting (10) into (7):

$$\dot\xi = -\frac{\alpha\lambda}{{}^C p_z}\begin{bmatrix} -1 & 0 & \dfrac{{}^C p_x}{{}^C p_z} & \dfrac{{}^C p_x\,{}^C p_y}{{}^C p_z} & -\dfrac{{}^C p_z^2 + {}^C p_x^2}{{}^C p_z} & {}^C p_y \\ 0 & -1 & \dfrac{{}^C p_y}{{}^C p_z} & \dfrac{{}^C p_z^2 + {}^C p_y^2}{{}^C p_z} & -\dfrac{{}^C p_x\,{}^C p_y}{{}^C p_z} & -{}^C p_x \end{bmatrix}\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix}\begin{bmatrix} {}^W v_C \\ {}^W \omega_C \end{bmatrix}. \qquad (11)$$

Instead of using the coordinates ${}^C p_x$ and ${}^C p_y$ of the object feature described in the camera coordinate frame, which are a priori unknown, it is usual to replace them by the coordinates $u$ and $v$ of the projection of such a feature point onto the image frame. Therefore, by using (7),

$$\dot\xi = J_{\mathrm{image}}(\xi,{}^C p_z)\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix}\begin{bmatrix} {}^W v_C \\ {}^W \omega_C \end{bmatrix}, \qquad (12)$$

where $J_{\mathrm{image}}(\xi,{}^C p_z)$ is the so-called image Jacobian defined by [4,13]:

$$J_{\mathrm{image}}(\xi,{}^C p_z) = \begin{bmatrix} \dfrac{\alpha\lambda}{{}^C p_z} & 0 & \dfrac{u}{{}^C p_z} & -\dfrac{uv}{\alpha\lambda} & \dfrac{\alpha^2\lambda^2 + u^2}{\alpha\lambda} & v \\ 0 & \dfrac{\alpha\lambda}{{}^C p_z} & \dfrac{v}{{}^C p_z} & -\dfrac{\alpha^2\lambda^2 + v^2}{\alpha\lambda} & \dfrac{uv}{\alpha\lambda} & -u \end{bmatrix}. \qquad (13)$$
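As a minimal sketch of Eq. (13) (illustrative code, with argument names chosen here, not taken from the paper), the following Python function evaluates the image Jacobian for a single feature point:

```python
import numpy as np

def image_jacobian(u, v, cpz, alpha, lam):
    """Image Jacobian of Eq. (13) for one feature point (a 2x6 matrix).

    u, v  : image coordinates of the feature point (pixels)
    cpz   : depth of the point in the camera frame (m)
    alpha : scale factor (pixels/m); lam: focal length (m)
    The six columns multiply the camera translational (3) and angular (3)
    velocity components, already expressed through ^C R_W as in Eq. (12).
    """
    al = alpha * lam
    return np.array([
        [al / cpz, 0.0, u / cpz, -u * v / al, (al**2 + u**2) / al,  v],
        [0.0, al / cpz, v / cpz, -(al**2 + v**2) / al, u * v / al, -u],
    ])

# With the camera of Table 2 (alpha = 72727 pixels/m, lam = 0.008 m):
Jimg = image_jacobian(u=20.0, v=-35.0, cpz=-0.74, alpha=72727.0, lam=0.008)
```

For $p$ feature points, the extended Jacobian of Eq. (14) below is simply the vertical stacking (e.g. `np.vstack`) of the $p$ single-point matrices.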

Finally, by using (4) and (5) we can express $\dot\xi$ in terms of the robot joint velocity $\dot q$ as

$$\dot\xi = J_{\mathrm{image}}(\xi,{}^C p_z)\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix} J_g(q)\,\dot q = J_{\mathrm{image}}(\xi,{}^C p_z)\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & T(q) \end{bmatrix} J_A(q)\,\dot q.$$

2.3.2. Multiple feature points

In applications to objects located in a three-dimensional space, three or more feature points are required to make the visual servo control solvable [17,21]. The above imaging model can be extended to a static object located in the robot workspace, having $p$ object feature points. In this case, ${}^W p_O \in \mathbb{R}^{p\cdot m_0}$ is a constant vector which contains the $p$ object feature points, and the image feature vector $\xi \in \mathbb{R}^{2p}$ is redefined as

$$\xi = \begin{bmatrix} u_1 \\ v_1 \\ \vdots \\ u_p \\ v_p \end{bmatrix} = -\alpha\lambda\begin{bmatrix} {}^C p_{x1}/{}^C p_{z1} \\ {}^C p_{y1}/{}^C p_{z1} \\ \vdots \\ {}^C p_{xp}/{}^C p_{zp} \\ {}^C p_{yp}/{}^C p_{zp} \end{bmatrix} \in \mathbb{R}^{2p}.$$

The extended image Jacobian $J_{\mathrm{image}}(\xi,{}^C p_z) \in \mathbb{R}^{2p\times m_0}$ is given by

$$J_{\mathrm{image}}(\xi,{}^C p_z) = \begin{bmatrix} J_{\mathrm{image}}([u_1\ \ v_1]^T,\ {}^C p_{z1}) \\ \vdots \\ J_{\mathrm{image}}([u_p\ \ v_p]^T,\ {}^C p_{zp}) \end{bmatrix}, \qquad (14)$$

where ${}^C p_z = [{}^C p_{z1}\ \ {}^C p_{z2}\ \cdots\ {}^C p_{zp}]^T \in \mathbb{R}^p$.


Using Eqs. (13) and (14), the time derivative of the image feature vector can be expressed as

$$\dot\xi = J(q,\xi,{}^C p_z)\,\dot q, \qquad (15)$$

where

$$J(q,\xi,{}^C p_z) = J_{\mathrm{image}}(\xi,{}^C p_z)\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix}\begin{bmatrix} I & 0 \\ 0 & T(q) \end{bmatrix} J_A(q) \qquad (16)$$

will be called the Jacobian matrix hereafter in this paper.

2.3.3. Moving object

When the object moves in the robot framework, the derivative of Eq. (8) can be expressed as

$$\begin{bmatrix} {}^C \dot p_x \\ {}^C \dot p_y \\ {}^C \dot p_z \end{bmatrix} = {}^C R_W\{-{}^W\omega_C \times ({}^W p_O - {}^W p_{C\,\mathrm{org}}) + ({}^W \dot p_O - {}^W v_C)\}.$$

As both the camera-in-hand and the object are moving, there exists a relative velocity between them. Therefore the object velocity in the camera frame can be calculated as

$$\begin{bmatrix} {}^C \dot p_x \\ {}^C \dot p_y \\ {}^C \dot p_z \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 & 0 & -{}^C p_{zo} & {}^C p_{yo} \\ 0 & -1 & 0 & {}^C p_{zo} & 0 & -{}^C p_{xo} \\ 0 & 0 & -1 & -{}^C p_{yo} & {}^C p_{xo} & 0 \end{bmatrix}\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix}\begin{bmatrix} {}^W v_C \\ {}^W \omega_C \end{bmatrix} + {}^C R_W\,{}^W \dot p_O, \qquad (17),(18)$$

where ${}^W v_C$ and ${}^W \omega_C$ are the translational and angular velocities of the camera with respect to the robot frame. The movement of the feature point in the image plane as a function of the object velocity and the camera velocity is expressed by substituting (18) into (7):

$$\dot\xi = -\frac{\alpha\lambda}{{}^C p_{zo}}\begin{bmatrix} m_1^T \\ m_2^T \end{bmatrix}\begin{bmatrix} {}^C R_W(q) & 0 \\ 0 & {}^C R_W(q) \end{bmatrix} J_g(q)\,\dot q \qquad (19)$$

$$\phantom{\dot\xi =}\; -\frac{\alpha\lambda}{{}^C p_{zo}}\begin{bmatrix} 1 & 0 & -{}^C p_{xo}/{}^C p_{zo} \\ 0 & 1 & -{}^C p_{yo}/{}^C p_{zo} \end{bmatrix}{}^C R_W\,{}^W \dot p_O, \qquad (20)$$


where

$$m_1 = \begin{bmatrix} -1 \\ 0 \\ {}^C p_{xo}/{}^C p_{zo} \\ {}^C p_{xo}\,{}^C p_{yo}/{}^C p_{zo} \\ -({}^C p_{zo}^2 + {}^C p_{xo}^2)/{}^C p_{zo} \\ {}^C p_{yo} \end{bmatrix}, \qquad m_2 = \begin{bmatrix} 0 \\ -1 \\ {}^C p_{yo}/{}^C p_{zo} \\ ({}^C p_{zo}^2 + {}^C p_{yo}^2)/{}^C p_{zo} \\ -{}^C p_{xo}\,{}^C p_{yo}/{}^C p_{zo} \\ -{}^C p_{xo} \end{bmatrix}.$$

By analysing the last result in Eq. (20), it can be directly concluded that

$$\dot\xi = J(q,\xi,{}^C p_z)\,\dot q + J_O(q,{}^C p_O)\,{}^W \dot p_O, \qquad (21)$$

where

$$J_O(q,{}^C p_O) = -\frac{\alpha\lambda}{{}^C p_{zo}}\begin{bmatrix} 1 & 0 & -{}^C p_{xo}/{}^C p_{zo} \\ 0 & 1 & -{}^C p_{yo}/{}^C p_{zo} \end{bmatrix}{}^C R_W. \qquad (22)$$

A simple generalization to multiple feature points can be obtained similarly as in Section 2.3.2.
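A matching sketch of the object Jacobian $J_O$ of Eq. (22) follows (illustrative code with assumed argument names, not from the paper):

```python
import numpy as np

def object_jacobian(cpo, R_cw, alpha, lam):
    """J_O of Eq. (22): maps the object feature velocity expressed in the
    world frame (3-vector) into image-feature velocity (2-vector).

    cpo  : feature position [cpx, cpy, cpz] in the camera frame
    R_cw : 3x3 rotation matrix ^C R_W from world to camera frame
    """
    cpx, cpy, cpz = cpo
    M = np.array([[1.0, 0.0, -cpx / cpz],
                  [0.0, 1.0, -cpy / cpz]])
    return -(alpha * lam / cpz) * M @ R_cw

# Eq. (21) then reads: dxi = J @ dq + object_jacobian(...) @ w_dp_O
```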

3. Adaptive controller

3.1. Problem formulation

Two cases are considered: position control for a fixed object and tracking control for a moving object.

Case (a). The object does not move and a desired trajectory is given for the image features in the image plane. The following assumptions are considered:

Assumption 1. The object is fixed, ${}^W \dot p_O(t) = {}^W v_O(t) = 0$.

Assumption 2. There exists a joint position vector $q_d$ such that, for a fixed object, it is possible to reach the desired features vector $\xi_d$.

Assumption 3. For a given object situation ${}^W p_O$, there exists a neighbourhood of $q_d$ where $J$ is invertible and, additionally, $J$ and $J^{-1}$ are bounded.

Assumption 4. The depth ${}^C p_z$, i.e. the distance from the camera to the object, is available to be used by the controller. A practical way to obtain ${}^C p_z$ is by using external sensors such as ultrasound, or additional cameras in the so-called binocular stereo approach [9].

Assumption 1 reduces the control problem to a positioning one. Assumption 2 ensures that the control problem is solvable. Assumption 3 is required for technical reasons in the stability analysis.

Now, the position adaptive servo visual control problem can be formulated.


Control problem. By considering Assumptions 1–4, the desired features vector $\xi_d$, the initial estimates of the dynamic parameters $\theta$ in Eq. (3) and a given object situation ${}^C p_O$, find a control law

$$\tau = T(q,\dot q,\xi,\hat\theta) \qquad (23)$$

and a parameter update-law

$$\frac{d\hat\theta}{dt} = \Psi(q,\dot q,\xi,\hat\theta,t) \qquad (24)$$

such that the control error in the image plane $\tilde\xi(t) = \xi_d - \xi(t) \to 0$ as $t \to \infty$.

Case (b). The object moves along an unknown path. The following assumptions are considered:

Assumption 1. The object moves along a smooth trajectory with bounded velocity ${}^W \dot p_O(t) = {}^W v_O(t)$ and acceleration $d\,{}^W v_O(t)/dt = {}^W a_O(t)$.

Assumption 2. There exists a trajectory in the joint space $q_d(t)$ such that the vector of desired fixed features $\xi_d$ is achievable:

$$\xi_d = i({}^W p_C(q_d(t)),\,{}^W p_O(t)).$$

Assumption 3. For the target path ${}^W p_O(t)$, there exists a neighbourhood of $q_d(t)$ where $J$ is invertible and, additionally, $J$ and $J^{-1}$ are bounded.

Assumption 4. The depth ${}^C p_z$, i.e. the distance from the camera to the object, is available to be used by the controller. A practical way to obtain ${}^C p_z$ is by using external sensors such as ultrasound, or additional cameras in the so-called binocular stereo approach [9].

Assumption 1 establishes a practical restriction on the object trajectory. Assumption 2 ensures that the control problem is solvable. Assumption 3 is required for technical reasons in the stability analysis.

Now, the adaptive servo visual tracking control problem can be formulated.

Control problem. By considering Assumptions 1–4, the desired features vector $\xi_d$, the initial estimates of the dynamic parameters $\theta$ in (3), and the initial estimates of the target velocity ${}^W \hat v_O(t)$ and its derivative $d\,{}^W \hat v_O(t)/dt$, find a control law

$$\tau = T(q,\dot q,\xi,\hat\theta,{}^W \hat v_O,{}^W \hat a_O) \qquad (25)$$

and a parameter update-law

$$\frac{d\hat\theta}{dt} = \Psi(q,\dot q,\xi,\hat\theta,{}^W \hat v_O,{}^W \hat a_O,t) \qquad (26)$$

such that the control error in the image plane $\tilde\xi(t) = \xi_d - \xi(t)$ is ultimately bounded by a sufficiently small ball $B_r$.

3.2. Control and update laws

Case (a). Let us define a signal $\upsilon$ in the image error space:

$$\upsilon = \frac{d\tilde\xi}{dt} + \Lambda\tilde\xi. \qquad (27)$$


The following control law is considered:

$$\tau = K\upsilon' + \Phi\hat\theta \qquad (28)$$

with

$$\upsilon' = J^{-1}\upsilon, \qquad (29)$$

$$\Phi(q,\dot q,\xi,\upsilon)\,\hat\theta = -\hat H(q)\left\{(J_d^T J)^{-1}\Lambda(J_d^T J)\dot q + (J_d^T J)^{-1}(J_d^T \dot J)\dot q - \frac{d}{dt}\left[(J_d^T J)^{-1}\right]\upsilon\right\} + \hat C(q,\dot q)\left[(J_d^T J)^{-1}\Lambda J_d^T\tilde\xi\right] + \hat g(q), \qquad (30)$$

where $K$ and $\Lambda$ are positive definite gain $(n\times n)$ and $(2p\times 2p)$ matrices, and $\hat H(q)$, $\hat C(q,\dot q)$ and $\hat g(q)$ are the estimates of $H$, $C$ and $g$, respectively. Parameterization of (28) is possible due to the linearity property (3).

To estimate $\theta$, the following parameter update-law of the gradient type [40] is used:

$$\frac{d\hat\theta}{dt} = \Gamma\Phi^T(q,\dot q,\upsilon,\xi)\,\upsilon' \qquad (31)$$

with $\Gamma$ a positive definite adaptation gain $(m\times m)$ matrix.

with a positive definite adaptation gain(m×m) matrix.Case(b). Let us define the same signalυ in the image error space as forCase(a):

υ = dξ

dt+ �ξ = −ξ + �ξ (32)

with ξ = dξ/dt = Jq + JOWvO.Target velocityWvO and its time derivative dWvO/dt can be estimated through a second order filter:

W vO = b0p

p2 + b1p+ b0

WpO(t), (33)

W aO = dW vOdt

= b0p2

p2 + b1p+ b0

WpO(t). (34)

Therefore

υ′ = −dξ

dt+ �ξ (35)

with

dt= Jq + JOW vO. (36)
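The estimator (33)–(34) is just a pair of stable linear filters. The sketch below (illustrative, one coordinate only, not from the paper) checks them on the circular target used later in Section 5.2 by means of SciPy:

```python
import numpy as np
from scipy import signal

# Velocity/acceleration estimators of Eqs. (33)-(34) for one coordinate of
# a target on the circle of Section 5.2 (r = 0.2 m, w = 1.57 rad/s).
b0, b1 = 1.0e4, 200.0
vel_f = signal.TransferFunction([b0, 0.0], [1.0, b1, b0])        # b0 s / (s^2 + b1 s + b0)
acc_f = signal.TransferFunction([b0, 0.0, 0.0], [1.0, b1, b0])   # b0 s^2 / (s^2 + b1 s + b0)

t = np.linspace(0.0, 4.0, 4001)
r, w = 0.2, 1.57
p = r * np.cos(w * t)                    # measured target coordinate ^W p_O
_, v_hat, _ = signal.lsim(vel_f, p, t)   # estimate of dp/dt
_, a_hat, _ = signal.lsim(acc_f, p, t)   # estimate of d^2 p/dt^2
# After the filter transient, v_hat tracks -r*w*sin(w*t).
```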

Now, the following control law is proposed:

$$\tau = K\hat\upsilon' + \Phi\hat\theta \qquad (37)$$

with

$$\hat\upsilon' = J^{-1}\hat\upsilon = -\dot q - J^{-1}J_O\,{}^W \hat v_O + J^{-1}\Lambda\tilde\xi, \qquad (38)$$

$$\Phi(q,\hat\upsilon,{}^W \hat v_O,{}^W \dot{\hat v}_O)\,\hat\theta = \hat H(q)\{-J^{-1}\dot J\hat\upsilon' - J^{-1}\dot J\dot q - J^{-1}\dot J_O\,{}^W \hat v_O - J^{-1}J_O\,{}^W \dot{\hat v}_O - J^{-1}\Lambda J\dot q - J^{-1}\Lambda J_O\,{}^W \hat v_O\} + \hat C(q,\dot q)\{-J^{-1}J_O\,{}^W \hat v_O + J^{-1}\Lambda\tilde\xi\} + \hat g(q), \qquad (39)$$

where $K$ and $\Lambda$ are positive definite gain $(n\times n)$ and $(2p\times 2p)$ matrices, and $\hat H(q)$, $\hat C(q,\dot q)$ and $\hat g(q)$ are the estimates of $H(q)$, $C(q,\dot q)$ and $g(q)$. Parameterization of (37) is possible due to the linearity property (3).


To estimate $\theta$, the following parameter update-law is considered:

$$\frac{d\hat\theta}{dt} = \Gamma\Phi^T(q,\hat\upsilon,{}^W \hat v_O,{}^W \hat a_O)\,\hat\upsilon' - L\hat\theta \qquad (40)$$

with $\Gamma$ and $L$ positive definite adaptation gain $(m\times m)$ matrices.

4. Stability analysis

In this section, two propositions describe the stability properties of the adaptive controllers proposed in Section 3. First, the following technical lemma is considered.

Lemma 1. Let the transfer function $H \in R(s)^{n\times n}$ be exponentially stable and strictly proper. Let $u$ and $y$ be its input and output, respectively. If $u \in L_2^n \cap L_\infty^n$, then $y, \dot y \in L_2^n \cap L_\infty^n$ and $y \to 0$ as $t \to \infty$.

Lemma 1, shown in [33], implies that filtering a square-integrable and bounded function by an exponentially stable and strictly proper filter results not only in a square-integrable and bounded output, but also in this property holding for the output's time derivative. Besides, it leads the output to converge to zero.

Proposition 1 (Case (a)). Let us consider the control law (28) and update law (31) in closed loop with the robot and camera models (1) and (6), as well as Assumptions 1–4 for Case (a). Then, there exists a neighbourhood of $q_d(t)$ such that:

(a) $\tilde\theta = \theta - \hat\theta \in L_\infty^m$.
(b) $\upsilon \in L_\infty^{2p} \cap L_2^{2p}$.
(c) $\tilde\xi(t) = \xi_d - \xi \to 0$ as $t \to \infty$.

Proof. The closed-loop system is obtained by combining (1) and (28):

$$KJ^{-1}\upsilon + \Phi\hat\theta = H\ddot q + C\dot q + g. \qquad (41)$$

By using $\tilde\theta(t) = \theta - \hat\theta(t)$ and Eqs. (29) and (31) we obtain

$$K\upsilon' + H\dot\upsilon' + C\upsilon' = \Phi\tilde\theta. \qquad (42)$$

Let us consider the local non-negative function of time

$$V = \tfrac12\upsilon^T J^{-T} H J^{-1}\upsilon + \tfrac12\tilde\theta^T\Gamma^{-1}\tilde\theta \qquad (43)$$

$$\phantom{V} = \tfrac12\upsilon'^T H\upsilon' + \tfrac12\tilde\theta^T\Gamma^{-1}\tilde\theta, \qquad (44)$$

whose time derivative along the trajectories of (42) is

$$\dot V = \upsilon^T J^{-T}\left[-KJ^{-1}\upsilon + \Phi\tilde\theta - CJ^{-1}\upsilon\right] + \tfrac12\upsilon^T J^{-T}\dot H J^{-1}\upsilon + \tilde\theta^T\Gamma^{-1}\frac{d\tilde\theta}{dt}. \qquad (45)$$

By regarding the skew-symmetry property (2) and the parameter update-law of Eq. (31), it results that

$$\dot V = -\upsilon^T J^{-T} K J^{-1}\upsilon \le 0. \qquad (46)$$

Eqs. (31) and (46) imply $\tilde\theta \in L_\infty^m$ and $\upsilon \in L_\infty^{2p}$. By time-integrating $\dot V$, it can also be easily shown that $\upsilon \in L_2^{2p}$. Finally, to prove (c), we note that $\upsilon = (d\tilde\xi/dt) + \Lambda\tilde\xi$. By regarding $\tilde\xi(t)$ as the output of an exponentially stable and strictly proper linear filter with input $\upsilon$, Lemma 1 allows us to conclude that $\tilde\xi(t) \to 0$ as $t \to \infty$. □


Remark 1. If more features than DOF of the robot are taken, a non-square Jacobian matrix is obtained. In this case a re-definition of $\upsilon$ as

$$\upsilon = \frac{d(J^T\tilde\xi)}{dt} + \Lambda(J^T\tilde\xi) \qquad (47)$$

should be used. A reasoning similar to that of Proposition 1 enables one to reach the same conclusions about control system stability.

Proposition 2 (Case (b)). Let us consider the control law (37) and update law (40) in closed loop with the robot and camera models (1) and (6), as well as Assumptions 1–4 for Case (b). Then, there exists a neighbourhood of $q_d(t)$ such that:

(a) $\tilde\theta = \theta - \hat\theta \in L_\infty^m$.
(b) $\hat\upsilon' \in L_\infty^n$.
(c) $\tilde\xi(t) = \xi_d - \xi$ is ultimately bounded.

Proof. The closed-loop system is obtained by combining (1) and (37):

$$K\hat\upsilon' + \Phi\hat\theta = H\ddot q + C\dot q + g. \qquad (48)$$

Using $\tilde\theta = \theta - \hat\theta$ and Eqs. (38) and (39) it is obtained:

$$K\hat\upsilon' + H\,\widehat{D\upsilon}' + C\hat\upsilon' = \Phi\tilde\theta, \qquad (49)$$

where $\widehat{D\upsilon}'$ is the estimate of the $\upsilon'$ time-derivative. Also, $\widehat{D\upsilon}' = D\upsilon' + \varepsilon'$, with $\varepsilon' = J^{-1}\Lambda J_O({}^W\hat v_O - {}^W v_O)$, where ${}^W\hat v_O - {}^W v_O = \varepsilon_O$ is the estimation error and $D\upsilon'$ the time-derivative of $\upsilon'$.

Then

$$H\,D\upsilon' = -(K + C)\hat\upsilon' + \Phi\tilde\theta - \varepsilon, \qquad (50)$$

where $\varepsilon = H\varepsilon'$.

Let us consider the local non-negative function of time:

$$V = \tfrac12\hat\upsilon'^T H\hat\upsilon' + \tfrac12\tilde\theta^T\Gamma^{-1}\tilde\theta, \qquad (51)$$

whose time-derivative along the trajectories of (50), considering as well the parameter update-law (40), is

$$\dot V = \hat\upsilon'^T\left[-(K + C)\hat\upsilon' + \Phi\tilde\theta - \varepsilon\right] + \tfrac12\hat\upsilon'^T\dot H\hat\upsilon' + \tilde\theta^T\left[-\Phi^T\hat\upsilon' + \Gamma^{-1}L\hat\theta\right]. \qquad (52)$$

By regarding the skew-symmetry property (2), there results

$$\dot V = -\hat\upsilon'^T K\hat\upsilon' - \tilde\theta^T\Gamma^{-1}L\tilde\theta - \hat\upsilon'^T\varepsilon + \tilde\theta^T\Gamma^{-1}L\theta. \qquad (53)$$

By defining the following expressions:

$$\mu_K = \sigma_{\min}(K), \qquad \mu_{\Gamma^{-1}L} = \sigma_{\min}(\Gamma^{-1}L), \qquad \gamma_{\Gamma^{-1}L} = \sigma_{\max}(\Gamma^{-1}L), \qquad (54)$$

where $\sigma_i = \sqrt{\lambda_i(A^T A)}$ denotes the singular values of $A$ for $i: \min, \max$, we obtain

$$\dot V \le -\mu_K\|\hat\upsilon'\|^2 - \mu_{\Gamma^{-1}L}\|\tilde\theta\|^2 + \|\hat\upsilon'\|\,\|\varepsilon\| + \gamma_{\Gamma^{-1}L}\|\tilde\theta\|\,\|\theta\|. \qquad (55)$$

From the expressions

$$\left(\frac1\zeta\|\tilde\theta\| - \zeta\|\theta\|\right)^2 = \frac1{\zeta^2}\|\tilde\theta\|^2 - 2\|\tilde\theta\|\,\|\theta\| + \zeta^2\|\theta\|^2,$$

$$\left(\frac1\eta\|\hat\upsilon'\| - \eta\|\varepsilon\|\right)^2 = \frac1{\eta^2}\|\hat\upsilon'\|^2 - 2\|\hat\upsilon'\|\,\|\varepsilon\| + \eta^2\|\varepsilon\|^2$$

with $\zeta, \eta \in \mathbb{R}^+$, it can be written:

$$\|\tilde\theta\|\,\|\theta\| = \frac1{2\zeta^2}\|\tilde\theta\|^2 + \frac{\zeta^2}2\|\theta\|^2 - \frac12\left(\frac1\zeta\|\tilde\theta\| - \zeta\|\theta\|\right)^2,$$

$$\|\hat\upsilon'\|\,\|\varepsilon\| = \frac1{2\eta^2}\|\hat\upsilon'\|^2 + \frac{\eta^2}2\|\varepsilon\|^2 - \frac12\left(\frac1\eta\|\hat\upsilon'\| - \eta\|\varepsilon\|\right)^2.$$

By neglecting the negative terms we obtain the following inequalities:

$$\|\tilde\theta\|\,\|\theta\| \le \frac1{2\zeta^2}\|\tilde\theta\|^2 + \frac{\zeta^2}2\|\theta\|^2, \qquad \|\hat\upsilon'\|\,\|\varepsilon\| \le \frac1{2\eta^2}\|\hat\upsilon'\|^2 + \frac{\eta^2}2\|\varepsilon\|^2. \qquad (56)$$

Now, going back to $\dot V$:

$$\dot V \le -\left(\mu_K - \frac1{2\eta^2}\right)\|\hat\upsilon'\|^2 - \left(\mu_{\Gamma^{-1}L} - \frac{\gamma_{\Gamma^{-1}L}}{2\zeta^2}\right)\|\tilde\theta\|^2 + \gamma_{\Gamma^{-1}L}\frac{\zeta^2}2\|\theta\|^2 + \frac{\eta^2}2\|\varepsilon\|^2, \qquad (57)$$

which can be expressed as

$$\dot V \le -\alpha_1\|\hat\upsilon'\|^2 - \alpha_2\|\tilde\theta\|^2 + \rho, \qquad (58)$$

where

$$\alpha_1 = \mu_K - \frac1{2\eta^2} > 0, \qquad \alpha_2 = \mu_{\Gamma^{-1}L} - \frac{\gamma_{\Gamma^{-1}L}}{2\zeta^2} > 0, \qquad \rho = \gamma_{\Gamma^{-1}L}\frac{\zeta^2}2\|\theta\|^2 + \frac{\eta^2}2\|\varepsilon\|^2. \qquad (59)$$

Eq. (51) can be stated as

$$V \le \beta_1\|\hat\upsilon'\|^2 + \beta_2\|\tilde\theta\|^2, \qquad (60)$$

where $\beta_1 = \tfrac12\gamma_H$, $\beta_2 = \gamma_{\Gamma^{-1}}$, $\gamma_H = \sup_q[\sigma_{\max}(H)]$ and $\gamma_{\Gamma^{-1}} = \sigma_{\max}(\Gamma^{-1})$.

Then

$$\dot V \le -\delta V + \rho \qquad (61)$$

with

$$\delta = \min\left\{\frac{\alpha_1}{\beta_1}, \frac{\alpha_2}{\beta_2}\right\}.$$

Since $\rho$ is bounded, (61) implies that $\hat\upsilon' \in L_\infty^n$, $\tilde\theta \in L_\infty^m$ and $x = (\hat\upsilon', \tilde\theta)^T$ is ultimately bounded inside a ball $B$, which proves (a) and (b).

In addition, from (38), $\hat\upsilon = J\hat\upsilon'$ and, by recalling Assumption 3, $\hat\upsilon \in L_\infty^{2p}$. Besides, $\hat\upsilon$ can be expressed in terms of $\upsilon$ as

$$\hat\upsilon = \frac{d\tilde\xi}{dt} + \Lambda\tilde\xi + J_O(\hat v_O - v_O) = \upsilon + J_O\varepsilon_O. \qquad (62)$$

Since $J_O\varepsilon_O$ is bounded, it means that $\upsilon = (d\tilde\xi/dt) + \Lambda\tilde\xi$ is ultimately bounded as well. From the last equation, $\tilde\xi = O\upsilon$, where $O$ is a linear operator with finite gain. Therefore

$$\|\tilde\xi\| \le \|O\|\,\|\upsilon\|$$

and, since $\upsilon$ is ultimately bounded, $\tilde\xi$ is also ultimately bounded, which proves (c). □


Remark 2. If more features than DOF of the robot are regarded, a non-square Jacobian matrix is obtained. In this case a re-definition of $\upsilon$ as

$$\upsilon = \frac{d(J^T\tilde\xi)}{dt} + \Lambda(J^T\tilde\xi) \qquad (63)$$

should be used. By reasoning just as in Proposition 2, it is possible to reach the same conclusions on control system behaviour.

5. Simulations

Computer simulations have been carried out to show the stability and performance of the proposed adaptive controllers. The robot used for the simulations is a two-DOF manipulator, as shown in Fig. 2.

The meaning and numerical values of the symbols in Fig. 2 are listed in Table 1. The elements $H_{ij}(q)$ $(i,j = 1,2)$ of the inertia matrix $H$ are

$$H_{11}(q) = m_1 l_{c1}^2 + m_2(l_1^2 + l_{c2}^2 + 2l_1 l_{c2}\cos q_2) + I_1 + I_2, \qquad H_{12}(q) = H_{21}(q) = m_2(l_{c2}^2 + l_1 l_{c2}\cos q_2) + I_2, \qquad H_{22}(q) = m_2 l_{c2}^2 + I_2.$$

Fig. 2. Two DOF manipulator scheme.

Table 1
Parameters of the manipulator

Length of link 1 (m):            l1  = 0.45
Length of link 2 (m):            l2  = 0.55
Center of gravity of l1 (m):     lc1 = 0.091
Center of gravity of l2 (m):     lc2 = 0.105
Mass of l1 (kg):                 m1  = 23.9
Mass of l2 + camera (kg):        m2  = 4.44
Inertia of l1 (kg m2):           I1  = 1.27
Inertia of l2 + camera (kg m2):  I2  = 0.24
Acceleration of gravity (m/s2):  g   = 9.8


The elements $C_{ij}(q,\dot q)$ $(i,j = 1,2)$ of the centrifugal and Coriolis matrix $C$ are

$$C_{11}(q,\dot q) = -m_2 l_1 l_{c2}\sin(q_2)\,\dot q_2, \qquad C_{12}(q,\dot q) = -m_2 l_1 l_{c2}\sin(q_2)(\dot q_1 + \dot q_2),$$

$$C_{21}(q,\dot q) = m_2 l_1 l_{c2}\sin(q_2)\,\dot q_1, \qquad C_{22}(q,\dot q) = 0.$$

Table 2
Parameters of the camera

Focal length (m):        λ = 0.008
Scale factor (pixels/m): α = 72727

Fig. 3. Trajectory in the image plane.

Fig. 4. Trajectory in the robot workspace.


The entries of the gravitational torque vector $g$ are given by

$$g_1(q) = (m_1 l_{c1} + m_2 l_1)g\sin(q_1) + m_2 l_{c2}\,g\sin(q_1 + q_2), \qquad g_2(q) = m_2 l_{c2}\,g\sin(q_1 + q_2).$$
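The model above is small enough to check numerically. The sketch below (ours, not from the paper) codes $H$, $C$ and $g$ with the Table 1 values and verifies the skew-symmetry property (2) at an arbitrary state:

```python
import numpy as np

# Dynamics of the two-DOF arm of Fig. 2 with the numerical values of Table 1.
l1, lc1, lc2 = 0.45, 0.091, 0.105
m1, m2 = 23.9, 4.44
I1, I2 = 1.27, 0.24
g0 = 9.8

def H(q):
    c2 = np.cos(q[1])
    h11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    h12 = m2*(lc2**2 + l1*lc2*c2) + I2
    return np.array([[h11, h12], [h12, m2*lc2**2 + I2]])

def C(q, dq):
    s2 = np.sin(q[1])
    return np.array([[-m2*l1*lc2*s2*dq[1], -m2*l1*lc2*s2*(dq[0] + dq[1])],
                     [ m2*l1*lc2*s2*dq[0], 0.0]])

def g(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    return np.array([(m1*lc1 + m2*l1)*g0*s1 + m2*lc2*g0*s12,
                     m2*lc2*g0*s12])

# Numerical check of the skew-symmetry property (2): x^T (dH/dt - 2C) x = 0.
q, dq = np.array([0.3, 0.7]), np.array([0.5, -0.2])
eps = 1e-6
dH = (H(q + eps*dq) - H(q - eps*dq)) / (2*eps)   # dH/dt by central difference
x = np.array([1.0, -2.0])
assert abs(x @ (dH - 2*C(q, dq)) @ x) < 1e-6
```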

Numerical values for the camera model are listed in Table 2. All constants, design parameters and variables in the control system are expressed in the International System of Units (SI).

Linear parameterization of Eqs. (30) and (39) leads to the parameter vector:

$$\theta = [m_1 l_{c1}^2\ \ m_1 l_{c1}\ \ m_2 l_{c2}^2\ \ m_2 l_{c2}\ \ m_2\ \ I_1\ \ I_2]^T.$$

For controller design, it is assumed that the values of the parameters of link 1 ($m_1 l_{c1}^2$, $m_1 l_{c1}$, $I_1$) are known with uncertainties of about 10%, and those of link 2 ($m_2 l_{c2}^2$, $m_2 l_{c2}$, $m_2$, $I_2$) with uncertainties of about 20%.

Fig. 5. Evolution of control errors.


5.1. Case (a)—adaptive position control

Simulations are carried out using the following design parameters:

$$\Lambda = \mathrm{diag}\{5, 1.8\}, \qquad K = \mathrm{diag}\{50, 5\}, \qquad \Gamma = \mathrm{diag}\{0.7, \ldots, 0.7\}.$$

The robot initial conditions are

$$q_1(0) = 30^\circ, \qquad q_2(0) = 45^\circ, \qquad \dot q_1(0) = 0, \qquad \dot q_2(0) = 0$$

and the initial estimates of vector $\theta$ are

$$\widehat{m_1 l_{c1}^2}(0) = 0.264, \qquad \widehat{m_1 l_{c1}}(0) = 2.632, \qquad \widehat{m_2 l_{c2}^2}(0) = 0.0846,$$

$$\widehat{m_2 l_{c2}}(0) = 0.671, \qquad \hat m_2(0) = 5.328, \qquad \hat I_1(0) = 1.397, \qquad \hat I_2(0) = 0.288.$$

Fig. 6. Evolution of parameter estimates.


Fig. 7. Trajectory in the image plane.

The object feature point was placed at

$${}^W p_O = [0.5\ \ -0.2382\ \ -0.74]^T.$$

Simulations were carried out in two stages. In the first stage, we consider the adaptive controller with uncertainty in the robot dynamics parameters. The second stage presents the non-adaptive control with wrong estimates of the dynamic parameters, which were set at the same values as the initial estimates in the adaptive controller.

Simulation results are shown in Figs. 3–6. Fig. 3 shows the image feature trajectories on the image plane for the adaptive and non-adaptive controllers. Fig. 4 represents the trajectory of the manipulator's end-effector, again for the adaptive and the non-adaptive cases. Fig. 5 presents the evolution of the control errors. It is clearly seen from these figures that the adaptive controller achieves a better control performance compared to the non-adaptive one. For the adaptive case, the control errors tend to zero, while for the non-adaptive case, the controller is unable to eliminate the steady-state errors. By analyzing Fig. 6, which represents the evolution of the parameter estimates, it can be concluded that, for the involved signals, the proposed controller does not present parametric convergence, i.e. $\tilde\theta$ does not converge to zero as $t \to \infty$.

Fig. 8. Trajectory in the robot workspace.

Fig. 9. Coordinate Wx in the work plane.

5.2. Case (b)—tracking adaptive control

Simulation conditions are the same as for Case (a). It is considered that a point object moves within the manipulator's environment describing a circular trajectory of radius $r = 0.2$ m and angular speed $\omega = 1.57$ rad/s. The parameters of the speed-estimating filter, Eqs. (33) and (34), are selected as $b_0 = 10^4$, $b_1 = 200$. Simulations were carried out considering the adaptive and non-adaptive cases to obtain comparative performance results, which are shown in Figs. 7–12. Fig. 7 shows the trajectory of the image features during the tracking process. Fig. 8, on the other hand, shows the trajectory of the manipulator's end-effector in the robot frame. For the adaptive case, the initial estimates ($\theta_0$) of the parameters are taken equal to the fixed wrong parameters of the non-adaptive case. For a better display of the adaptive controller's performance, Figs. 9 and 10 present the coordinates ${}^W x$ and ${}^W y$ of the manipulator and the object trajectories. There, the good tracking performance for the example considered in the simulation is clearly seen. Control errors are explicitly shown in Fig. 11. Finally, the evolution of the estimates of the dynamic parameters is presented in Fig. 12. From the above figures, the improvement in the manipulator's performance when the adaptive controller is used, as compared to the fixed controller, can be noted. For the adaptive case, the control errors enter and remain within a small neighbourhood of the ideal zero control error.

Fig. 10. Coordinate Wy in the work plane.

Fig. 11. Evolution of control errors.

6. Discretization and measurement noise effects

In the previous section, a tracking adaptive servo-visual control algorithm in the continuous domain has been proposed, and its stability analysis has been carried out. The feasibility of implementing the proposed algorithm on a computer system motivates its discretization.


Fig. 12. Evolution of parameter estimates.

In this section, the discretization of the control law and the update law is outlined for their digital implementation. Besides, the performance of the proposed control algorithm under several sampling times and under measurement and discretization noises is evaluated through computer simulations. The proposed scheme has two feedback loops with different sampling times. The first one ($T_1$) is the fast dynamics loop, and is in charge of controlling the manipulator using the joint position and velocity measurements. The second loop ($T_2$), with slower dynamics, computes and estimates the velocity of the moving object based on the images from a video camera, setting the tracking references for the $T_1$ loop.

The discretization is obtained as follows:

$$\frac{dx}{dt} \cong \frac{x_k - x_{k-1}}{T}, \qquad (64)$$

where $T$ is the sampling period.


Table 3
Performance for different sampling times

No.  T1 (s)   T2 (s)   ∫‖ξ̃‖ dt   ‖ξ̃‖ final   ∫‖θ̃‖ dt
1    0.0025   0.0025   517.02     3.88         138.01
2    0.0025   0.0250   454.84     3.45         120.91
3    0.0025   0.0500   376.22     3.81         100.6
4    0.0025   0.1000   362.24     4.74         90.73
5    0.0040   0.0500   568.2      3.84         151.53
6    0.0150   0.0500   982.56     22.67        229.74

Fig. 13. Trajectory in the robot workspace. Simulations 1, 3, 4 and 6 of Table 3.


The discrete equations of both the control law and the parameter update-law for the adaptive controller are

$$\tau_{kT_1} = K\hat\upsilon'_{kT_1} + \Phi_{kT_1}\hat\theta_{kT_1} \qquad (65)$$

with

$$\Phi_{kT_1}(q_{kT_1},\hat\upsilon_{kT_1},{}^W\hat v_{O\,kT_2},{}^W\dot{\hat v}_{O\,kT_2})\,\hat\theta_{kT_1} = \hat H(q_{kT_1})\{-J^{-1}\dot J\hat\upsilon'_{kT_1} - J^{-1}\dot J\dot q_{kT_1} - J^{-1}\dot J_O\,{}^W\hat v_{O\,kT_2} - J^{-1}J_O\,{}^W\dot{\hat v}_{O\,kT_2} - J^{-1}\Lambda J\dot q_{kT_1} - J^{-1}\Lambda J_O\,{}^W\hat v_{O\,kT_2}\} + \hat C(q_{kT_1},\dot q_{kT_1})\{-J^{-1}J_O\,{}^W\hat v_{O\,kT_2} + J^{-1}\Lambda\tilde\xi_{kT_2}\} + \hat g(q_{kT_1}), \qquad (66)$$

$$\hat\theta_{kT_1} = (I - T_1 L)\hat\theta_{(k-1)T_1} + T_1\Gamma\Phi^T_{kT_1}\hat\upsilon'_{kT_1}, \qquad (67)$$

$$\hat\upsilon'_{kT_1} = J^{-1}\hat\upsilon_{kT_1} = -\dot q_{kT_1} - J^{-1}J_O\,{}^W\hat v_{O\,kT_2} + J^{-1}\Lambda\tilde\xi_{kT_2}. \qquad (68)$$

Fig. 14. Norms of the control errors. Simulations 1, 3, 4 and 6 of Table 3.
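A schematic sketch of the resulting two-rate loop follows. It is illustrative only: `robot` and `vision` are hypothetical interfaces standing in for the plant measurements, the regressor of Eq. (66) and the velocity estimator of Eqs. (69)–(70) given below, which the scheme assumes computed elsewhere:

```python
import numpy as np

# Schematic two-rate loop of Section 6: the joint loop runs every T1, the
# vision loop every T2 = N*T1.
T1, T2 = 0.0025, 0.05
N = round(T2 / T1)

def control_loop(robot, vision, theta_hat, Gamma, L, K, steps):
    m = len(theta_hat)
    v_hat = np.zeros(3)
    for k in range(steps):
        if k % N == 0:
            v_hat = vision.estimate_velocity()        # slow loop, Eq. (69)
        Phi, ups_p = robot.regressor(v_hat)           # Eqs. (66) and (68)
        tau = K @ ups_p + Phi @ theta_hat             # Eq. (65)
        theta_hat = (np.eye(m) - T1 * L) @ theta_hat \
                    + T1 * Gamma @ Phi.T @ ups_p      # Eq. (67)
        robot.apply(tau)                              # fast loop, period T1
    return theta_hat
```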

The filter equations for the estimates of the object velocity and acceleration are

$${}^W\hat v_{O\,kT_2} = \left(\frac{I}{T_2^2} + \frac{b_1}{T_2} + b_0\right)^{-1}\left[\frac{b_0}{T_2}\left({}^W p_{O\,kT_2} - {}^W p_{O\,(k-1)T_2}\right) + \left(\frac{2I}{T_2^2} + \frac{b_1}{T_2}\right){}^W\hat v_{O\,(k-1)T_2} - \frac{I}{T_2^2}\,{}^W\hat v_{O\,(k-2)T_2}\right], \qquad (69)$$

$${}^W\dot{\hat v}_{O\,kT_2} = \left(\frac{I}{T_2^2} + \frac{a_1}{T_2} + a_0\right)^{-1}\left[\frac{a_0}{T_2^2}\left({}^W p_{O\,kT_2} - 2\,{}^W p_{O\,(k-1)T_2} + {}^W p_{O\,(k-2)T_2}\right) + \left(\frac{2I}{T_2^2} + \frac{a_1}{T_2}\right){}^W\dot{\hat v}_{O\,(k-1)T_2} - \frac{I}{T_2^2}\,{}^W\dot{\hat v}_{O\,(k-2)T_2}\right]. \qquad (70)$$
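Equations (69) and (70) follow from replacing the derivatives in (33)–(34) by backward differences. A direct transcription is sketched below (illustrative, assuming 3-vector signals and the filter coefficients of Section 5.2):

```python
import numpy as np

def velocity_step(p_k, p_k1, v_k1, v_k2, T2, b0=1.0e4, b1=200.0):
    """One step of the discrete velocity estimator, Eq. (69)."""
    a = 1.0 / T2**2 + b1 / T2 + b0            # scalar left-hand coefficient
    rhs = (b0 / T2) * (p_k - p_k1) \
        + (2.0 / T2**2 + b1 / T2) * v_k1 \
        - (1.0 / T2**2) * v_k2
    return rhs / a

def acceleration_step(p_k, p_k1, p_k2, a_k1, a_k2, T2, a0=1.0e4, a1=200.0):
    """One step of the discrete acceleration estimator, Eq. (70)."""
    a = 1.0 / T2**2 + a1 / T2 + a0
    rhs = (a0 / T2**2) * (p_k - 2.0 * p_k1 + p_k2) \
        + (2.0 / T2**2 + a1 / T2) * a_k1 \
        - (1.0 / T2**2) * a_k2
    return rhs / a
```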

Several simulations have been carried out considering different sampling times and measurement noises to evaluate the performance of the developed discrete controller. The simulation conditions are the same as those of the continuous case (see Section 5.2).

Let us consider different sampling times for both the faster and the slower dynamic loops. The following gain matrices were selected:

$$\Lambda = \mathrm{diag}\{20, 20\}, \qquad K = \mathrm{diag}\{40, 40\}, \qquad \Gamma = \mathrm{diag}\{0.2, \ldots, 0.2\}, \qquad L = \mathrm{diag}\{0.08, \ldots, 0.08\}.$$

In the simulations, the case is considered where the point object moves in the manipulator environment describing a circular trajectory, with the same velocity and trajectory radius considered for the continuous case. The filter parameters for the velocity and acceleration estimation were $b_0 = 10^4$, $b_1 = 200$, $a_0 = 10^4$ and $a_1 = 200$.

Table 3 shows the different sampling times used for the various simulation conditions and the results obtained based on three error indexes. The evaluated indexes are the integral of the control error norm, the error norm once the stationary state is reached, and the integral of the norm of the manipulator dynamic parameter errors. In Table 3, $T_1$ represents the sampling time of the faster dynamics loop and $T_2$ is the vision loop sampling time. Figs. 13 and 14 show the trajectory of the robot and the norm of the control errors for some simulation conditions of Table 3.

A second set of simulations was performed to determine the influence of perturbations in the closed-loop system due to measurement and sensing errors. Besides, non-modelled robot dynamics was also assumed.

In this last experiment, the same simulation conditions as in the previous ones were considered, i.e. the initial and final manipulator positions, and the initial uncertainties of the parameters. A realistic sampling time was also considered for the evaluation, i.e. $T_1 = 0.0025$ s and $T_2 = 0.05$ s.

Table 4
Performance for different measurement noises

Order  Q (b)  ∇q1 (σ1²)      ∇q2 (σ2²)      q̇1, q̇2 (m)  q̇1, q̇2 (σ²)  ∫‖ξ̃‖ dt  ‖ξ̃‖ final  ∫‖θ̃‖ dt
1      12     3.14 × 10⁻¹²   7.66 × 10⁻¹²   0            0.05           422.05    9.39        119.22
2      6      3.14 × 10⁻¹²   7.66 × 10⁻¹²   0            0.05           428.88    8.94        121.8
3      4      3.14 × 10⁻¹²   7.66 × 10⁻¹²   0            0.05           867.13    8.17        243.09
4      12     0.00042        0.00042        0            0.05           1055.8    51.6        111.95
5      12     3.14 × 10⁻¹²   7.66 × 10⁻¹²   0            0.25           1906.0    21.24       494.23
6      12     3.14 × 10⁻¹²   7.66 × 10⁻¹²   0            0.5            3599.2    52.2        874.73
7      12     3.14 × 10⁻¹²   7.66 × 10⁻¹²   0.5          0.05           1023.9    8.78        331.15
8      12     3.14 × 10⁻¹²   7.66 × 10⁻¹²   1            0.05           2293.7    24.5        394.56
9      12     3.14 × 10⁻¹²   7.66 × 10⁻¹²   10           0.05           5694.5    125.64      171.24
10     6      0.00042        0.00042        1            0.5            1781.7    60.4        1083.2


Fig. 15. (a) Trajectory in the robot workspace; (b) error norm. Simulation 1 of Table 4.

The controller tuning was done using the same gain matrices as in the previous sections, because they guarantee an acceptable performance for the given conditions.

Various cases were considered regarding the control system performance against perturbations, as shown in Table 4. These cases are:

• Quantization noise. This arises when considering a certain number of bits in the image discretization process (see Table 4, rows 1–3). It can be concluded that the image discretization process has little influence on the system behaviour.

• Measurement noise introduced by the optical encoders. In this case, noise with zero mean and different variances for each joint actuator is considered (see rows 1 and 4, Table 4). Realistic values of the noise introduced by optical encoders do not affect the system performance, but when the noise is high enough, an important degradation in the control objective can be noted.

Fig. 16. (a) Trajectory in the robot workspace; (b) error norm. Simulation 4 of Table 4.

Fig. 17. (a) Trajectory in the robot workspace; (b) error norm. Simulation 9 of Table 4.

• Velocity measurement noise due to the tachometer. This case is obtained by assuming Gaussian noise with different mean and variance values (rows 1, 5–9, Table 4). The degradation in system behaviour is remarkable when the mean value of the noise increases. It can be seen that the system, under the above-mentioned conditions, tends to become unstable.

• Worst case. Finally, the last row of Table 4 (row 10) considers the worst case and, as expected, the system performance is poor.

Figs. 15–19 show the results for the simulations and conditions of Table 4. Figs. 15–18 show simulation results for the conditions of cases 1, 4, 9 and 10 of Table 4. In these figures, curve (a) represents the evolution of the manipulator's end-effector and curve (b) the control error norm. Finally, Fig. 19 depicts the norm of the parameter vector estimate: curve (a) shows this norm for cases 1 and 4, and curve (b) the same for cases 9 and 10.

Fig. 18. (a) Trajectory in the robot workspace; (b) error norm. Simulation 10 of Table 4.

Fig. 19. (a) and (b) Norm of the parameter vector of the manipulator. Simulations 1, 4, 9 and 10 of Table 4.

7. Conclusions

This paper has presented a positioning and a tracking adaptive controller for robots with camera-in-hand configuration using direct visual feedback. The full non-linear robot dynamics has been considered in the controller design. Control errors are proven to converge asymptotically to zero for the positioning controller and to be ultimately bounded for the tracking one. The work has focused on the control problem, without considering the real-time image processing problem, which is assumed to be already solved. Simulations illustrate the capability of the proposed controllers to attain suitable control performance under robot dynamics uncertainties.

References

[1] K. Hashimoto, Visual servoing: real-time control of robot manipulators based on visual sensory feedback, in: K. Hashimoto (Ed.), Visual Servoing, World Scientific, Singapore, 1994.
[2] P. Corke, Visual Control of Robots, Research Studies Press Ltd., 1996.
[3] S. Hutchinson, G.D. Hager, P. Corke, A tutorial on visual servo control, IEEE Transactions on Robotics and Automation 12 (1996) 651–670.
[4] P. Corke, M. Good, Dynamic effects in visual closed-loop systems, IEEE Transactions on Robotics and Automation 12 (1996) 671–683.
[5] Special issue on visual servoing, IEEE Robotics and Automation Magazine 5 (1996).
[6] P. Corke, S. Hutchinson, Real-time vision, tracking and control, in: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000.
[7] P. Allen, A. Timcenko, B. Yoshimi, P. Michelman, Automated tracking and grasping of a moving object with a robotic hand–eye system, IEEE Transactions on Robotics and Automation 9 (1993) 152–165.
[8] G.D. Hager, W.C. Chang, A.S. Morse, Robot hand–eye coordination based on stereo vision, IEEE Control Systems Magazine 15 (1995) 30–39.
[9] G.D. Hager, A modular system for robust positioning using feedback from stereo vision, IEEE Transactions on Robotics and Automation 13 (1997) 582–595.
[10] R. Kelly, Robust asymptotically stable visual servoing of planar robots, IEEE Transactions on Robotics and Automation 12 (1996) 759–766.
[11] L.E. Weiss, A.C. Sanderson, C.P. Neuman, Dynamic sensor-based control of robots with visual feedback, IEEE Journal of Robotics and Automation 3 (1987) 404–417.
[12] F. Chaumette, P. Rives, B. Espiau, Positioning of a robot with respect to an object, tracking it and estimating its velocity by visual servoing, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991, pp. 2248–2253.
[13] K. Hashimoto, T. Kimoto, T. Ebine, H. Kimura, Manipulator control with image-based visual servoing, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, June 1991, pp. 2267–2272.
[14] W. Jang, Z. Bien, Feature based visual servoing of an eye-in-hand robot with improved tracking performance, in: Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, April 1991, pp. 2254–2260.
[15] B. Espiau, F. Chaumette, P. Rives, A new approach to visual servoing in robotics, IEEE Transactions on Robotics and Automation 8 (1992) 313–326.
[16] H. Hashimoto, T. Kubota, M. Sato, F. Harashima, Visual control of a robotic manipulator based on neural networks, IEEE Transactions on Industrial Electronics 9 (1992) 490–496.
[17] F. Chaumette, A. Santos, Tracking a moving object by visual servoing, in: Proceedings of the IFAC World Congress, vol. 9, Sydney, 1993, pp. 409–414.
[18] N.P. Papanikolopoulos, P.K. Khosla, T. Kanade, Visual tracking of a moving target by a camera mounted on a robot: a combination of control and vision, IEEE Transactions on Robotics and Automation 9 (1993).
[19] N.P. Papanikolopoulos, P.K. Khosla, Adaptive robotic visual tracking: theory and experiments, IEEE Transactions on Automatic Control 38 (1993) 429–445.
[20] K. Hashimoto, H. Kimura, Dynamic visual servoing with non-linear model-based control, in: Proceedings of the IFAC World Congress, vol. 9, Sydney, Australia, June 1993, pp. 405–408.
[21] K. Hashimoto, T. Ebine, H. Kimura, Visual servoing with hand–eye manipulator: optimal control approach, IEEE Transactions on Robotics and Automation 12 (1996) 766–774.
[22] A. Astolfi, L. Hsu, M. Netto, R. Ortega, A solution to the adaptive visual servoing problem, in: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, May 21–26, 2001, pp. 743–748.
[23] E. Malis, Visual servoing invariant to changes in camera intrinsic parameters, in: Proceedings of the Eighth IEEE International Conference on Computer Vision, vol. 1, July 7–14, 2001, pp. 704–709.
[24] M. Asada, T. Tanaka, K. Hosoda, Adaptive binocular visual servoing for independently moving target tracking, in: Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, April 2000.
[25] E. Zergeroglu, D. Dawson, Y. Fang, A. Malatpure, Adaptive camera calibration control of planar robots: elimination of camera space velocity measurements, in: Proceedings of the IEEE International Conference on Control Applications, September 25–27, 2000, pp. 560–565.
[26] C. Cheah, K. Lee, S. Kawamura, S. Arimoto, Asymptotic stability of robot control with approximate Jacobian matrix and its application to visual servoing, in: Proceedings of the 39th IEEE Conference on Decision and Control, vol. 4, December 12–15, 2000, pp. 3939–3944.
[27] E. Zergeroglu, D. Dawson, M. de Queiroz, S. Nagarkatti, Robust visual-servo control of robot manipulators in the presence of uncertainty, in: Proceedings of the IEEE 38th Conference on Decision and Control, vol. 4, December 7–10, 1999, pp. 4137–4142.
[28] A. Maruyama, M. Fujita, Robust visual servo control for planar manipulators with the eye-in-hand configurations, in: Proceedings of the IEEE 36th Conference on Decision and Control, vol. 3, December 10–12, 1997, pp. 2551–2552.
[29] L. Hsu, P. Aquino, Adaptive visual tracking with uncertain manipulator dynamics and uncalibrated camera, in: Proceedings of the IEEE 38th Conference on Decision and Control, vol. 2, December 7–10, 1999, pp. 1248–1253.
[30] L. Hsu, R. Costa, P. Aquino, Stable adaptive visual servoing for moving targets, in: Proceedings of the 2000 American Control Conference, vol. 3, June 28–30, 2000, pp. 2008–2012.
[31] R. Carelli, O. Nasisi, B. Kuchen, Adaptive robot control with visual feedback, in: Proceedings of the American Control Conference, Baltimore, MD, June 1994.
[32] O. Nasisi, R. Carelli, B. Kuchen, Tracking adaptive control of robots with visual feedback, in: Proceedings of the 13th IFAC World Congress, San Francisco, USA, June 1996, pp. 265–270.
[33] J. Slotine, W. Li, Adaptive manipulator control: a case study, in: Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, April 1987.
[34] K. Narendra, A. Annaswamy, Stable Adaptive Systems, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[35] M. Spong, M. Vidyasagar, Robot Dynamics and Control, Wiley, New York, 1989.
[36] R. Ortega, M. Spong, Adaptive motion control of rigid robots: a tutorial, Automatica 25 (6) (1989) 877–888.
[37] L. Sciavicco, B. Siciliano, Modeling and Control of Robot Manipulators, McGraw-Hill, New York, 1996.
[38] J. Feddema, C. Lee, O.R. Mitchell, Weighted selection of image features for resolved rate visual feedback control, IEEE Transactions on Robotics and Automation 7 (1991) 31–47.
[39] J.J. Craig, Introduction to Robotics: Mechanics and Control, Addison-Wesley, Reading, MA, 1986.
[40] S. Sastry, M. Bodson, Adaptive Control: Stability, Convergence and Robustness, Prentice-Hall, New York, 1989.


Oscar Nasisi was born in San Luis, Argentina, in 1961. He received the Electronics Engineering degree from the National University of San Juan, Argentina, the M.S. degree in Electronics Engineering from the National Universities Foundation for International Cooperation, Eindhoven, The Netherlands, and the Ph.D. degree from the National University of San Juan in 1986, 1989, and 1998, respectively. Since 1986, he has been with the Instituto de Automática, National University of San Juan, where he currently is a Full Professor. His research areas of interest are artificial vision, robotics, and adaptive control.

Ricardo Carelli was born in San Juan, Argentina. He graduated in Engineering from the National University of San Juan, Argentina, and obtained a Ph.D. degree in Electrical Engineering from the National University of Mexico (UNAM). He is presently Full Professor at the National University of San Juan and Senior Researcher of the National Council for Scientific and Technical Research (CONICET, Argentina). He is Adjunct Director of the Instituto de Automática, National University of San Juan. His research interests are in robotics, manufacturing systems, adaptive control and artificial intelligence applied to automatic control. Prof. Carelli is a Senior Member of IEEE and a Member of AADECA-IFAC.

