
Direct neural control system: Nonlinear extension of adaptive control

M. Yuan, A.N. Poo and G.S. Hong

Indexing terms: Adaptive control, Neural networks, Nonlinear systems

Abstract: The design methodology of a conventional model-reference-adaptive-control system is extended to the design of a direct neural controller for a class of nonlinear systems with structural uncertainty. A structured feedforward neural network, a Sigmoid-linear network, is used as the controller, which can be interpreted as a nonlinear extension of conventional adaptive control. Without a specific pretraining stage, the weights of the neural network are adjusted online to minimise the error between the plant output and the desired output signal, according to a learning law derived using the gradient-descent method. Local stability can be achieved provided that proper conditions are satisfied for the system. Simulation studies are carried out for linear and nonlinear plants, respectively, and verify the applicability of the proposed control strategy.

1 Introduction

The control theory for linear time-invariant systems has been well developed, for instance, state feedback control using LQG techniques [1]. For systems with slowly time-varying properties, the adaptive control technique has proved to be a sensible solution [2]. However, for a nonlinear system, control-system design is typically handled on a case-by-case basis. Feedback linearisation is a popular choice for deterministic systems [3]. However, feedback linearisation implies a model-based control strategy, whose robustness is inherently sensitive to modelling accuracy [4, 5]. In recent years, the incorporation of neural networks into adaptive-control-system design has been claimed to be a new method for the control of systems with significant nonlinearity, especially for cases where the plant nonlinearity is unknown [6-9].

The neural-network-based control systems developed so far are generally classified as indirect or direct, motivated by the indirect and direct methods in adaptive control systems. Narendra [10] has summarised

Paper first received 31st October 1994 and in revised form 13th June 1995. The authors are with the Department of Mechanical & Production Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 0511

IEE Proceedings online no. 19952122

versatile methodologies for the design of indirect neural-network control systems. The neural network is first trained to attain the same dynamic behaviour as the plant to be controlled. A controller is then designed by using the neural network's output to cancel the nonlinear part of the plant. The stability issue for indirect neural-network control systems has been analysed theoretically by other workers [11-14] and has benefited from the well established approximation theory of feedforward neural networks [15, 16]. However, indirect neural-network control requires a complex system structure and a long training procedure. In contrast to indirect control, in the direct neural-network control system the neural network itself works as the controller. Robustness can be achieved if proper convergence conditions are satisfied for the learning process of the neural network. Yabuta [17] and Khalid [18] have successfully applied a kind of neural network, with an adjustable coefficient in its sigmoid activation function, to the direct control of a force-control servomechanism and a temperature-control system, respectively.

In this paper, we propose a direct neural controller motivated by conventional model-reference adaptive control (MRAC). In the proposed direct neural control system (DNCS), a structured feedforward neural network (the Sigmoid-linear network) is used to replace the conventional adaptive controller for a class of nonlinear plants, to track the desired trajectory y_d. The network parameters (i.e. the weights) are updated online according to a learning law which is derived directly using the gradient-descent method, inspired by the gradient method in MRAC.

Fig. 1 Model-reference adaptive system

2 Model-reference adaptive control (MRAC)

Model-reference adaptive control [2] is one of the main approaches in adaptive control. The basic principle is illustrated in Fig. 1. The desired performance is expressed in terms of a reference model, which gives the desired response to a command signal. The controller output is a linear combination of the reference signal r and all the states of the system. The controller-parameter adaptation is based on the error between the outputs of the system and the reference model. There are many approaches to the analysis and design of an MRAC system, e.g. the gradient method, Lyapunov and hyperstability design, passivity theory etc. [19]. In this Section only the gradient approach is reviewed, since it is the fundamental idea motivating the design of the direct neural control system described below.

As shown in Fig. 1, assume that we attempt to adjust the parameters (w_r, w_y, w_y1, ...) of the controller

u(t) = w_r r - w_y y - w_y1 ẏ + ...   (1)

so that the error e between the output of the plant and that of the reference model is driven to zero. First we introduce the cost function as the quadratic function of the error:

J(w_r, w_y, w_y1, ...) = (1/2) e^2 = (1/2)(y - y_m)^2   (2)

To make J small, it is reasonable to change the parameters in the direction of the negative gradient of J, i.e.

dw_r/dt  = -η ∂J/∂w_r  = -η e ∂e/∂w_r
dw_y/dt  = -η ∂J/∂w_y  = -η e ∂e/∂w_y      (3)
dw_y1/dt = -η ∂J/∂w_y1 = -η e ∂e/∂w_y1

where η is a constant determining the rate of decrease. The derivatives ∂e/∂w_r, ∂e/∂w_y, ∂e/∂w_y1, ... are the sensitivity derivatives of the system, and can be evaluated or approximated under the assumption that the parameters are constant, which holds as long as the parameters change much more slowly than the other variables in the system.
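The update of eqn. 3 is straightforward to state in code. Below is a minimal sketch (Python; the function name, the numbers and the time step are ours, not the paper's) of one Euler step of this gradient update, assuming the sensitivity derivatives are supplied by filters such as those derived in Section 4.1.

import numpy as np

def mrac_gradient_step(w, e, de_dw, eta, dt):
    """One Euler step of the gradient update of eqn. 3: dw/dt = -eta * e * de/dw.

    w     : current controller gains [w_r, w_y, w_y1]
    e     : tracking error y - y_m
    de_dw : sensitivity derivatives [de/dw_r, de/dw_y, de/dw_y1]
    """
    return w - eta * e * np.asarray(de_dw) * dt

# illustrative call with made-up values
w = mrac_gradient_step(np.zeros(3), e=0.3, de_dw=[0.8, -0.5, -0.1], eta=1.0, dt=0.01)
print(w)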

Originally, MRAC was developed for linear systems. For a nonlinear plant, MRAC usually works on the initial assumption of a linearised model about a local operating point, and the adaptive property of MRAC alters the controller parameters until they converge to the optimum values. However, such an approach will only work if the plant's dynamic changes are of a parametric rather than a structural nature. Since structural nonlinearity can be linearised by nonlinear feedback [20], and a feedforward neural network can perform a nonlinear operation with its inner weights updated online [21], this provided the motivation to attempt a direct neural controller to compensate for the structural uncertainty and nonlinearity of the plant.

3 Direct neural controller for nonlinear plant

3.1 Control formulation
We consider a class of nonlinear systems [10], described by the following nonlinear difference equation on a compact set S:

y(k+1) = f{y(k), y(k-1), ..., y(k-n+1)} + Σ_{i=0}^{m} g_i u(k-i)   (4)

where m ≤ n, y(k) ∈ Y ⊂ R, u(k) ∈ U ⊂ R. u(k) and y(k) represent, respectively, the input and output of the system at time k, and the control appears linearly. By choosing the state variables as

x_1(k) = y(k - n + 1)
x_2(k) = y(k - n + 2)
...                                              (5)
x_n(k) = y(k)

the state-space representation of the nonlinear system is

x_1(k+1) = x_2(k)
x_2(k+1) = x_3(k)
...
x_n(k+1) = f{X(k)} + Σ_{i=0}^{m} g_i u(k-i)

or, in matrix form,

X(k+1) = A X(k) + B [f{X(k)} + Σ_{i=0}^{m} g_i u(k-i)]
y(k) = C X(k)                                             (6)

with

A = | 0 1 0 ... 0 |
    | 0 0 1 ... 0 |
    | .       .   |
    | 0 0 0 ... 1 |
    | 0 0 0 ... 0 |

B = [0 0 ... 0 1]^T,   C = [0 0 ... 0 1]

where X = [x_1 x_2 ... x_n]^T ∈ R^n is the state vector. We assume that the plant described by eqn. 4 has an equilibrium point at the origin in the sense of f(0) = 0, and that g_i ≠ 0 for i = 0, 1, ..., m, to ensure the controllability of the system.
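As a sketch of eqns. 4-6 (Python; the function and variable names are ours), the companion-form matrices and one plant update could be written as follows, with the nonlinearity f and the coefficients g_i supplied by the user:

import numpy as np

def companion_matrices(n):
    """A, B and C of eqn. 6 for an n-th-order plant in the shift (companion) form."""
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)          # ones on the superdiagonal, last row zero
    B = np.zeros(n); B[-1] = 1.0        # B = [0 ... 0 1]^T
    C = np.zeros(n); C[-1] = 1.0        # C = [0 ... 0 1], so y(k) = x_n(k)
    return A, B, C

def plant_step(X, u_hist, f, g):
    """One step of eqn. 6: X(k+1) = A X(k) + B [ f(X(k)) + sum_i g_i u(k-i) ]."""
    A, B, _ = companion_matrices(len(X))
    return A @ X + B * (f(X) + sum(gi * ui for gi, ui in zip(g, u_hist)))

# illustrative second-order plant with a made-up nonlinearity and g_0 = 1
f = lambda X: 0.5 * np.sin(X[-1])
X = plant_step(np.array([0.0, 0.1]), u_hist=[0.2], f=f, g=[1.0])
print(X)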

Fig. 2 Direct neural control system


A multilayered feedforward neural network is used as the controller, as shown in Fig. 2. The input of the neural controller comprises the desired output signal y_d and all the states X. The neural network then generates the control signal u(k) for the plant. There are many variations of neural-network structures [22]. In the proposed DNCS, a two-layered feedforward network, named the Sigmoid-linear network, is considered (see Fig. 3). The neural network consists of only two layers, with N sigmoid neurons in the hidden layer and one linear neuron in the output layer.

Fig. 3 Sigmoid-linear network: N sigmoid neurons in the hidden layer and one linear neuron in the output layer

Hence, the control u(k) can be expressed by the control law. Control law:

u(k) = Σ_{i=1}^{N} w_i Φ(Σ_{j=1}^{n} w_ij^x x_j(k) + w_i^d y_d) - w_0   (7)

where Φ(x) = 1/{1 + exp(-x)} is the sigmoid function,

W = [w  w^d  w^x] = {w_ij}
w   = [w_1 w_2 ... w_N]^T
w^d = [w_1^d w_2^d ... w_N^d]^T                                              (8)
w^x = [w_1^x w_2^x ... w_N^x]^T,   w_i^x = [w_i1^x w_i2^x ... w_in^x]^T   (i = 1, 2, ..., N)

are the weights of the neural network, and w_0 is the bias term satisfying

w_0 = (1/2) Σ_{i=1}^{N} w_i   (9)

to ensure that the origin is an equilibrium point.

The gradient-descent method is used to derive the learning law of the neural controller. We define the cost function J to be a quadratic function of the system error e:

J(k+1) = (1/2) e(k+1)^2   (10)

where the system error e is the error between the real plant output and the desired output, i.e.

e(k+1) = y(k+1) - y_d   (11)

Directly using the gradient-descent procedure, the weight matrix W of the neural network is then adjusted in the negative-gradient direction of J, that is

ΔW(k+1) = -η ∂J(k+1)/∂W   (12)

where η is the learning rate. Substituting eqn. 10 into eqn. 12, the learning law can be obtained as follows. Learning law:

w_ij(k+1) = w_ij(k) + Δw_ij(k+1)
Δw_ij(k+1) = -η J_a e(k+1) ζ_ij          (13)

where ζ_ij = ∂u(k)/∂w_ij is the sensitivity function used in standard backpropagation,

ζ_ij = Φ(w_i^x X(k) + w_i^d y_d)        (j = 1;        i = 1, 2, ..., N)
ζ_ij = w_i Φ_i (1 - Φ_i) x_j(k)         (1 < j < n+2;  i = 1, 2, ..., N)   (14)
ζ_ij = w_i Φ_i (1 - Φ_i) y_d            (j = n+2;      i = 1, 2, ..., N)

with Φ_i = Φ(w_i^x X(k) + w_i^d y_d), and J_a is the Jacobian of the plant:

J_a = ∂y(k+1)/∂u(k) = g_0   (15)

Since g_0 is a nonzero constant, it is conveniently lumped together with the learning rate η.
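The controller of eqn. 7 and the learning law of eqns. 13 and 14 can be combined in a short sketch. The Python class below is our own illustration (the names, random-number handling and class structure are not from the paper); it absorbs J_a = g_0 into the learning rate, as noted above, and recomputes the bias w_0 from eqn. 9 at every step.

import numpy as np

def sigmoid(z):
    """Phi(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-z))

class SigmoidLinearController:
    """Sketch of the Sigmoid-linear network of Fig. 3 with online learning."""

    def __init__(self, n, N, eta, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.eta = eta                              # learning rate (g_0 absorbed)
        self.w  = rng.uniform(-0.1, 0.1, N)         # output weights w_i
        self.wd = rng.uniform(-0.1, 0.1, N)         # weights w_i^d on y_d
        self.wx = rng.uniform(-0.1, 0.1, (N, n))    # input weights w_i^x

    def control(self, X, yd):
        """Control law, eqn. 7, with the bias w_0 of eqn. 9."""
        self.phi = sigmoid(self.wx @ X + self.wd * yd)   # hidden-layer outputs Phi_i
        w0 = 0.5 * np.sum(self.w)                        # eqn. 9
        return self.w @ self.phi - w0

    def learn(self, X, yd, e_next):
        """Learning law, eqns. 13-14, driven by e(k+1) = y(k+1) - y_d."""
        s = self.w * self.phi * (1.0 - self.phi)         # common factor w_i Phi_i (1 - Phi_i)
        self.w  -= self.eta * e_next * self.phi          # zeta for the output weights
        self.wx -= self.eta * e_next * s[:, None] * X    # zeta for the state inputs x_j(k)
        self.wd -= self.eta * e_next * s * yd            # zeta for the y_d input

Note that at X = 0 and y_d = 0 the hidden outputs are all 1/2, so the bias of eqn. 9 makes u(k) = 0, which is exactly the equilibrium condition used in the stability analysis below. A possible usage loop on the example-2 plant is sketched in Section 4.2.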

3.2 Stability analysis
The overall closed-loop system is represented as

X(k+1) = A X(k) + B [f{X(k)} + Σ_{i=0}^{m} g_i u(k-i)]

with each delayed control generated by the control law of eqn. 7 using the weights held at the corresponding earlier updating step, i.e.

u(k-i) = Σ_{j=1}^{N} (-i)w_j Φ((-i)w_j^x X(k-i) + (-i)w_j^d y_d) - (-i)w_0   (16)

where the left superscripts (-1), ..., (-m) of the weights w_i, w_i^d, w_i^x and of the bias w_0 denote the corresponding values at previous updating steps. For simplicity in the stability analysis, we set y_d = 0; the choice of w_0 defined in eqn. 9 then ensures that X = 0 is an equilibrium point of the system.

Now choosing the Lyapunov candidate V{X(k)} on the compact set S as

V{X(k)} = (1/2) E^T E   (17)

where E = X - X_e = X, since the equilibrium state is X_e = 0, then

V{X(k)} = (1/2) X^T X = (1/2){y(k-n+1)^2 + y(k-n+2)^2 + ... + y(k)^2}   (18)

and

V{X(k+1)} = (1/2){y(k-n+2)^2 + y(k-n+3)^2 + ... + y(k+1)^2}   (19)

Using the chain rule, for small enough Δw_ij, i.e. |Δw_ij| ≤ d for a small d ≥ 0 (this can be achieved by choosing a small learning rate η, i.e. η ≤ γ in eqn. 13, on the compact set S), the change in the Lyapunov function V{X(k+1)} - V{X(k)} can be expressed as

V{X(k+1)} - V{X(k)} ≈ Σ_ij [∂V{X(k+1)}/∂w_ij] Δw_ij   (20)

Substituting the learning law of eqn. 13 into the above equation yields

V(k+1) - V(k) ≤ 0   ∀ X ∈ S   (21)

This implies that the state X will converge to the equilibrium point X = 0, i.e.

lim_{k→∞} |X(k+1)| → 0   (22)
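The sign in eqn. 21 can be made explicit by noting that only the y(k+1)^2 term of V{X(k+1)} depends on the current weights, and then substituting eqns. 13-15 into eqn. 20 (a sketch of the step, using e(k+1) = y(k+1) when y_d = 0):

\[
\frac{\partial V\{X(k+1)\}}{\partial w_{ij}}
   = y(k+1)\,\frac{\partial y(k+1)}{\partial u(k)}\,\frac{\partial u(k)}{\partial w_{ij}}
   = e(k+1)\,J_a\,\zeta_{ij},
\qquad
\Delta w_{ij} = -\eta\,J_a\,e(k+1)\,\zeta_{ij}
\]
\[
V\{X(k+1)\} - V\{X(k)\}
   \approx \sum_{i,j}\frac{\partial V\{X(k+1)\}}{\partial w_{ij}}\,\Delta w_{ij}
   = -\eta\,J_a^{2}\,e(k+1)^{2}\sum_{i,j}\zeta_{ij}^{2}\;\le\;0
\]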

The above derivations can be summarised as follows.

Proposition: Given a controllable nonlinear system described by eqn. 4 on the compact set S, under the assumption that X = 0 is the equilibrium point of the system, there exists a constant γ such that, if the learning rate η satisfies the condition η ≤ γ, then the state X converges to the equilibrium point under the direct neural control law of eqn. 7 with the bias condition of eqn. 9.

Remark: In this Section, we have investigated only the local stability of the system around the equilibrium point; the desired output value was therefore set to zero. In fact, the direct neural control system can implement nonzero set-point regulation and can track other forms of signal, e.g. sine-shaped signals. Therefore, in Section 4, we release the restriction y_d = 0 and investigate the applicability of the proposed controller in more general cases by simulation study.

4 Simulation studies

In this Section, two examples are chosen to verify the control strategy described above. In example 1, both the model-reference adaptive controller and the direct neural controller are used to control a linear plant, e.g. a DC motor, and the system responses are compared. Example 2 is a nonlinear system with structural uncertainty. The direct neural controller is investigated and briefly compared with an indirect neural-control strategy proposed by Narendra. Furthermore, the effects of parameters in the direct neural controller, such as the number of neurons in the hidden layer, the learning rate and the initial weights, and the robustness of the system, are discussed. All the simulations are carried out using the MATLAB simulation package.


4.1 Example 1 (linear plant)
A normalised model of a DC motor is considered, whose transfer function can be described as

G(s) = b / [s(s + a)]   (23)

where a and b are both assumed to be 1. Hence the relationship between its output position y and the control input u can be described by the differential equation

ÿ + a ẏ = b u   (24)

For the conventional MRAC described in Section 2 [2], the controller is

u = w_r r - w_y y - w_y1 ẏ

where w_r, w_y, w_y1 are the adaptable parameters. Then the closed-loop system is

ÿ + (a + b w_y1) ẏ + b w_y y = b w_r r

or

y = [b w_r / (p^2 + (a + b w_y1) p + b w_y)] r   (25)

where p is the differential operator. The sensitivity derivatives are obtained by taking partial derivatives with respect to the controller parameters:

∂e/∂w_r  = ∂y/∂w_r  =  [b / (p^2 + (a + b w_y1) p + b w_y)] r
∂e/∂w_y  = ∂y/∂w_y  = -[b / (p^2 + (a + b w_y1) p + b w_y)] y      (26)
∂e/∂w_y1 = ∂y/∂w_y1 = -[b p / (p^2 + (a + b w_y1) p + b w_y)] y

These formulae cannot be used when the plant parameters a and b are unknown. Approximations are therefore required in order to obtain realisable parameter-updating laws. If the reference model is chosen as

G_m(s) = ω_m^2 / (s^2 + 2ζω_m s + ω_m^2)   (27)

where ζ = 0.7 and ω_m = 2, the following approximations can be made for perfect model following:

p^2 + (a + b w_y1) p + b w_y ≈ p^2 + 2ζω_m p + ω_m^2   (28)

w_r = w_y = ω_m^2   (29)

Furthermore, the parameter b may be absorbed into the learning rate η, since it appears only in the product ηb. After these approximations, the following equations for updating the parameters are obtained:

dw_r/dt  = -η e [1 / (p^2 + 2ζω_m p + ω_m^2)] r
dw_y/dt  =  η e [1 / (p^2 + 2ζω_m p + ω_m^2)] y      (30)
dw_y1/dt =  η e [p / (p^2 + 2ζω_m p + ω_m^2)] y

The input command signal is a square wave with amplitude 1, and η = 1. The closed-loop response curve is shown by the broken line in Fig. 4.
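As an illustration of this example, the sketch below (Python; the Euler step size and all names are ours, and the update laws are those of eqn. 30 as reconstructed above) simulates the MRAC loop for the DC motor under the square-wave command:

import numpy as np

a, b = 1.0, 1.0                 # plant parameters of eqn. 24
zeta, wm = 0.7, 2.0             # reference-model parameters of eqn. 27
eta, dt, T = 1.0, 0.01, 120.0   # learning rate, Euler step and simulation horizon

def second_order(state, drive, p1, p0):
    """One Euler step of  x'' + p1*x' + p0*x = drive, with state = [x, x']."""
    x, xd = state
    return np.array([x + dt * xd, xd + dt * (drive - p1 * xd - p0 * x)])

y, ym = np.zeros(2), np.zeros(2)    # plant state [y, y'] and model state [y_m, y_m']
fr, fy = np.zeros(2), np.zeros(2)   # sensitivity-filter states for r and y (eqn. 30)
wr = wy = wy1 = 0.0                 # adjustable gains w_r, w_y, w_y1

for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 40.0 < 20.0 else -1.0               # square-wave command, amplitude 1
    u = wr * r - wy * y[0] - wy1 * y[1]                       # controller of eqn. 1
    y  = second_order(y,  b * u,     a,             0.0)      # plant: y'' + a y' = b u
    ym = second_order(ym, wm**2 * r, 2 * zeta * wm, wm**2)    # reference model, eqn. 27
    fr = second_order(fr, r,         2 * zeta * wm, wm**2)    # [1/(p^2 + 2*zeta*wm*p + wm^2)] r
    fy = second_order(fy, y[0],      2 * zeta * wm, wm**2)    # same filter applied to y
    e = y[0] - ym[0]
    wr  -= eta * e * fr[0] * dt        # updating laws of eqn. 30
    wy  += eta * e * fy[0] * dt
    wy1 += eta * e * fy[1] * dt

print(wr, wy, wy1)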


The broken line in Fig. 4 shows the response of the DNCS, where the number of neurons in the hidden layer and the learning rate are N = 10 and η = 0.05, respectively, and the initial weights are chosen randomly in (-0.1, +0.1). The solid line in Fig. 4 indicates the desired output y_d. Comparing the response curves of the model-reference adaptive controller and the direct neural controller in Fig. 4, it can be seen that the direct neural control system exhibits almost the same settling time as the conventional model-reference system, but that there is a much larger initial overshoot in the DNCS. This large initial transient is due to the stability requirement of a small learning rate (in this case, η = 0.05) for the DNCS, as stated in the proposition. Such a situation would, however, be avoided with pretrained network weights. For a linear plant with only parametric uncertainty, the model-reference adaptive controller works better than the direct neural controller. However, the significant potential of the direct neural controller is its applicability to nonlinear systems, particularly those with structural uncertainty. The following example clearly illustrates the applicability of the direct neural controller to a discrete-time nonlinear dynamic plant in which the nonlinear part is unknown.

Fig. 4 Responses of the model-reference adaptive system and the direct neural system for example 1 (DNCS: N = 10, η = 0.05; MRAC: η = 1)

4.2 Example 2 (nonlinear plant)
The plant to be considered is described by the difference equation

y(k+1) = f{y(k), y(k-1)} + u(k)   (31)

where the function

f{y(k), y(k-1)} = y(k) y(k-1) [y(k) + 2.5] / [1 + y(k)^2 + y(k-1)^2]   (32)

is assumed to be unknown but satisfies our assumption of f(0) = 0.

This system was previously considered by Narendra [10], who proposed an indirect neural control scheme for it. First, a feedforward neural network N is trained offline to identify the unknown part f of the plant. Next, the control input to the plant at any instant k is computed by using N in place of f as

u(k) = -N{y(k), y(k-1)} + 0.6 y(k) + 0.2 y(k-1) + r(k)   (33)

in order to match the reference model

y_m(k+1) = 0.6 y_m(k) + 0.2 y_m(k-1) + r(k)   (34)


If the identification is sufficiently accurate before the control action is initiated, the system output follows the output of the stable reference model very well, as shown in Fig. 24 of [10], where the reference input is the sine-shaped signal r(k) = sin(2πk/25).
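For later comparison, the indirect scheme of eqns. 33 and 34 can be sketched as below (Python; the names are ours, and n_hat is only a stand-in for the offline-trained network N of [10], here taken as a perfect copy of f purely for illustration):

import numpy as np

def f_plant(y0, y1):
    """Plant nonlinearity f of eqn. 32 (known to the simulator, unknown to the controller)."""
    return y0 * y1 * (y0 + 2.5) / (1.0 + y0**2 + y1**2)

def n_hat(y0, y1):
    """Stand-in for the offline-trained identifier N; identification assumed perfect here."""
    return f_plant(y0, y1)

y, ym = [0.0, 0.0], [0.0, 0.0]                                # [y(k), y(k-1)] and [y_m(k), y_m(k-1)]
for k in range(200):
    r = np.sin(2 * np.pi * k / 25.0)                          # reference input
    u = -n_hat(y[0], y[1]) + 0.6 * y[0] + 0.2 * y[1] + r      # indirect control law, eqn. 33
    y  = [f_plant(y[0], y[1]) + u, y[0]]                      # plant, eqn. 31
    ym = [0.6 * ym[0] + 0.2 * ym[1] + r, ym[0]]               # reference model, eqn. 34
print(y[0], ym[0])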

Fig. 5 Response of the direct neural control system for example 2 (N = 30, η = 0.05; curves: y and y_d)

Fig. 6 Tracking of a setpoint change by the direct neural controller for example 2 (N = 5, η = 0.07; curves: y, y_d and e = y_d - y)

For the proposed direct neural controller, the system output can also track the sine-shaped signal, as illustrated in Fig. 5, where the parameters are chosen as N = 30 and η = 0.05 with random initial weights in (-0.1, +0.1). In addition, the tracking of a setpoint change switching from 0 to 4 is shown in Fig. 6, where the number of neurons in the hidden layer is N = 5, the learning rate is η = 0.07 and the initial weights are chosen randomly in (-0.1, +0.1). In contrast to the indirect neural controller scheme proposed by Narendra, the direct neural controller has the advantages of a much simpler structure and algorithm and the potential of robustness, as well as the absence of long offline pretraining.
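Continuing the controller sketch of Section 3.1, a possible loop for the setpoint-change run of Fig. 6 might look as follows (our illustration; it assumes the SigmoidLinearController class and f_plant from the earlier sketches are in scope, and the step at which the setpoint switches is chosen arbitrarily):

# State vector of eqn. 5 for n = 2: X = [y(k-1), y(k)]
ctrl = SigmoidLinearController(n=2, N=5, eta=0.07)
X = np.array([0.0, 0.0])
for k in range(200):
    yd = 0.0 if k < 100 else 4.0                 # setpoint change from 0 to 4
    u = ctrl.control(X, yd)                      # control law, eqn. 7
    y_next = f_plant(X[1], X[0]) + u             # plant of eqns. 31-32 (g_0 = 1)
    ctrl.learn(X, yd, y_next - yd)               # learning law, eqn. 13, with e(k+1)
    X = np.array([X[1], y_next])
print(X[1])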

The following observations are also made.

4.2.1 Effect of the number of neurons in the hidden layer N: The response of the plant varies with the number of neurons in the hidden layer of the neural controller. Fig. 7 shows the response curves of the system for different numbers of neurons in the hidden layer, while the weight-initialisation range (-0.1, +0.1) and the learning rate η remain unchanged. It is observed that the system performance does not always improve with an increasing number of neurons N in the hidden layer. The overshoot decreases and the settling time t_s increases as N decreases. There is still no systematic way of choosing the number of neurons in the hidden layer. However, in this simulation (Fig. 7), a choice of N between 5 and 10 gives an optimal response.

Fig. 7 Effect of the number of hidden neurons N for example 2 (η = 0.01)

Fig. 8 Effect of the learning rate η for example 2 (N = 20)

Fig. 9 Effect of the initial weights [random in (-0.1, +0.1)] for example 2 (N = 30, η = 0.01)

4.2.2 Effect of learning rate η: The system behaviour is very sensitive to the learning rate η. As shown in Fig. 8, the system response is more sluggish for a smaller η, whereas too large a learning rate will cause the overall system to become unstable. The need for a small learning rate is due to the inclusion of the Jacobian factor J_a. The selection of η depends on the system to be controlled.

4.2.3 Effect of initial weights of the neural controller: The initial weights of the direct neural controller are chosen randomly in a small interval around the origin of the parameter space. As shown in Fig. 9, the plant-output responses vary only slightly for different sets of initial weights distributed randomly in (-0.1, +0.1).

4.2.4 Robustness: Fig. 10 shows the robustness of the system to a constant input disturbance. First, the plant output is regulated from an initial value of -1.2 to the equilibrium state. A constant disturbance of magnitude 2 is then applied to the input terminal of the plant at sampling time k1 and removed at k2, as shown in Fig. 10. It can be seen that the system response returns to its equilibrium state regardless of the disturbance being introduced.
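In a simulation loop such as the one above, this disturbance test can be reproduced by adding a constant term to the plant input between two chosen steps (a sketch only; the values of k1, k2 are illustrative, as the paper marks them only on Fig. 10):

k1, k2 = 15, 30                                   # disturbance applied at k1, removed at k2
X = np.array([-1.2, -1.2])                        # start from the initial output value -1.2
for k in range(50):
    d = 2.0 if k1 <= k < k2 else 0.0              # constant input disturbance of magnitude 2
    u = ctrl.control(X, 0.0)                      # regulate to the equilibrium (y_d = 0)
    y_next = f_plant(X[1], X[0]) + u + d          # plant input corrupted by the disturbance
    ctrl.learn(X, 0.0, y_next)                    # e(k+1) = y(k+1), since y_d = 0
    X = np.array([X[1], y_next])
print(X[1])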

Fig. 10 Robustness to a constant input disturbance (example 2)

5 Conclusions

In this paper, the methodology of conventional MRAC-system design has been extended to the design of a direct neural control system for a class of nonlinear discrete-time systems with structural uncertainty. The gradient-descent method, which was used in the conventional MRAC system, remains the principal tool for the derivation of the learning law for the neural-network parameters. Local stability can be achieved provided that the learning rate η is restricted to be small enough and the proper bias condition is satisfied. For a linear plant with only parametric uncertainty, the direct neural controller does not perform as well as the model-reference adaptive controller. However, the significant potential of the direct neural controller is its applicability to nonlinear systems, particularly those with structural uncertainty where conventional adaptive controllers fail. In contrast to the indirect neural controller scheme proposed by Narendra, the direct neural controller has the advantages of a much simpler structure and algorithm and the potential of robustness. Without a specific pretraining stage, the weights of the neural network are adjusted online from random values, and only a few neurons are needed in the hidden layer.

6 References

1 STEIN, G., and ATHANS, M.: 'The LQG/LTR procedure for multivariable feedback control design', IEEE Trans., 1987, AC-32, pp. 105-114

2 ASTROM, K.J., and WITTENMARK, B.: 'Adaptive control' (Addison-Wesley, 1989)

3 CRAIG, J.J.: 'Introduction to robotics: mechanics and control' (Addison-Wesley, 1989)

4 ZHU, H.A., TEO, C.L., HONG, G.S., and POO, A.N.: 'An enhanced scheme for the model-based control of robot manipulators', Int. J. Control, 1992, 56, (6), pp. 1243-1261

5 HONG, G.S., ZHU, H.A., TEO, C.L., and POO, A.N.: 'Robust control of robotic manipulators with model-based precompensation and SMC postcompensation', Proc. Instn. Mech. Eng. I: J. Syst. Control Eng., 1993, 207, pp. 97-103

6 CUI, X.Z., and SHIN, K.G.: 'Direct control and coordination using neural networks', IEEE Trans., 1993, SMC-23, (3), pp. 686-697

7 CHEN, F.C., and KHALIL, H.K.: 'Adaptive control of nonlinear systems using neural networks', Int. J. Control, 1992, 55, (6), pp. 1299-1317

8 POO, A.N., ANG, M.H., TEO, C.L., and LI, Q.: 'Performance of a neuro-model-based robot controller: adaptability and noise rejection', Intelligent Syst. Eng., Autumn 1992, pp. 50-62

9 PSALTIS, D., SIDERIS, A., and YAMAMURA, A.A.: 'A multilayered neural network controller', IEEE Control Syst. Mag., 1988, 8, (2), pp. 44-48

10 NARENDRA, K.S., and PARTHASARATHY, K.: 'Identification and control of dynamical systems using neural networks', IEEE Trans., 1990, NN-1, (1), pp. 4-27

11 LIU, C.C., and CHEN, F.C.: 'Adaptive control of nonlinear continuous-time systems using neural networks - general relative degree and MIMO cases', Int. J. Control, 1993, 58, (2), pp. 317-335

12 JIN, L., NIKIFORUK, P.N., and GUPTA, M.M.: 'Adaptive tracking of SISO nonlinear systems using multilayered neural networks', Proceedings of 1991 American Control Conference, 1991, pp. 56-60

13 LEVIN, A.U., and NARENDRA, K.S.: 'Control of nonlinear dynamical systems using neural networks: controllability and stabilization', IEEE Trans., 1993, NN-4, (2), pp. 192-206

14 JIN, L., NIKIFORUK, P.N., and GUPTA, M.M.: 'Direct adaptive output tracking control using multilayered neural networks', IEE Proc. D, 1993, 140, (6), pp. 393-398

15 FUNAHASHI, K.: 'On the approximate realization of continuous mappings by neural networks', Neural Networks, 1989, 2, pp. 183-192

16 HORNIK, K., STINCHCOMBE, M., and WHITE, H.: 'Multilayer feedforward networks are universal approximators', Neural Networks, 1989, 2, pp. 359-366

17 YABUTA, T., and YAMADA, T.: 'Neural network controller characteristics with regard to adaptive control', IEEE Trans., 1992, SMC-22, (1), pp. 170-177

18 KHALID, M., and OMATU, S.: 'A neural network controller for a temperature control system', IEEE Control Syst. Mag., June 1992, pp. 58-64

19 LANDAU, Y.D.: 'Adaptive control: the model reference approach' (Marcel Dekker, New York, 1979)

20 SASTRY, S.S., and ISIDORI, A.: 'Adaptive control of linearizable systems', IEEE Trans., 1989, AC-34, (11), pp. 1123-1131

21 RUMELHART, D.E., and McCLELLAND, J.L.: 'Parallel distributed processing' (MIT Press/Bradford Books, Cambridge, MA, 1986)

22 WIDROW, B., and LEHR, M.A.: '30 years of adaptive neural networks: perceptron, madaline and backpropagation', Proc. IEEE, 1990, 78, (9), pp. 1415-1442

