
Neural network load-flow

T.T. Nguyen, BE, PhD

Indexing terms: Neural networks, Load-flow analysis, Newton-Raphson algorithm

Abstract: The paper is devoted to the development of a neural network architecture which implements the Newton-Raphson algorithm for solving the set of nonlinear equations of power-system load-flow analysis. The principal context is that of online network analysis in energy management systems with particular reference to the optimal power-flow function. The author shows that the complete Newton-Raphson load-flow formulation maps into an array of two-layer neural networks. The development starts from a formulation for solving, as a minimisation problem, the linearised equation system to which the Newton-Raphson sequence leads at each iteration. For that purpose, an objective function in quadratic form is derived. A neural network structure is given which implements the steepest descent method for minimising this objective function. It is shown that the weighting coefficients of the neural networks are formed from element values in the Jacobian matrix of Newton-Raphson load-flow analysis. When the Jacobian matrix is nonsingular, the quadratic objective function derived has a unique and global minimum. A principal feature of the extensive parallel processing capability of the architecture developed is that the computing time of load-flow analysis is independent of the number of nodes in the power network for which analysis is carried out. For a sample section of a power network, and by software simulation, the architecture which the paper seeks to report gives solutions which are identical with those from a standard sequential processor load-flow program.

List of principal symbols

f(x) = nonlinear equation system
N = dimension of f(x)
x = vector of independent variables
J(x) = Jacobian matrix of f(x)
E^p = quadratic objective function to be minimised at each Newton-Raphson iteration
Δx^p = correction vector at each Newton-Raphson iteration
Δt = time step interval in numerical integration of the dynamical gradient system
n = time step counter in the digital neural network implementing the dynamical gradient system in discrete form
a, b = real and imaginary parts, respectively, of power network nodal voltage vector V
Re{I}, Im{I} = real and imaginary parts, respectively, of power network nodal current vector I
P_k, Q_k = specified nodal active and reactive powers, respectively, at node k
|V_k| = specified nodal voltage magnitude at generator node k
C_k, D_k = active- and reactive-power residual functions, respectively, at load node k
E_k = residual function related to voltage constraint at generator node k
G, B = real and imaginary parts, respectively, of power network nodal admittance matrix
G_kk, B_kk = real and imaginary parts, respectively, of the diagonal element of power network nodal admittance matrix for node k
G_kj, B_kj = real and imaginary parts, respectively, of the off-diagonal element of power network nodal admittance matrix for nodes k and j

Superscripts
t = vector or matrix transpose
p = Newton-Raphson iteration step counter

© IEE, 1995. Paper 1484C (B), first received 22nd February and in revised form 10th November 1993. The author is with the Energy Systems Centre, The University of Western Australia, Nedlands, Perth, Western Australia 6009.

IEE Proc.-Gener. Transm. Distrib., Vol. 142, No. 1, January 1995

1 Introduction

Of the numerous different application areas of neural networks that have been developed or proposed previously, those related to numerical analysis appear to be mainly in optimisation in which a scalar objective function is to be minimised [1-4]. The neural network architectures proposed for linear and nonlinear optimisation with or without constraints include Hopfield-type networks [1] and layered networks with feedbacks which implement the steepest descent algorithm [2-4].

The author expresses his appreciation to the University of Western Australia for permission to publish this paper. He wishes to express his thanks to Prof. W.D. Humpage for discussions relating to the developments of this paper. He particularly expresses thanks to Prof. Humpage for his many suggestions and contributions to the preparation of this paper.


The principal aim of the developments in numerical analysis methods which this paper seeks to report is that of deriving a neural network which implements the Newton-Raphson algorithm for solving multivariable nonlinear equation systems. When implemented in hardware, the architecture achieves ultrahigh-speed computation. Individual multiplications in all of the matrix-vector product operations which arise are implemented in parallel. A principal feature of the architecture is that the computing time overheads in solution are independent of the dimensions or structure of the equation system which is solved.

The main step in being able to exploit the parallel processing which neural networks offer is that of solving, as a minimisation problem, the linearised equation system to which the Newton-Raphson method leads at each iteration. Dynamic neural networks with feedback provide means of implementing this form of unconstrained function minimisation. The main context of the research reported is that of

load-flow analysis in power systems. Whilst the computing overheads in analysis are nowadays unlikely to be at issue, there is scope for further advances in online forms of load-flow in energy management systems, especially in the optimal-power-flow (OPF) function which includes security constraints [5]. The comprehensiveness of the OPF formulations that can be implemented remains that which it is feasible to run in the time periods available for it.

Equally, the main findings of the paper in respect of neural network architectures synthesised to solve nonlinear equation systems are entirely general and are applicable to any system of nonlinear nondifferential equations that might be encountered in any field. No previously published work of which the author is aware has been devoted to this particular application of neural networks. The potential scope of its application is very wide.

2 Neural network architecture for solving nonlinear equation systems

2.1 Newton-Raphson algorithm
A general system of nonlinear algebraic equations of dimension N is denoted here by

f(x) = 0   (1)

In eqn. 1, f(x) is the N-dimensional vector of nonlinear functions f_1(x), f_2(x), ..., f_N(x), and x is the vector of independent variables x_1, x_2, ..., x_N.

In the Newton-Raphson method, the solution of eqn. 1 is achieved iteratively using

x^p = x^(p-1) - [J(x^(p-1))]^(-1) f(x^(p-1))   (4)

or

J(x^(p-1)) Δx^p = -f(x^(p-1))   (5)

where

Δx^p = x^p - x^(p-1)   (6)

and p is the iteration count. In eqn. 5, J(x^(p-1)) is the Jacobian matrix of the first-order partial derivatives of the vector function f(x) with respect to x, evaluated at x^(p-1).


Convergence is achieved when

|f_i(x^p)| ≤ ε   i = 1, 2, ..., N   (7)

where ε is a preset convergence tolerance.
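For reference, the sketch below (an illustrative Python example, not from the paper; the two-equation test system and all variable names are assumptions) applies eqns. 4-7 in their conventional sequential form: the correction vector is obtained by solving the linearised system with a dense solver, and iteration stops when every residual satisfies the eqn. 7 tolerance. This is the step that the neural architecture developed in the following sections replaces:

import numpy as np

def newton_raphson(f, jacobian, x0, eps=1e-8, max_iter=20):
    """Solve f(x) = 0 by the Newton-Raphson sequence of eqns. 4-7."""
    x = np.asarray(x0, dtype=float)
    for p in range(1, max_iter + 1):
        fx = f(x)
        if np.all(np.abs(fx) <= eps):          # convergence test, eqn. 7
            return x, p - 1
        J = jacobian(x)                         # Jacobian at x^(p-1)
        dx = np.linalg.solve(J, -fx)            # correction vector, eqn. 5
        x = x + dx                              # update, eqns. 4 and 6
    raise RuntimeError("no convergence")

# illustrative two-equation system (assumed, not from the paper)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
x, iters = newton_raphson(f, jac, [1.0, 0.5])
print(x, iters)    # both components approach sqrt(2)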

2.2 Neural network solution
The key step in the Newton-Raphson solution sequence is that of solving eqn. 5 to give the correction vector Δx^p at each iteration. The main computational task which this involves is that of factorising the Jacobian matrix J(x^(p-1)) in solving for Δx^p.

Neural networks supporting only a limited range of basic processing operations such as multiplication and addition do not lend themselves easily to implementing the algorithms for Jacobian matrix factorisation widely developed for sequential processor systems. Synthesising neural network architectures for this area of numerical analysis requires first an interpretation of the Newton-Raphson procedure in a form which uses only those processing operations for which the massively parallel structures of neural networks excel.

Starting from this premise, the solution of eqn. 5 is found here in terms of the minimisation of an objective function formed from it. Eqn. 5 is a linearised form of the original nonlinear equation system for solution. Deriving from it an objective function which is quadratic in form allows the steepest descent algorithm to be used to find a minimum of the objective function, providing that the Jacobian matrix is nonsingular. Dynamical neural networks with feedbacks provide a very direct means of achieving this objective function minimisation.

In the vector-matrix product operations that are required in the numerical minimisation procedure, all of the individual multiplications involved are carried out in parallel. It is this parallel mode of computing in the key numerical part of the complete load-flow procedure that leads to ultrahigh-speed computation.

The main steps of the research reported here are therefore those of
(a) forming a quadratic objective function from eqn. 5
(b) using a dynamical neural network with feedback to find the minimum value of this objective function at each Newton-Raphson iteration
(c) deriving two-layer neural network structures to implement all of the different matrix-vector products that arise in neural network load-flow analysis.

2.3 Dynamical gradient system
From eqn. 5

J(x^(p-1)) Δx^p + f(x^(p-1)) = 0   (8)

Using E^p for the quadratic function to be minimised at Newton iteration step p,

E^p = [J(x^(p-1)) Δx^p + f(x^(p-1))]^t [J(x^(p-1)) Δx^p + f(x^(p-1))]   (9)

The minimum value of the function E^p in eqn. 9 is zero. At this minimum point Δx^p is the solution of eqn. 5 and is therefore equal to the correction vector at iteration step p.

As long as the Jacobian matrix is nonsingular, the quadratic form in eqn. 9 must have a global and unique minimum point.
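The property can be checked numerically. The short sketch below (illustrative Python; the 2x2 Jacobian and residual vector are assumed values, not from the paper) evaluates the eqn. 9 objective at the exact solution of eqn. 5, where it is zero, and at a perturbed point, where it is strictly positive for a nonsingular Jacobian:

import numpy as np

def E_p(J, f, dx):
    """Quadratic objective of eqn. 9: [J dx + f]^t [J dx + f]."""
    r = J @ dx + f
    return float(r @ r)

J = np.array([[4.0, 1.0], [2.0, 3.0]])     # assumed nonsingular Jacobian J(x^(p-1))
f = np.array([1.0, -2.0])                  # assumed residual vector f(x^(p-1))

dx_exact = np.linalg.solve(J, -f)          # solution of eqn. 5
print(E_p(J, f, dx_exact))                 # zero (to rounding) at the unique global minimum
print(E_p(J, f, dx_exact + 0.1))           # strictly positive away from the minimum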


Quadratic function E^p expands to

E^p = [Δx^p]^t [J(x^(p-1))]^t [J(x^(p-1))] Δx^p + 2[Δx^p]^t [J(x^(p-1))]^t f(x^(p-1)) + [f(x^(p-1))]^t [f(x^(p-1))]   (10)

On differentiating eqn. 10 with respect to Δx^p, the gradient of E^p with respect to Δx^p is given by

∂E^p/∂Δx^p = 2[J(x^(p-1))]^t [J(x^(p-1))] Δx^p + 2[J(x^(p-1))]^t f(x^(p-1))   (11)

Conventional iterative algorithms for minimising a scalar objective function E^p have their basis in solving for the necessary condition in which the gradient vector ∂E^p/∂Δx^p is equal to zero at the minimum point. In the steepest-descent algorithm

Δx^p(n) = Δx^p(n-1) - η [∂E^p/∂Δx^p]_{Δx^p(n-1)}   (12)

In eqn. 12, n identifies the iteration step of the steepest-descent minimisation procedure. Scalar coefficient η is often referred to as the 'steplength' of the method. On rearranging eqn. 12,

Δx^p(n) - Δx^p(n-1) = -η [∂E^p/∂Δx^p]_{Δx^p(n-1)}   (13)

Starting from an initial condition Δx^p(0), eqn. 13 provides a means by which the value of Δx^p(n) is updated progressively at each iteration until, at convergence, the gradient vector ∂E^p/∂Δx^p is zero.

Eqn. 13 has the form of a difference equation in which n can be interpreted as a discrete-time step counter. On that basis, Δx^p(n) - Δx^p(n-1) provides a finite difference approximation to the first-order derivative of Δx^p with respect to time. Interpreting the derivative in this way in the continuous time domain leads to the following first-order differential equation of the steepest-descent algorithm:

d(Δx^p)/dt = k ∂E^p/∂Δx^p   (15)

In eqn. 15, k is a negative scalar constant. An outline is shown in Fig. 1 of a dynamic neural network based on feedbacks which solves the differential equation system of eqn. 15.

Fig. 1 Continuous time form of steepest-descent minimisation

The evaluation of the gradient vector ∂E^p/∂Δx^p of eqn. 11 is achieved by a neural network.

The output state of the neural network settles to an equilibrium point which corresponds to the unique minimum of the quadratic function E^p. The output vector of the network is then the correction vector Δx^p. We now derive a neural network structure which implements the dynamic gradient system described by the vector differential eqn. 15.
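A minimal sketch of the dynamical gradient system follows (illustrative Python, assuming the same small Jacobian and residual as in the previous sketch and an assumed gain value for kΔt): eqn. 15 is integrated with a simple forward-Euler step, which is the discrete update the paper later writes as eqn. 18, and the state Δx^p settles to the solution of eqn. 5:

import numpy as np

def gradient_E(J, f, dx):
    """Gradient of eqn. 11: 2 J^t J dx + 2 J^t f."""
    return 2.0 * (J.T @ (J @ dx)) + 2.0 * (J.T @ f)

def dynamical_gradient_solve(J, f, k_dt=-0.01, eps_c=1e-10, max_steps=10000):
    """Forward-Euler integration of eqn. 15 (equivalently the eqn. 18 update)."""
    dx = np.zeros(len(f))                      # initial condition dx(0) = 0
    for n in range(1, max_steps + 1):
        step = k_dt * gradient_E(J, f, dx)     # k dE^p/d(dx), scaled by the time step
        dx = dx + step
        if np.all(np.abs(step) <= eps_c):      # loop settles to the minimum of E^p
            return dx, n
    raise RuntimeError("gradient system did not settle")

J = np.array([[4.0, 1.0], [2.0, 3.0]])         # assumed Jacobian J(x^(p-1))
f = np.array([1.0, -2.0])                      # assumed residual f(x^(p-1))
dx, steps = dynamical_gradient_solve(J, f)
print(dx, np.linalg.solve(J, -f), steps)       # the two corrections agree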


2.4 Matrix-vector products in neural networks
Each of the several matrix-vector products involved in forming the gradient vector ∂E^p/∂Δx^p is implemented in two-layer feed-forward neural networks in which the processing function for each node is a linear one.

The network in Fig. 2 forms y_1 and y_2 in the output layer from inputs x_1 and x_2, where

[y_1]   [J_11  J_12] [x_1]
[y_2] = [J_21  J_22] [x_2]

The structure of Fig. 2 extends to products involving any number of variables and a coefficient matrix of any given structure.

Fig. 2 Two-layer neural network

The weights on the horizontal crossconnections between input layer nodes and corresponding output layer nodes are given by the diagonal elements of the matrix by which the input vector is multiplied. The weights of the crossconnections between input and output layer nodes correspond in value to the off-diagonal elements of the matrix. The pattern of crossconnections in the network derives directly from the nonzero off-diagonal element pattern in the matrix.
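The sketch below (illustrative Python; the connection-list representation and the 3x3 example matrix are assumptions, not the paper's hardware) mimics the Fig. 2 arrangement: each output-layer node sums its horizontal connection, weighted by the diagonal element, with crossconnections that exist only where the matrix has nonzero off-diagonal elements, and every multiplication is independent of the others:

import numpy as np

def build_connections(J):
    """Connection list of a two-layer linear network whose weights are the matrix elements."""
    n = J.shape[0]
    conns = []                                   # (input node, output node, weight)
    for i in range(n):
        for j in range(n):
            if i == j or J[i, j] != 0.0:         # crossconnections only for nonzero off-diagonals
                conns.append((j, i, J[i, j]))
    return conns

def forward(conns, x, n_out):
    """Output-layer values; each product could be evaluated in parallel."""
    y = np.zeros(n_out)
    for j, i, w in conns:
        y[i] += w * x[j]
    return y

J = np.array([[2.0, 0.0, -1.0],
              [0.0, 3.0,  0.0],
              [0.5, 0.0,  4.0]])                 # sparse example matrix (assumed)
x = np.array([1.0, 2.0, 3.0])
print(forward(build_connections(J), x, 3))       # equals J @ x
print(J @ x)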

The combination of networks required to form the gradient vector ∂E^p/∂Δx^p is best derived by breaking down the task into three distinct parts. Working from the extreme right-hand side of eqn. 11, the first matrix-vector product term to be evaluated is [J(x^(p-1))]^t f(x^(p-1)). Inputs to nodes in the input layer of a two-layer neural network of the form in Fig. 3 are elements of vector f(x^(p-1)).

Fig. 3 Neural network for finding the Newton-Raphson correction vector Δx^p

The weights of the horizontal connections between input and output layer nodes are given by the diagonal element values of [J(x^(p-1))]^t. The weights of the crossconnections between input and output nodes are given by the off-diagonal element values of this matrix. The pronounced sparsity of the Jacobian matrix in Newton-Raphson load-flow analysis is reflected in the sparsity pattern of the crossconnections between the input and output layers.

We now turn to the first term in the expression for the gradient vector in eqn. 11, that of [J(x^(p-1))]^t [J(x^(p-1))] Δx^p. This product is evaluated using two two-layer feed-forward neural networks in combination. Starting from the right-hand side of the product sequence,

vector [J(x^(p-1))] Δx^p is formed first. The input vector to the neural network which implements this product is Δx^p. The weighting coefficients for the interconnections between the input and output layer nodes of the network are given by the diagonal and off-diagonal element values of Jacobian matrix J(x^(p-1)). It is convenient to identify this neural network by N1.

The output from network N1 forms the input to a further two-layer feed-forward network which we can now refer to as network N2. Weighting coefficients for interconnections in N2 are given by the diagonal and off-diagonal element values of matrix [J(x^(p-1))]^t. Networks N1 and N2 in cascade therefore implement the sequence [J(x^(p-1))]^t [J(x^(p-1))] Δx^p.

In Fig. 3, the network which implements the product [J(x^(p-1))]^t f(x^(p-1)) is denoted by N3. Summating the outputs of N2 and N3 and scaling by two gives the gradient vector of eqn. 11.

Finally, integrating the gradient vector multiplied by k gives the correction vector Δx^p. The closed loop in Fig. 3 formed by feeding Δx^p back into the input layer of neural network N1 settles to a value for which the quadratic function E^p is zero.

Each of the neural networks N1, N2 and N3 in Fig. 3 has N input nodes and N output nodes, where N is the number of unknowns in the equation system to be solved. The general structure of each of these neural networks is given in Fig. 4. Fig. 5 gives the digital or discrete-time form of the array of neural networks of Fig. 3. D is a shift operator which advances the solution from one Newton iteration step to the next.

Fig. 4 General structure of two-layer neural network

Fig. 5 Numerical form of neural network architecture for Newton-Raphson algorithm

Using Δx^p for the converged value of the correction vector as found from the steepest-descent minimisation, the solution vector at Newton iteration p is found from

x^p = x^(p-1) + Δx^p   (17)

The time delay operator in the steepest-descent minimisation is denoted by z^(-1) and the time step identifier by n. As in eqn. 13, the correction vector at the nth steepest-descent step is formed in Fig. 5 from

Δx^p(n) = Δx^p(n-1) + kΔt ∂E^p/∂Δx^p   (18)

Time step n is incremented in successive passes of the principal loop in Fig. 5 until a convergence criterion is fulfilled. Convergence of the loop in the discrete form of Fig. 5 corresponds to the dynamic feedback loop in the continuous time system of Fig. 3 settling to a steady solution.

Training as in supervised learning does not arise. Neural network weighting coefficients derived from Jacobian matrix elements are updated at each Newton-Raphson iteration.
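Because the weights are simply the current Jacobian entries, 'training' reduces to re-reading the matrix at every Newton-Raphson step. A minimal sketch follows (illustrative Python, reusing the connection-list idea from the Fig. 2 example; the function name and the Jacobian values are assumptions):

import numpy as np

def refresh_weights(J):
    """Rebuild the N1 connection weights from the latest Jacobian; no training is involved."""
    n = J.shape[0]
    return {(i, j): J[i, j] for i in range(n) for j in range(n) if i == j or J[i, j] != 0.0}

J_prev = np.array([[4.0, 1.0], [2.0, 3.0]])
J_next = np.array([[4.2, 0.9], [2.1, 3.1]])   # Jacobian re-evaluated at the new iterate (assumed values)
print(refresh_weights(J_prev))
print(refresh_weights(J_next))                # weights simply follow the new matrix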

3 Neural network load-flow analysis

3.1 Separation of tasks
Mapping all of the steps involved in the Newton-Raphson load-flow formulation into an ensemble of neural networks involves the following different tasks:
(a) nodal current evaluations
(b) residual function evaluations
(c) Jacobian matrix element evaluations
(d) forming updated values for voltage variables at each iteration.
Newton-Raphson load-flow analysis in the rectangular co-ordinate form is adopted in the present work.

3.2 Nodal current evaluations
If Y is the nodal admittance matrix of the power network for which a load-flow solution is to be found, the relationship between nodal current vector I and nodal voltage vector V is given in

I = YV   (19)

The network nodal admittance matrix separates into its real and imaginary parts so that

Y = G + jB   (20)


Correspondingly, for the vector of nodal voltages

V = a + jb   (21)

With these definitions, the real and imaginary parts of the nodal current vector I are given in

Re{I} = Ga - Bb   (22)

Im{I} = Ba + Gb   (23)

Neural networks for evaluating Re{I} and Im{I} at iteration step p-1 are shown in Fig. 6.

Fig. 6 Neural networks for nodal current evaluations

The structure of neural networks NR1, NR2, NR3 and NR4 is given in the general two-layer form in Fig. 4. The weighting coefficients for interconnections in these neural networks are determined by the nodal conductance matrix G and nodal susceptance matrix B of the power network.

For a sparse power network, where there are few branches connected to each power network node, the structure of neural networks NR1, NR2, NR3 and NR4 is also a sparse one. There are few connections to each neural network node.
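A minimal numerical counterpart of the NR1-NR4 arrangement is sketched below (illustrative Python using scipy sparse matrices; the 3-node admittance and voltage values are assumptions): eqns. 22 and 23 are evaluated as four sparse matrix-vector products whose nonzero pattern is exactly the branch pattern of the network:

import numpy as np
from scipy import sparse

# assumed 3-node example: real (G) and imaginary (B) parts of the nodal admittance matrix
G = sparse.csr_matrix(np.array([[ 5.0, -2.0, -3.0],
                                [-2.0,  4.0, -2.0],
                                [-3.0, -2.0,  5.0]]))
B = sparse.csr_matrix(np.array([[-15.0,   6.0,   9.0],
                                [  6.0, -12.0,   6.0],
                                [  9.0,   6.0, -15.0]]))

a = np.array([1.02, 0.99, 1.00])     # real parts of nodal voltages (assumed)
b = np.array([0.00, -0.02, 0.01])    # imaginary parts of nodal voltages (assumed)

re_I = G @ a - B @ b                 # eqn. 22, the NR1 and NR2 networks
im_I = B @ a + G @ b                 # eqn. 23, the NR3 and NR4 networks
print(re_I, im_I)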


3.3 Residual function evaluations
For a load node k with specified active and reactive powers P_k and Q_k, the nodal active- and reactive-power residual functions are given by

C_k^(p-1) = a_k^(p-1) Re{I_k^(p-1)} + b_k^(p-1) Im{I_k^(p-1)} - P_k   (24)

D_k^(p-1) = b_k^(p-1) Re{I_k^(p-1)} - a_k^(p-1) Im{I_k^(p-1)} - Q_k   (25)

In eqns. 24 and 25, Re{I_k^(p-1)} and Im{I_k^(p-1)} are the real and imaginary parts, respectively, of the nodal current at this node at iteration step p-1.

For each load node k, the evaluations of C_k^(p-1) and D_k^(p-1) in eqns. 24 and 25 are achieved by a two-layer feed-forward neural network in Fig. 7a.

Fig. 7 Neural networks for residual function evaluations

For a generator node k with specified active-power generation P_k and terminal voltage magnitude |V_k|, the active-power residual function is given in eqn. 24 as that for a load node. The second residual function, related to the voltage constraint, is given in eqn. 26. Residual functions C_k^(p-1) and E_k^(p-1) are evaluated by the neural network of Fig. 7b.

Neural networks in Fig. 7 for evaluating residual functions are feed-forward two-layer neural networks of small size. For each power network node except the slack node, the input layer has four nodes and the output layer has two nodes. Overall, the number of neural networks with the structure given in Fig. 7 is equal to the number of power network nodes excluding the slack node.
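A small sketch of the eqn. 24 and eqn. 25 evaluations follows (illustrative Python; the node data are assumed values, and in the full scheme the current components would come from the nodal-current stage above). Each load node needs only its own voltage components, its own current components and its specified powers, which is why the Fig. 7a networks stay small:

import numpy as np

def load_node_residuals(a_k, b_k, re_I_k, im_I_k, P_k, Q_k):
    """Active- and reactive-power residuals of eqns. 24 and 25 for one load node."""
    C_k = a_k * re_I_k + b_k * im_I_k - P_k     # eqn. 24
    D_k = b_k * re_I_k - a_k * im_I_k - Q_k     # eqn. 25
    return C_k, D_k

# assumed values at one load node at iteration step p-1
print(load_node_residuals(a_k=0.99, b_k=-0.02,
                          re_I_k=-1.01, im_I_k=0.35,
                          P_k=-1.0, Q_k=-0.35))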


3.4 Jacobian matrix element evaluations
Differentiating the residual functions in eqns. 24 and 25 with respect to a_k^(p-1) and b_k^(p-1) gives the Jacobian diagonal submatrix for a load node k as in

∂C_k/∂a_k = Re{I_k^(p-1)} + G_kk a_k^(p-1) + B_kk b_k^(p-1)
∂C_k/∂b_k = Im{I_k^(p-1)} - B_kk a_k^(p-1) + G_kk b_k^(p-1)
∂D_k/∂a_k = -Im{I_k^(p-1)} + G_kk b_k^(p-1) - B_kk a_k^(p-1)
∂D_k/∂b_k = Re{I_k^(p-1)} - B_kk b_k^(p-1) - G_kk a_k^(p-1)   (27)

Elements of the Jacobian submatrix in eqn. 27 are calculated using the neural network of Fig. 8a. In eqn. 27, G_kk and B_kk are the real and imaginary parts, respectively, of the diagonal element of the power network nodal admittance matrix for node k.

Fig. 8 Neural networks for evaluating Jacobian submatrices for load nodes

Differentiating the residual functions in eqns. 24 and 25 with respect to a_j^(p-1) and b_j^(p-1) (the real and imaginary parts of the voltage at node j, where j ≠ k) gives the Jacobian off-diagonal submatrix for a load node k in eqn. 28. G_kj and B_kj are the real and imaginary parts, respectively, of the off-diagonal element between nodes k and j of the power network nodal admittance matrix. Elements of the Jacobian submatrix in eqn. 28 are evaluated by the neural network in Fig. 8b.

Following the differentiation of the residual functions in eqns. 24 and 26 with respect to a_k^(p-1) and b_k^(p-1), the Jacobian diagonal submatrix for a generator node k is given in eqn. 29. Elements in the Jacobian submatrix in eqn. 29 are evaluated by the neural network in Fig. 9a.

Fig. 9 Neural networks for evaluating Jacobian submatrices for generator nodes

Differentiating the residual functions in eqns. 24 and 26 with respect to a_j^(p-1) and b_j^(p-1) (j ≠ k), the Jacobian off-diagonal submatrix for a generator node k is given in eqn. 30. The nonzero elements of the Jacobian submatrix in eqn. 30 are evaluated by the neural network in Fig. 9b.

For evaluating Jacobian matrix elements, each of the neural networks in Figs. 8 and 9 is a feed-forward two-layer neural network of small size. The maximum number of nodes in each is eight. The total number of these small-sized neural networks overall is equal to that of nonzero Jacobian submatrices. The evaluation scheme based on neural networks in Figs. 8 and 9 takes full advantage of the sparsity property of the Jacobian matrix structure.

3.5 Forming updated values for voltage variables at each iteration


Once the residual functions and Jacobian matrix elements have been formed in the neural networks of Figs. 7-9, the array of neural networks in the digital form of Fig. 5 updates the vector of voltage variables at iteration step p. If M is the number of power network nodes, there are 2(M-1) nodes in both the input layer and output layer of each of the neural networks N1, N2 and N3 in Fig. 5. x in Fig. 5 is the vector of real and imaginary parts of power network nodal voltages. The interconnections in these neural networks are determined directly from the power network configuration.

The overall neural network structure for load-flow analysis is shown in Fig. 10. The global feedback loop relates to the Newton-Raphson algorithm in which the step counter p is incremented at each iteration. The local feedback loop is that of the steepest-descent minimisation of quadratic function E^p at each Newton iteration.

Fig. 10 Neural network load-flow

Once nodal currents have been evaluated by the neural network NA1, the residual function evaluations by neural network NA2 and the Jacobian matrix element evaluations by neural network NA3 can be carried out independently of each other, as in Figs. 7-9 and in the overall structure of Fig. 10. All of the main tasks of the complete Newton-Raphson procedure are implemented in parallel.
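The task-independence claim can be illustrated in ordinary software, as below (illustrative Python with a thread pool; the tiny residual and Jacobian-element functions are stand-ins for the Fig. 7-9 networks, not the paper's hardware, and the Jacobian entries follow eqn. 27 as reconstructed above): once the nodal currents are available, the residual evaluations and the Jacobian-element evaluations have no data dependence on each other and can be dispatched concurrently:

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def residuals(a, b, re_I, im_I, P, Q):
    """Stand-in for NA2: eqns. 24 and 25 for all load nodes at once."""
    return a * re_I + b * im_I - P, b * re_I - a * im_I - Q

def jacobian_diagonal_terms(a, b, re_I, im_I, Gkk, Bkk):
    """Stand-in for NA3: the dC/db and dD/db entries of eqn. 27 for all load nodes."""
    return im_I - Bkk * a + Gkk * b, re_I - Bkk * b - Gkk * a

# assumed node data at iteration step p-1
a, b = np.array([1.02, 0.99]), np.array([0.0, -0.02])
re_I, im_I = np.array([0.5, -1.0]), np.array([-0.1, 0.3])
P, Q = np.array([0.5, -1.0]), np.array([0.1, -0.3])
Gkk, Bkk = np.array([5.0, 4.0]), np.array([-15.0, -12.0])

with ThreadPoolExecutor(max_workers=2) as pool:
    fut_res = pool.submit(residuals, a, b, re_I, im_I, P, Q)                     # NA2
    fut_jac = pool.submit(jacobian_diagonal_terms, a, b, re_I, im_I, Gkk, Bkk)   # NA3
    C, D = fut_res.result()
    dCdb, dDdb = fut_jac.result()
print(C, D, dCdb, dDdb)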

4 Validation studies

For the purposes of verifying the validity and correctness of the architecture in Fig. 10, results obtained by software simulation for a sample power network system have been compared with those from an established Newton-Raphson load-flow program. In the validation studies, the scalar constant k in the dynamic gradient system of eqn. 15 was set to -1000.0. The time step length in the steepest-descent algorithm (Δt in eqn. 18) was 10 μs.

In the local feedback loop in which the quadratic function E^p is minimised at each Newton iteration, checking for convergence in the loop is based on the criterion

|d(Δx_i^p(n))| ≤ ε_c   i = 1, 2, ..., N   (31)

A value of 10^-5 has been used for ε_c. The outputs from neural networks N2 and N3 in Fig. 10 are summated and scaled by 2kΔt to give d(Δx_i^p(n)).

Through the global feedback loop of Fig. 10, the overall load-flow solution is advanced through successive Newton-Raphson iterations to convergence. Checking the convergence of this loop is based on

|f_i(x^(p-1))| ≤ ε   i = 1, 2, ..., N   (32)

The same tolerance value ε was used in the validation studies as in the standard Newton-Raphson load-flow program. With these analysis constants and convergence values, the overall Newton-Raphson iteration loop in the neural architecture of Fig. 10 converged to give a solution identical to that from standard load-flow analysis.

With the tolerance ε_c in eqn. 31 set to 10^-5, the local minimisation loop of Fig. 10 typically requires 10-20 iterations to achieve convergence. The precise number of iterations depends on the iteration step in the overall solution. The product kΔt in Fig. 10 is a choice in determining the solution trajectory. In the present work, the choice of Δt = 10^-5 s and k = -1000 in the validation studies leads to the product kΔt of value -0.01. For investigation purposes, neural network solutions for kΔt = -0.001 and for kΔt = -0.02 have also been sought. Converged solutions were achieved in each case.

The gain setting kΔt has implications for the number of time steps required in achieving the minimisation of the quadratic function E^p. In the architecture of Fig. 10, the time step length Δt and the scalar coefficient k are represented as a block with constant gain 2kΔt. There is no inherent requirement for neural networks N1, N2 and N3 to complete their calculations in the step interval Δt.

It is interesting to quantify the total computation time of the neural system of Fig. 10. For this purpose, NP is used for the total number of Newton-Raphson iterations and t_NA1, t_NA2, t_NA3, t_N1, t_N2 and t_N3 are used for the computation times in solution in neural networks NA1, NA2, NA3, N1, N2 and N3, respectively.

If NC(p) is the number of iterations in the local loop which minimises quadratic function E^p at each Newton step and T is the total computing time of the complete architecture of Fig. 10,

T = NP[t_NA1 + max(t_NA2, t_NA3) + t_N3] + Σ_{p=1}^{NP} NC(p)(t_N1 + t_N2)   (33)

In eqn. 33,

max(t_NA2, t_NA3) = t_NA2   if t_NA2 > t_NA3   (34)

and

max(t_NA2, t_NA3) = t_NA3   if t_NA2 < t_NA3   (35)
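The eqn. 33 accounting is simple enough to check directly; the sketch below (illustrative Python with assumed per-network timings in arbitrary units) totals the per-Newton-iteration costs and the per-local-iteration costs exactly as eqns. 33-35 state:

def total_time(t_NA1, t_NA2, t_NA3, t_N1, t_N2, t_N3, NC):
    """Total computing time of eqn. 33; NC[p-1] is the local iteration count at Newton step p."""
    NP = len(NC)
    per_newton = t_NA1 + max(t_NA2, t_NA3) + t_N3   # eqns. 34 and 35 select the larger of NA2, NA3
    per_local = t_N1 + t_N2
    return NP * per_newton + sum(nc * per_local for nc in NC)

# assumed timings, with 10-20 local iterations per Newton step as reported in the validation studies
print(total_time(t_NA1=1.0, t_NA2=2.0, t_NA3=3.0,
                 t_N1=0.5, t_N2=0.5, t_N3=0.5, NC=[15, 12, 10]))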

5 Conclusion

Using the procedures developed here, power system Newton-Raphson load-flow analysis maps into an array of neural networks of general structure. Network sparsity is preserved completely. In each of the neural networks in the complete Newton-Raphson architecture, the pattern of crossconnections between input-layer and output-layer nodes is that of the sparse nodal admittance and Jacobian matrices of the Newton-Raphson formulation.

In terms of computing time overheads, the computing time of neural network load-flow analysis is independent of the number of nodes in the network analysed. The complete neural network architecture is robust. Its reliability in achieving a converged solution is the same as that of established programs running on sequential machines.

For hardware requirements, as they relate to the total number of internal connections, the architecture developed here preserves in its structure the pronounced sparsity of the power networks for which load-flow analysis is to be carried out. The ratio of the total number of neural network connections to the total number of nodes is unlikely to exceed seven or eight. Several different neural network chips are now available, mainly in forms intended for particular areas of application. More general systems now at an advanced stage of development have recently been reported [6].

The principal context of the research reported here is that of real time control in integrated power network systems with particular reference to the optimal power flow (OPF) function. When security constraints are included, OPF requires online solutions of very many large sets of load-flow equations. It is here where the ultrahigh speed of massively parallel computing in neural networks can offer major practical benefit. Very much more work is required to map the entire OPF function to neural network architectures.

One particular aspect of the architecture developed here, which is at present the subject of continuing investigations, is that of the choice of analysis parameters k and Δt. The product should be small so that a numerical implementation of the dynamic gradient system approaches the continuous time form. On the other hand, reducing kΔt increases the number of local iterations required in minimising objective function E^p at each Newton iteration. Discrete system analysis shows that the number of iterations depends on the largest eigenvalue of the product [J(x^(p-1))]^t [J(x^(p-1))]. The larger the numerical value, the more iterations are required. The dependence of the number of iterations on the size of the system is an indirect one.

In the present work, kΔt is constant throughout solution. Under investigation at the present time is a control strategy by which kΔt is found at each Newton iteration as that for which the number of local iterations is reduced to a minimum.

Whilst the research which the present paper seeks to report has been devoted to the particular area of load- flow analysis in power systems, some of its findings have much wider implications. They are applicable to use in solving nonlinear equation systems encountered in any field. The neural network method of solution developed here is a general one for solving nonlinear equation systems of any form, dimensions and structure.

6 References

1 TANK, D.W., and HOPFIELD, J.J.: 'Simple "neural" optimisation networks: an A/D converter, a signal decision circuit, and a linear programming circuit', IEEE Trans., 1986, CAS-33, (5), pp. 533-541
2 KENNEDY, M.P., and CHUA, L.O.: 'Neural networks for nonlinear programming', IEEE Trans., 1988, CAS-35, (5), pp. 554-562
3 RODRIGUEZ-VAZQUEZ, A., DOMINGUEZ-CASTRO, R., RUEDA, A., HUERTAS, J.L., and SANCHEZ-SINENCIO, E.: 'Nonlinear switched-capacitor neural networks for optimisation problems', IEEE Trans., 1990, CAS-37, (3), pp. 384-398
4 MAA, C.Y., and SHANBLATT, M.A.: 'Linear and quadratic programming neural network analysis', IEEE Trans., 1992, NN-3, (4), pp. 580-594
5 STOTT, B., ALSAC, O., and MONTICELLI, A.J.: 'Security analysis and optimisation', Proc. IEEE, 1987, 75, (12), pp. 1623-1644
6 WATANABE, T., KIMURA, K., AOKI, M., SAKATA, T., and ITO, K.: 'A single 1.5-V digital chip for a 10^6 synapse neural network', IEEE Trans., 1993, NN-4, (3), pp. 381-393

