Earth-return path impedances of underground cables Part 2 Evaluations using neural networks
TT Nguyen
Indexing tertns Underground cables Impedunces Neurul networks
Abstract The aim of the paper is to train an array of neural networks to provide a procedure with high accuracy and low computing time requirement for evaluating the earth-return path impedances of underground cables The training set required is formed by using a direct numerical integration method to evaluate the infinite integrals in the earth-return path formulations The training is extensive but once completed it is valid for any cable configuration as it arises The training leads to a universal set of weighting coefficients for an array of networks each of which has three input nodes one output node and two hidden layers Set up with these coefficients the computing time requirements in calculating sets of series-path parameters for underground cables is reduced by a factor of about 1000 from that using a direct numerical evaluation of infinite integrals
List of principal symbols
co angular frequency pe earth resistivity h free-space permeability (4x x Hm)
dJk distance between the conductor of phase cable j and that of phase cable k
dJk distance between the conductor of phase cable j and the image of the conductor of phase cable k
ItJ hk depth below the earth plane of cables j and k respectively
horizontal separation between the conductors of cable j and of cable k weighting coefficient of the connection between node i and node j of a neural network
a W h I P e
hin (hj + hkIi2
xlk
wy
1 Introduction
For some time now there has been active research interest in deriving simplified closed-form approxima- tions for the infinite integrals which arise in the earth- 0 IEE 1998 IEE Proceedings online no I9982354 Paper received 19th January 1998 The author is with the Energy Systems Centre Department of Electrical and Electronic Engineering The University of Western Australia Ned- lands Perth WA 6907 Australia
IEE Proc-Gener Trunsni Distrib Vol 145 No 6 Noveinher 1998
return path impedances of underground cables [ 1-41 The aim of this previous work has been to lower the computing-time overheads of cable parameter calcula- tions in comparison with those when the infinite inte- grals in them are evaluated by direct numerical methods [5]
All of the approximate methods so far proposed have their limitations The danger in using them is always that of introducing errors without corresponding means of quantifying their extent For the parameter sets of underground cables on which extensive analyses might be based parameter values on which reliance can be placed is an irreducible requirement of all analysis and simulation practice To introduce uncertainty in the parameters puts at risk the foundations of analysis and system studies
Against this background the present paper aims to contribute to research on closed-form approximations for earth-return path impedances It addresses ques- tions of accuracy and computing time requirements in infinite integral evaluations The new method of func- tion approximation which the paper seeks to report draws on the powerful nonlinear mapping capabilities of neural networks [6 71 The objective here is to syn- thesise neural networks to evaluate the infinite integrals in the expressions for underground cable earth-return path impedances
Training data for neural network synthesis cover a range of frequencies earth resistivity and cable config- urations that are encountered in practice It is derived from the rigorous solution of infinite integrals by direct numerical integration [5] It is absolutely essential that the training data is of the highest possible accuracy Only the comprehensive solution procedure developed in [5] can fulfil this training data requirement
Neural network training in this application is exten- sive In the authors work there are about 20000 train- ing cases Providing the data for these requires a total of about 72h of computing time on a SUN SPARC 4 workstation for which the clock frequency is 110MHz Using this data the training process itself requires about 720h of computing time The result of training is that of the topology of an array of neural networks together with their weighting coefficients which accu- rately represent the nonlinear relationships in the infi- nite integrals encountered and involve earth resistivity cable configuration and the frequency for which cable parameter values are to be found
Once synthesised the neural network array devel- oped here is applicable to any underground cable sec- tion as it arises Training is only needed once The set of weighting coefficients that it gives is available on the Internet site of the Energy Systems Centre at the Uni-
621
versity of Western Australia at httpllwwweeuwaed- uaui-escl Given this availability advantage can be taken of the developments of this paper without the need for the extensive neural network training that is involved
Comprehensive comparisons reported in the paper confirm that the consistently high accuracy of the new method over the wide range of frequencies likely to arise in practice is achieved with low computing time overheads On a SUN SPARC 4 workstation the com- puting time for a representative evaluation involving 600 frequency points as a part of cable calculations is about 6s in comparison with about 2h when the infi- nite integrals involved are evaluated directly by numer- ical integration
No previously published work of which the author is aware has referred to the developments to which the present paper is devoted
2 Previous research
The infinite integral encountered in the expression for earth-return path mutual impedance associated with cable j and cable k is [ 1 51
x exp ( - 2 h amp d m ) cos ( x j k h u ) d u
(1) In eqn 1
X j k = horizontal distance between cable j and cable k hj + h k h ___
2 where hj and hk are the depths below the earth plane of cables j and k respectively and
In eqn 3 w is the angular frequency pr is the earth resistivity and
For self impedance hj = hk and xjk is the outer radius of the cable
Attempts have been made to expand the infinite inte- gral JG k) in eqn 1 in terms of an infinite series [l] However rapid convergence of the series is confined to the low frequency range only For a limited range of frequencies earth resistivity and cable separations the following closed-form function has been derived from the series expansion to approximate the earth-return path impedance [l]
= 416 x 10-7Hm
where y is Eulers constant Due to its simplicity the approximation in eqn 4 is widely used
Recently another closed-form approximation has been derived for the earth-return path impedance of underground cables [2 ] Errors encountered in this approximation increase with frequency particularly for frequencies greater than about 1OkHz
There has also been a proposal to use the concept of complex depth [3 4 81 Drawing on this concept an expression for earth-return path self impedance has
628
been derived but that for the mutual impedance has not yet been established [3 41
As the infinite series derived by Carson [9] to repre- sent the infinite integral in the overhead line earth- return path impedance converges rapidly there has even been a proposal [ 3 ] to approximate the integral for underground cable in eqn 1 by that derived by Carson for overhead lines With this approximation the rapidly convergent infinite series for overhead lines are used to evaluate the underground cable earth- return path impedances However this approach leads to significant error as the frequency increases
All of the previous methods have led to simplifica- tions in the evaluations of the infinite integrals with low computing time requirements However the meth- ods can lead to significant errors particularly in the high frequency range
This paper proposes an entirely new method for eval- uating underground cable earth-return path imped- ances which has a very high degree of accuracy and low computing time requirements Earth-return path impedance evaluations are achieved by neural network simulations in software
3 networks
Nonlinear function representation by neural
On the basis of eqn 1 the integral JG k) is a nonlinear function of hda and xJI(dcL If M is this nonlinear function then
J ( j k ) = Al (hnh Xjkamp) (5) A multilayer feedforward neural network (MFNN) has the powerful property that through training it can represent any nonlinear function of any degree of com- plexity [6 71 Based on this nonlinear mapping prop- erty it is proposed to use MFNNs to represent the nonlinear function M(hda xikda) in eqn 5 This is to be achieved by training the MFNNs After training the MFNNs are used to evaluate the infinite integral JG k ) for any particular underground cable
increasina level
input layer first hidden second hidden output layer (lowest level) layer layer (highest level)
Fig 1 0 node (neuron)
Multilayer jeeciforwarri neuml network
4 Neural networks
A typical feedforward layered neural network is shown in Fig 1 Including the input and output layers there are four layers in this example Each layer can have any number of nodes (neurons) The network has a hierarchical structure The layer at the highest level is the output layer That at the lowest level is the input layer Layers which are internal to the network are referred to as internal or hidden layers The level of layers increases from the lowest level of the input layer to the highest level of the output layer
IEE Proc-Giver Tiuniin Dutrib Vol 145 No 6 Noieinhei 1998
As in Fig 2 the interconnection between two nodes j and i is a unidirectional one The interconnection from node j to node i carries the implication that the output from node j multiplied by a weighting coefficient wji is input to node i In a feedforward network only out- puts from nodes at a lower layer can be connected to nodes at a higher level
j I
a F 0 Wi
Fig2 Interconnection from node j to node i w weighting coefficient associated with interconnection from node j to node i
output of node i
Fig3 wjr weighting coefficient associated with interconnection from node j to node i
Inputs and output for node i
We now give the input and output relationships for any node i which is not in the input layer Each node i has a number of input connections to it as in Fig 3 If yj is the output of node j connected to node i and wji is the weight associated with that connection then the net input ui for node i is formed from the weighted sum
J
The summation in eqn 6 extends over all nodesjs that have input connections to node i The output y of node i is a nonlinear function of its net input U
(7) In eqn 7 denotes a nonlinear function and Oi is the threshold associated with node i A commonly used nonlinear continuous function for a node is the sigmoid function For node i
n
The numerical value of J in eqn 8 is in the range -1 - 1 The output yi of node i can then be connected to other nodes in the layers at a higher level than that for node i When node i is in the input layer its output is the same as the input The role of nodes in the input layer is that of connecting specified inputs to other nodes in the higher layers Fig 1 shows that each input node is connected to every node in the first hidden layer In general input nodes can be connected to any other node in a higher layer
In a feedforward network when its weights and thresholds are known values for outputs in the output layer can be evaluated once inputs are specified at the input layer Weights and thresholds are the parameters of the neural network
For a given neural network structure and its parame- ters the output ykp of node k in the output layer is a
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998
nonlinear function of the inputs to the input layer of the network We denote by xp the vector of network inputs The dimension of this vector sets the number of nodes in the input layer We use for the nonlinear rela- tionship between output ykp and vector xp
The mapping function gk derives from the inputoutput relationship for each node as in eqn 8 the weightings of connections of the network and the network struc- ture expressed in terms of the number of layers the number of nodes in them and the interconnections between nodes
Eqn 9 relates to one node in the output layer We now extend this expression We use yp for the vector of the outputs for all nodes in the output layer so that it is a vector of outputs for the complete network Retain- ing xp for the vector of inputs we use for the overall mapping which the network provides
Beginning with a selected network structure the step of synthesising a neural network to meet the requirements of a particular application involves finding the weights of all the connections and the thresholds for all the nodes (ei in eqn 8) so that the mapping which the net- work provides in eqn 10 matches that of the applica- tion for which the network is intended This is referred to as the training phase
In the actual system to which the neural network is to be applied there is a known vector of outputs for a given vector of inputs The training set is a collection of inputoutput vector pairs on which the training is to be based The training set should be large enough to ensure that the network when trained closely matches the requirements of its application
The weightings of the connections between nodes together with the threshold values for all nodes are found by first forming an error between outputs speci- fied from the application for given inputs and those given by the neural network in response to the same inputs
As in eqn 9 yk denotes the output of node k in the output layer as given by the response of the neural net- work to a given input vector We denote by dkp the out- put specified for the node from the requirements of the application The error for this node is then dk - ykp Using a quadratic form for minimisation purposes leads to the error function for all nodes in the output layer
Using eqn 11 we can form Ep for each and every case in the training set The output yk is formed for each case from the neural network response Output dkp is specified for each case from the application in which the network is to be used A total error function for all cases in the training set is then formed from
Er = C E p (12) P
In the present work the total error function E is minimised using the quasi-Newton method [lo] This appears to offer advantage in terms of the rate at which the minimisation sequence converges in compari- son with that of the standard backpropagation training algorithm based on the first-order gradient descent
629
method [l 11 The gradient evaluations which the quasi- Newton method requires are summarised in the Appen- dix (Section 1 11)
The threshold Oi associated with node i can be repre- sented equivalently by a bias node with constant input equal to 10 and the connection from the bias node to node i This approach is adopted in the present work The training then involves the finding of weighting coefficients only including those from the bias node to other nodes
By progressively increasing the number of hidden layers in a feedforward neural network and the number of nodes in them the complexity of mapping of finite dimension that can be achieved is progressively expanded [6 71
5 path impedances
The first stage in synthesising neural networks to repre- sent the nonlinear function M(hdu xjkdu) in eqn 5 is to form a training set The inputs to the neural net- works are hdu and xjkdu and the output is JG k) The integral in eqn 1 is integrated numerically for a wide range of hdu and xjkdu to form the training set using the procedure in [5] The range of hldu in the evalua- tion is from 0 to 6 From numerical evaluation it has been confirmed that JG k ) tends to zero when hdu is greater than about 60 irrespective of xjkdu Therefore there is no need to evaluate JG k ) for h77lu beyond 60 For each value of hdu the integral is evaluated for a range of xjkdu from 0 to 72 corresponding to the ratio xjklh (ie ratio of horizontal separation to mean depth of two-phase cables) in the range 0 - 12 There are 18 662 training cases in total These training cases need only be formed once
In principle a single neural network with two inputs hdu and xjkdu can be trained to represent the com- plex function M expressed in terms of its real and imaginary parts The outputs of the neural network are then the real and imaginary parts of M The neural net- work is a two-input two-output system
Alternatively the nonlinear function M can be repre- sented by two separate neural networks The first neu- ral network is trained to represent the real part of M and the second one the imaginary part of M There are then two two-input single-output neural networks Each neural network receives hdu and xikdu as inputs
In the present work the second approach is adopted in which separate neural networks represent the real and imaginary parts of function M Many experiments have been carried out which have confirmed that this approach gives better convergence in training
By experiment it has been found that by separating into two ranges of hdu better convergence in training was achieved One set of neural networks represents the function M for hdu in the range 0 - 2 and the xjkhJl ratio in the range 0 - 12 Another set of neural net- works represents the function M for hldu in the range 2 - 6 and xjkhIn ratio in the range 0 - 12 There are four neural networks (i) Neural network RL This neural network represents the real part of function M in the range 0 I hdu I 2 and I xjkh I 12 (ii) Neural network RH This neural network represents the real part of function M in the range 2 s hzu I 6 and 0 I xjklh I 12
Neural networks for evaluating earth-return
630
(iii) Neural network ZL This neural network represents the imaginary part of function M in the range 0 I hdu I 2 and 0 I xJkhn I 12 (iv) Neural network IH This neural network represents the imaginary part of function M in the range 2 I hlu I 6 and 0 I xlkhnl I 12 When hdu gt 6 the integral JG k ) is set to zero irre- spective of x du
Prior to training hdu and xJkdu are scaled down by a factor of 300 so that the inputs to neural networks have values less than 1 Since the output is well inside the range -1 - 1 there is no need for output scaling
The second stage is to determine the configuration of the neural networks the number of layers and the number of nodes in each layer The neural network structure is that of MFNN of the form in Fig 1 Including the bias node with a constant input of 10 there are three nodes in the input layer of each neural network The output layer of each neural network has one node The number of hidden layers and number of nodes in each hidden layer are determined by succes- sive trials and corrections to achieve low errors both in training and subsequent testing
After training and testing many MFNNs with differ- ent numbers of hidden layers and different numbers of nodes in each hidden layer the final configuration that has the minimum number of hidden layers and the minimum number of nodes in each hidden layer which achieve both low training errors and low test errors is that shown in Fig 4 Each of the four neural networks RL RH IL and IH in (i)-(iv) has the configuration of Fig 4 However their weighting coefficients are differ- ent
In total the neural network structure in Fig 4 has 36 nodes Including the bias node the input layer has three nodes The output layer has one node The first hidden layer has 20 nodes and the second hidden layer has 12 nodes The bias node has connections to all hid- den nodes and to the output node Each neural net- work has 325 weighting coefficients
The inputs x1 and x2 are given as
Jk
The scaling factor s in eqns 13 and 14 is set to 300 This ensures that the neural network inputs are well within the range [-1 11
6 Representative results
For the purpose of evaluations a three-phase 150kV cable [12] is used It is at a depth below ground of lm Individual phase cables are in horizontal formation and have a separation of 035m The remaining cable data [12] is summarised in the Appendix (Section 112)
Table 1 gives the earth-return path self impedances of each phase cable for a range of frequency from 50Hz to lMHz and earth resistivity in the range 1 ~
l00Qm The self impedances are first calculated using neural networks which have been trained in Section 5 They are then compared with the impedances evaluated using numerical integration
Table 2 gives the earth-return path mutual imped- ances between two adjacent phase cables The results
IEE Ptoc -Gener Trcinsm Drstrrh Vol 145 No 6 Noveinber 1998
output
Fig 4 bias node 3 has connections to all nodes in hidden layers and to output node x = ha30 x2 = ~~Uni30
Structure of neural networks for evuluuting the iizJinite integrul
Table 1 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 1 o5 106
1 o3 105 106
1 o3 1 o5 106
50
50
Earth resistivity Qm Neural network Numerical integration
Earth-return path self impedance Bm
1 1 1 1 20 20 20 20 100 100 100 100
0504 IO-^ + jo482 IO-^ 0107 x
0112 +jO429 0966 + j2816 0494 x +jO575 x
0101 x IO- + j0965 x
01 13 + j0656 1166 +j4815 0490 x
0993 x
0107 + j0768 1158 + i6050
+ j0768 x IO-
+ j0628 x IOrdquo + j0107 x IO-rsquo
0504 x + j0482 x
0107 x + j0768 x
0112 + j0429 0966 + j2816 0497 x +jO577 x
0101 x +jO964 x
01 13 + j0656 1166 +j4815 0501 x + j0628 x
0996 x
0107 + j0768 1158 + i6050
+ jO107 x IO-1
from neural networks are also compared with those from numerical integration
In both Tables 1 and 2 neural network solutions are almost identical with numerical integration results However the neural network procedure is about 1000 times faster than evaluations based on direct numerical integration
7 Incorporation into cable parameter calculations
To incorporate the developments of this paper into the existing software for practical underground cable
parameter set evaluations requires software which implements the neural network functions as in eqns 6- 8 Outputs from the neural networks are transferred into the earth-return path section of the composite soft- ware system which provides cable parameter calcula- tions
Starting from existing software for cable parameter calculations one option is to embed the neural network software module in it Alternatively the neural net- work software can be seen as an additional module providing a critical part of overall parameter calcula- tions
IEE Pix-Gener Trunsm Distrib Vol 145 No 6 November I998 63 I
Table 2 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 105 106 50 1 o3 1 o5 106
1 o3 1 o5 106
50
Earth resistivity Qm Neural network Numerical integration
1 0504 x + j0349 x 0504 x + j0349 x
1 0107 x IO-rsquo+ j0503 x IO-rsquo 0107 x + j0503 x
1 0105 +jO166 0105 + j0166 1 0623 + j0372 0623 + j0372 20 0495 x + j0441 x IOrdquo 0497 x + j0445 x
20 20 0113 + j0391 0113 + j0391 20 1127 + j2175 1127 + j2175 100 0492 x + j0494 x IOW3 0501 x + j0495 x
100 0997 x + j0800 x IO-rsquo 0997 x + j0801 x IO-rsquo 100 0107 + j0503 0107 + j0503 100 1149 + $3402 1149 +~3402
Earth-return path mutual impedance Qm
0101 x IO-rsquo+ j0699 x IO-rsquo 0101 x IO-rsquo +j0699 x IO-rsquo
8 Conclusions
The outcome of the work reported in this paper is an array of neural networks that evaluates the infinite integrals in the expressions for underground cable earth-return path impedances The neural networks have been fully determined in terms of their topology and weighting coefficients They have immediate appli- cation in underground cable parameter evaluations Extensive testing has confirmed that the neural net- work solutions are of consistently high accuracy They are almost identical with the numerical integration results for a wide range of frequencies earth resistivity and cable configuration Computation by neural net- works is however about 1000 times faster than that by numerical integration The extensive neural network training that this requires can be avoided by using the set of universal weighting coefficients to which the research reported in this paper has led and which are held for general access and use at httpwwweeu- waeduauf-escl
9 Acknowledgments
The author wishes to thank the University of Western Australia for permission to publish this paper He also wishes to thank Professor WD Humpage for discus- sions relating to the developments of the paper and for his many contributions to its preparation
10 References
1 WEDEPOHL LM and WILCOX DJ lsquoTransient analysis of underground power-transmission systems - system-model and wave-propagation characteristicsrsquo Proc IEE 1973 120 (2) pp 253-260
2 SAAD O GABA G and GIROUX M lsquoA closed-form approximation for ground return impedance of underground cablesrsquo IEEE Trans 1996 PWD-11 (3) pp 1536-1545 DOMMEL HW lsquoEMTP Reference Manual Vol 3rsquo (Bonneville Power Administration 1986) SEMLYEN A lsquoDiscussion on rsquoOverhead line parameters from handbook formulas and computer programsrsquo by Dommel HWrsquo IEEE Trans 1985 PAS-104 (2) pp 366-372
5 NGUYEN TT lsquoEarth-return path impedances of underground cables Part 1 Numerical integration of infinite integralsrsquo Proc IEE ((5470C)) HECHT-NIELSEN R lsquoKolmogorovrsquos mapping neural network existence theoremrsquo Proceedings of 1987 IEEE international con- ference on Neural networks 1987 (IEEE Press) Vol 3 pp 11-13
3
4
6
632
7 HECHT-NIELSEN R lsquoTheory of the backpropagation neural networkrsquo Proceedings of the international joint conference on Neurul networks 1989 Vol 1 pp 593-605
8 DERI A TEVAN G SEMLYEN A and CASTANHEI- RA A lsquoThe complex ground return plane - A simplified model for homogeneous and multi-layer earth returnrsquo IEEE Trans 1981 PAS-100 (8) pp 3686-3693
9 CARSON JR lsquoWave propagation in overhead wires with ground returnrsquo Bell Syst Tech J 1926 5 pp 539-554
10 FLETCHER R lsquoA new approach to the variable metric algo- rithmrsquo Comput J 1970 13 pp 317-322
11 RUMELHART DE HINTON GE and WILLIAMS RG Learning internal representation by error propagationrsquo in lsquoDis-
tributed parallel processing explorations in the microstructure of cognition Vol 1rsquo (MIT Press Cambridge 1986) Chap 8 pp 3 1 8-36 1 ~~ ~ ~
12 KERSTEN WFJ lsquoSurge arresters for sheath protection in cross-bonded cable systemrsquo Proc IEE 1979 126 (12) pp 1255 1262
11 Appendices
11 I Gradient evaluations From eqn 12 the total error function is given by
E~ = C E ~ (15 P
where
The partial derivative of the total error function ET in eqn 15 with respect to weight wii between nodes i and j is formed from
(17) P
Individual partial derivative dEpawii in eqn 17 is given in the following based on backward error propaga- tions [l l]
If node j is in a hidden layer tijp in eqn 18 is evaluated recursively from
m
The summation in eqn 19 extends over all nodes ms that have connections from node j incident to them
IEE Proc-Gener Transm Disrrih Vol 145 No 6 November 1998
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633
versity of Western Australia at httpllwwweeuwaed- uaui-escl Given this availability advantage can be taken of the developments of this paper without the need for the extensive neural network training that is involved
Comprehensive comparisons reported in the paper confirm that the consistently high accuracy of the new method over the wide range of frequencies likely to arise in practice is achieved with low computing time overheads On a SUN SPARC 4 workstation the com- puting time for a representative evaluation involving 600 frequency points as a part of cable calculations is about 6s in comparison with about 2h when the infi- nite integrals involved are evaluated directly by numer- ical integration
No previously published work of which the author is aware has referred to the developments to which the present paper is devoted
2 Previous research
The infinite integral encountered in the expression for earth-return path mutual impedance associated with cable j and cable k is [ 1 51
x exp ( - 2 h amp d m ) cos ( x j k h u ) d u
(1) In eqn 1
X j k = horizontal distance between cable j and cable k hj + h k h ___
2 where hj and hk are the depths below the earth plane of cables j and k respectively and
In eqn 3 w is the angular frequency pr is the earth resistivity and
For self impedance hj = hk and xjk is the outer radius of the cable
Attempts have been made to expand the infinite inte- gral JG k) in eqn 1 in terms of an infinite series [l] However rapid convergence of the series is confined to the low frequency range only For a limited range of frequencies earth resistivity and cable separations the following closed-form function has been derived from the series expansion to approximate the earth-return path impedance [l]
= 416 x 10-7Hm
where y is Eulers constant Due to its simplicity the approximation in eqn 4 is widely used
Recently another closed-form approximation has been derived for the earth-return path impedance of underground cables [2 ] Errors encountered in this approximation increase with frequency particularly for frequencies greater than about 1OkHz
There has also been a proposal to use the concept of complex depth [3 4 81 Drawing on this concept an expression for earth-return path self impedance has
628
been derived but that for the mutual impedance has not yet been established [3 41
As the infinite series derived by Carson [9] to repre- sent the infinite integral in the overhead line earth- return path impedance converges rapidly there has even been a proposal [ 3 ] to approximate the integral for underground cable in eqn 1 by that derived by Carson for overhead lines With this approximation the rapidly convergent infinite series for overhead lines are used to evaluate the underground cable earth- return path impedances However this approach leads to significant error as the frequency increases
All of the previous methods have led to simplifica- tions in the evaluations of the infinite integrals with low computing time requirements However the meth- ods can lead to significant errors particularly in the high frequency range
This paper proposes an entirely new method for eval- uating underground cable earth-return path imped- ances which has a very high degree of accuracy and low computing time requirements Earth-return path impedance evaluations are achieved by neural network simulations in software
3 networks
Nonlinear function representation by neural
On the basis of eqn 1 the integral JG k) is a nonlinear function of hda and xJI(dcL If M is this nonlinear function then
J ( j k ) = Al (hnh Xjkamp) (5) A multilayer feedforward neural network (MFNN) has the powerful property that through training it can represent any nonlinear function of any degree of com- plexity [6 71 Based on this nonlinear mapping prop- erty it is proposed to use MFNNs to represent the nonlinear function M(hda xikda) in eqn 5 This is to be achieved by training the MFNNs After training the MFNNs are used to evaluate the infinite integral JG k ) for any particular underground cable
increasina level
input layer first hidden second hidden output layer (lowest level) layer layer (highest level)
Fig 1 0 node (neuron)
Multilayer jeeciforwarri neuml network
4 Neural networks
A typical feedforward layered neural network is shown in Fig 1 Including the input and output layers there are four layers in this example Each layer can have any number of nodes (neurons) The network has a hierarchical structure The layer at the highest level is the output layer That at the lowest level is the input layer Layers which are internal to the network are referred to as internal or hidden layers The level of layers increases from the lowest level of the input layer to the highest level of the output layer
IEE Proc-Giver Tiuniin Dutrib Vol 145 No 6 Noieinhei 1998
As in Fig 2 the interconnection between two nodes j and i is a unidirectional one The interconnection from node j to node i carries the implication that the output from node j multiplied by a weighting coefficient wji is input to node i In a feedforward network only out- puts from nodes at a lower layer can be connected to nodes at a higher level
j I
a F 0 Wi
Fig2 Interconnection from node j to node i w weighting coefficient associated with interconnection from node j to node i
output of node i
Fig3 wjr weighting coefficient associated with interconnection from node j to node i
Inputs and output for node i
We now give the input and output relationships for any node i which is not in the input layer Each node i has a number of input connections to it as in Fig 3 If yj is the output of node j connected to node i and wji is the weight associated with that connection then the net input ui for node i is formed from the weighted sum
J
The summation in eqn 6 extends over all nodesjs that have input connections to node i The output y of node i is a nonlinear function of its net input U
(7) In eqn 7 denotes a nonlinear function and Oi is the threshold associated with node i A commonly used nonlinear continuous function for a node is the sigmoid function For node i
n
The numerical value of J in eqn 8 is in the range -1 - 1 The output yi of node i can then be connected to other nodes in the layers at a higher level than that for node i When node i is in the input layer its output is the same as the input The role of nodes in the input layer is that of connecting specified inputs to other nodes in the higher layers Fig 1 shows that each input node is connected to every node in the first hidden layer In general input nodes can be connected to any other node in a higher layer
In a feedforward network when its weights and thresholds are known values for outputs in the output layer can be evaluated once inputs are specified at the input layer Weights and thresholds are the parameters of the neural network
For a given neural network structure and its parame- ters the output ykp of node k in the output layer is a
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998
nonlinear function of the inputs to the input layer of the network We denote by xp the vector of network inputs The dimension of this vector sets the number of nodes in the input layer We use for the nonlinear rela- tionship between output ykp and vector xp
The mapping function gk derives from the inputoutput relationship for each node as in eqn 8 the weightings of connections of the network and the network struc- ture expressed in terms of the number of layers the number of nodes in them and the interconnections between nodes
Eqn 9 relates to one node in the output layer We now extend this expression We use yp for the vector of the outputs for all nodes in the output layer so that it is a vector of outputs for the complete network Retain- ing xp for the vector of inputs we use for the overall mapping which the network provides
Beginning with a selected network structure the step of synthesising a neural network to meet the requirements of a particular application involves finding the weights of all the connections and the thresholds for all the nodes (ei in eqn 8) so that the mapping which the net- work provides in eqn 10 matches that of the applica- tion for which the network is intended This is referred to as the training phase
In the actual system to which the neural network is to be applied there is a known vector of outputs for a given vector of inputs The training set is a collection of inputoutput vector pairs on which the training is to be based The training set should be large enough to ensure that the network when trained closely matches the requirements of its application
The weightings of the connections between nodes together with the threshold values for all nodes are found by first forming an error between outputs speci- fied from the application for given inputs and those given by the neural network in response to the same inputs
As in eqn 9 yk denotes the output of node k in the output layer as given by the response of the neural net- work to a given input vector We denote by dkp the out- put specified for the node from the requirements of the application The error for this node is then dk - ykp Using a quadratic form for minimisation purposes leads to the error function for all nodes in the output layer
Using eqn 11 we can form Ep for each and every case in the training set The output yk is formed for each case from the neural network response Output dkp is specified for each case from the application in which the network is to be used A total error function for all cases in the training set is then formed from
Er = C E p (12) P
In the present work the total error function E is minimised using the quasi-Newton method [lo] This appears to offer advantage in terms of the rate at which the minimisation sequence converges in compari- son with that of the standard backpropagation training algorithm based on the first-order gradient descent
629
method [l 11 The gradient evaluations which the quasi- Newton method requires are summarised in the Appen- dix (Section 1 11)
The threshold Oi associated with node i can be repre- sented equivalently by a bias node with constant input equal to 10 and the connection from the bias node to node i This approach is adopted in the present work The training then involves the finding of weighting coefficients only including those from the bias node to other nodes
By progressively increasing the number of hidden layers in a feedforward neural network and the number of nodes in them the complexity of mapping of finite dimension that can be achieved is progressively expanded [6 71
5 path impedances
The first stage in synthesising neural networks to repre- sent the nonlinear function M(hdu xjkdu) in eqn 5 is to form a training set The inputs to the neural net- works are hdu and xjkdu and the output is JG k) The integral in eqn 1 is integrated numerically for a wide range of hdu and xjkdu to form the training set using the procedure in [5] The range of hldu in the evalua- tion is from 0 to 6 From numerical evaluation it has been confirmed that JG k ) tends to zero when hdu is greater than about 60 irrespective of xjkdu Therefore there is no need to evaluate JG k ) for h77lu beyond 60 For each value of hdu the integral is evaluated for a range of xjkdu from 0 to 72 corresponding to the ratio xjklh (ie ratio of horizontal separation to mean depth of two-phase cables) in the range 0 - 12 There are 18 662 training cases in total These training cases need only be formed once
In principle a single neural network with two inputs hdu and xjkdu can be trained to represent the com- plex function M expressed in terms of its real and imaginary parts The outputs of the neural network are then the real and imaginary parts of M The neural net- work is a two-input two-output system
Alternatively the nonlinear function M can be repre- sented by two separate neural networks The first neu- ral network is trained to represent the real part of M and the second one the imaginary part of M There are then two two-input single-output neural networks Each neural network receives hdu and xikdu as inputs
In the present work the second approach is adopted in which separate neural networks represent the real and imaginary parts of function M Many experiments have been carried out which have confirmed that this approach gives better convergence in training
By experiment it has been found that by separating into two ranges of hdu better convergence in training was achieved One set of neural networks represents the function M for hdu in the range 0 - 2 and the xjkhJl ratio in the range 0 - 12 Another set of neural net- works represents the function M for hldu in the range 2 - 6 and xjkhIn ratio in the range 0 - 12 There are four neural networks (i) Neural network RL This neural network represents the real part of function M in the range 0 I hdu I 2 and I xjkh I 12 (ii) Neural network RH This neural network represents the real part of function M in the range 2 s hzu I 6 and 0 I xjklh I 12
Neural networks for evaluating earth-return
630
(iii) Neural network ZL This neural network represents the imaginary part of function M in the range 0 I hdu I 2 and 0 I xJkhn I 12 (iv) Neural network IH This neural network represents the imaginary part of function M in the range 2 I hlu I 6 and 0 I xlkhnl I 12 When hdu gt 6 the integral JG k ) is set to zero irre- spective of x du
Prior to training hdu and xJkdu are scaled down by a factor of 300 so that the inputs to neural networks have values less than 1 Since the output is well inside the range -1 - 1 there is no need for output scaling
The second stage is to determine the configuration of the neural networks the number of layers and the number of nodes in each layer The neural network structure is that of MFNN of the form in Fig 1 Including the bias node with a constant input of 10 there are three nodes in the input layer of each neural network The output layer of each neural network has one node The number of hidden layers and number of nodes in each hidden layer are determined by succes- sive trials and corrections to achieve low errors both in training and subsequent testing
After training and testing many MFNNs with differ- ent numbers of hidden layers and different numbers of nodes in each hidden layer the final configuration that has the minimum number of hidden layers and the minimum number of nodes in each hidden layer which achieve both low training errors and low test errors is that shown in Fig 4 Each of the four neural networks RL RH IL and IH in (i)-(iv) has the configuration of Fig 4 However their weighting coefficients are differ- ent
In total the neural network structure in Fig 4 has 36 nodes Including the bias node the input layer has three nodes The output layer has one node The first hidden layer has 20 nodes and the second hidden layer has 12 nodes The bias node has connections to all hid- den nodes and to the output node Each neural net- work has 325 weighting coefficients
The inputs x1 and x2 are given as
Jk
The scaling factor s in eqns 13 and 14 is set to 300 This ensures that the neural network inputs are well within the range [-1 11
6 Representative results
For the purpose of evaluations a three-phase 150kV cable [12] is used It is at a depth below ground of lm Individual phase cables are in horizontal formation and have a separation of 035m The remaining cable data [12] is summarised in the Appendix (Section 112)
Table 1 gives the earth-return path self impedances of each phase cable for a range of frequency from 50Hz to lMHz and earth resistivity in the range 1 ~
l00Qm The self impedances are first calculated using neural networks which have been trained in Section 5 They are then compared with the impedances evaluated using numerical integration
Table 2 gives the earth-return path mutual imped- ances between two adjacent phase cables The results
IEE Ptoc -Gener Trcinsm Drstrrh Vol 145 No 6 Noveinber 1998
output
Fig 4 bias node 3 has connections to all nodes in hidden layers and to output node x = ha30 x2 = ~~Uni30
Structure of neural networks for evuluuting the iizJinite integrul
Table 1 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 1 o5 106
1 o3 105 106
1 o3 1 o5 106
50
50
Earth resistivity Qm Neural network Numerical integration
Earth-return path self impedance Bm
1 1 1 1 20 20 20 20 100 100 100 100
0504 IO-^ + jo482 IO-^ 0107 x
0112 +jO429 0966 + j2816 0494 x +jO575 x
0101 x IO- + j0965 x
01 13 + j0656 1166 +j4815 0490 x
0993 x
0107 + j0768 1158 + i6050
+ j0768 x IO-
+ j0628 x IOrdquo + j0107 x IO-rsquo
0504 x + j0482 x
0107 x + j0768 x
0112 + j0429 0966 + j2816 0497 x +jO577 x
0101 x +jO964 x
01 13 + j0656 1166 +j4815 0501 x + j0628 x
0996 x
0107 + j0768 1158 + i6050
+ jO107 x IO-1
from neural networks are also compared with those from numerical integration
In both Tables 1 and 2 neural network solutions are almost identical with numerical integration results However the neural network procedure is about 1000 times faster than evaluations based on direct numerical integration
7 Incorporation into cable parameter calculations
To incorporate the developments of this paper into the existing software for practical underground cable
parameter set evaluations requires software which implements the neural network functions as in eqns 6- 8 Outputs from the neural networks are transferred into the earth-return path section of the composite soft- ware system which provides cable parameter calcula- tions
Starting from existing software for cable parameter calculations one option is to embed the neural network software module in it Alternatively the neural net- work software can be seen as an additional module providing a critical part of overall parameter calcula- tions
IEE Pix-Gener Trunsm Distrib Vol 145 No 6 November I998 63 I
Table 2 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 105 106 50 1 o3 1 o5 106
1 o3 1 o5 106
50
Earth resistivity Qm Neural network Numerical integration
1 0504 x + j0349 x 0504 x + j0349 x
1 0107 x IO-rsquo+ j0503 x IO-rsquo 0107 x + j0503 x
1 0105 +jO166 0105 + j0166 1 0623 + j0372 0623 + j0372 20 0495 x + j0441 x IOrdquo 0497 x + j0445 x
20 20 0113 + j0391 0113 + j0391 20 1127 + j2175 1127 + j2175 100 0492 x + j0494 x IOW3 0501 x + j0495 x
100 0997 x + j0800 x IO-rsquo 0997 x + j0801 x IO-rsquo 100 0107 + j0503 0107 + j0503 100 1149 + $3402 1149 +~3402
Earth-return path mutual impedance Qm
0101 x IO-rsquo+ j0699 x IO-rsquo 0101 x IO-rsquo +j0699 x IO-rsquo
8 Conclusions
The outcome of the work reported in this paper is an array of neural networks that evaluates the infinite integrals in the expressions for underground cable earth-return path impedances The neural networks have been fully determined in terms of their topology and weighting coefficients They have immediate appli- cation in underground cable parameter evaluations Extensive testing has confirmed that the neural net- work solutions are of consistently high accuracy They are almost identical with the numerical integration results for a wide range of frequencies earth resistivity and cable configuration Computation by neural net- works is however about 1000 times faster than that by numerical integration The extensive neural network training that this requires can be avoided by using the set of universal weighting coefficients to which the research reported in this paper has led and which are held for general access and use at httpwwweeu- waeduauf-escl
9 Acknowledgments
The author wishes to thank the University of Western Australia for permission to publish this paper He also wishes to thank Professor WD Humpage for discus- sions relating to the developments of the paper and for his many contributions to its preparation
10 References
1 WEDEPOHL LM and WILCOX DJ lsquoTransient analysis of underground power-transmission systems - system-model and wave-propagation characteristicsrsquo Proc IEE 1973 120 (2) pp 253-260
2 SAAD O GABA G and GIROUX M lsquoA closed-form approximation for ground return impedance of underground cablesrsquo IEEE Trans 1996 PWD-11 (3) pp 1536-1545 DOMMEL HW lsquoEMTP Reference Manual Vol 3rsquo (Bonneville Power Administration 1986) SEMLYEN A lsquoDiscussion on rsquoOverhead line parameters from handbook formulas and computer programsrsquo by Dommel HWrsquo IEEE Trans 1985 PAS-104 (2) pp 366-372
5 NGUYEN TT lsquoEarth-return path impedances of underground cables Part 1 Numerical integration of infinite integralsrsquo Proc IEE ((5470C)) HECHT-NIELSEN R lsquoKolmogorovrsquos mapping neural network existence theoremrsquo Proceedings of 1987 IEEE international con- ference on Neural networks 1987 (IEEE Press) Vol 3 pp 11-13
3
4
6
632
7 HECHT-NIELSEN R lsquoTheory of the backpropagation neural networkrsquo Proceedings of the international joint conference on Neurul networks 1989 Vol 1 pp 593-605
8 DERI A TEVAN G SEMLYEN A and CASTANHEI- RA A lsquoThe complex ground return plane - A simplified model for homogeneous and multi-layer earth returnrsquo IEEE Trans 1981 PAS-100 (8) pp 3686-3693
9 CARSON JR lsquoWave propagation in overhead wires with ground returnrsquo Bell Syst Tech J 1926 5 pp 539-554
10 FLETCHER R lsquoA new approach to the variable metric algo- rithmrsquo Comput J 1970 13 pp 317-322
11 RUMELHART DE HINTON GE and WILLIAMS RG Learning internal representation by error propagationrsquo in lsquoDis-
tributed parallel processing explorations in the microstructure of cognition Vol 1rsquo (MIT Press Cambridge 1986) Chap 8 pp 3 1 8-36 1 ~~ ~ ~
12 KERSTEN WFJ lsquoSurge arresters for sheath protection in cross-bonded cable systemrsquo Proc IEE 1979 126 (12) pp 1255 1262
11 Appendices
11 I Gradient evaluations From eqn 12 the total error function is given by
E~ = C E ~ (15 P
where
The partial derivative of the total error function ET in eqn 15 with respect to weight wii between nodes i and j is formed from
(17) P
Individual partial derivative dEpawii in eqn 17 is given in the following based on backward error propaga- tions [l l]
If node j is in a hidden layer tijp in eqn 18 is evaluated recursively from
m
The summation in eqn 19 extends over all nodes ms that have connections from node j incident to them
IEE Proc-Gener Transm Disrrih Vol 145 No 6 November 1998
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633
As in Fig 2 the interconnection between two nodes j and i is a unidirectional one The interconnection from node j to node i carries the implication that the output from node j multiplied by a weighting coefficient wji is input to node i In a feedforward network only out- puts from nodes at a lower layer can be connected to nodes at a higher level
j I
a F 0 Wi
Fig2 Interconnection from node j to node i w weighting coefficient associated with interconnection from node j to node i
output of node i
Fig3 wjr weighting coefficient associated with interconnection from node j to node i
Inputs and output for node i
We now give the input and output relationships for any node i which is not in the input layer Each node i has a number of input connections to it as in Fig 3 If yj is the output of node j connected to node i and wji is the weight associated with that connection then the net input ui for node i is formed from the weighted sum
J
The summation in eqn 6 extends over all nodesjs that have input connections to node i The output y of node i is a nonlinear function of its net input U
(7) In eqn 7 denotes a nonlinear function and Oi is the threshold associated with node i A commonly used nonlinear continuous function for a node is the sigmoid function For node i
n
The numerical value of J in eqn 8 is in the range -1 - 1 The output yi of node i can then be connected to other nodes in the layers at a higher level than that for node i When node i is in the input layer its output is the same as the input The role of nodes in the input layer is that of connecting specified inputs to other nodes in the higher layers Fig 1 shows that each input node is connected to every node in the first hidden layer In general input nodes can be connected to any other node in a higher layer
In a feedforward network when its weights and thresholds are known values for outputs in the output layer can be evaluated once inputs are specified at the input layer Weights and thresholds are the parameters of the neural network
For a given neural network structure and its parame- ters the output ykp of node k in the output layer is a
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998
nonlinear function of the inputs to the input layer of the network We denote by xp the vector of network inputs The dimension of this vector sets the number of nodes in the input layer We use for the nonlinear rela- tionship between output ykp and vector xp
The mapping function gk derives from the inputoutput relationship for each node as in eqn 8 the weightings of connections of the network and the network struc- ture expressed in terms of the number of layers the number of nodes in them and the interconnections between nodes
Eqn 9 relates to one node in the output layer We now extend this expression We use yp for the vector of the outputs for all nodes in the output layer so that it is a vector of outputs for the complete network Retain- ing xp for the vector of inputs we use for the overall mapping which the network provides
Beginning with a selected network structure the step of synthesising a neural network to meet the requirements of a particular application involves finding the weights of all the connections and the thresholds for all the nodes (ei in eqn 8) so that the mapping which the net- work provides in eqn 10 matches that of the applica- tion for which the network is intended This is referred to as the training phase
In the actual system to which the neural network is to be applied there is a known vector of outputs for a given vector of inputs The training set is a collection of inputoutput vector pairs on which the training is to be based The training set should be large enough to ensure that the network when trained closely matches the requirements of its application
The weightings of the connections between nodes together with the threshold values for all nodes are found by first forming an error between outputs speci- fied from the application for given inputs and those given by the neural network in response to the same inputs
As in eqn 9 yk denotes the output of node k in the output layer as given by the response of the neural net- work to a given input vector We denote by dkp the out- put specified for the node from the requirements of the application The error for this node is then dk - ykp Using a quadratic form for minimisation purposes leads to the error function for all nodes in the output layer
Using eqn 11 we can form Ep for each and every case in the training set The output yk is formed for each case from the neural network response Output dkp is specified for each case from the application in which the network is to be used A total error function for all cases in the training set is then formed from
Er = C E p (12) P
In the present work the total error function E is minimised using the quasi-Newton method [lo] This appears to offer advantage in terms of the rate at which the minimisation sequence converges in compari- son with that of the standard backpropagation training algorithm based on the first-order gradient descent
629
method [l 11 The gradient evaluations which the quasi- Newton method requires are summarised in the Appen- dix (Section 1 11)
The threshold Oi associated with node i can be repre- sented equivalently by a bias node with constant input equal to 10 and the connection from the bias node to node i This approach is adopted in the present work The training then involves the finding of weighting coefficients only including those from the bias node to other nodes
By progressively increasing the number of hidden layers in a feedforward neural network and the number of nodes in them the complexity of mapping of finite dimension that can be achieved is progressively expanded [6 71
5 path impedances
The first stage in synthesising neural networks to repre- sent the nonlinear function M(hdu xjkdu) in eqn 5 is to form a training set The inputs to the neural net- works are hdu and xjkdu and the output is JG k) The integral in eqn 1 is integrated numerically for a wide range of hdu and xjkdu to form the training set using the procedure in [5] The range of hldu in the evalua- tion is from 0 to 6 From numerical evaluation it has been confirmed that JG k ) tends to zero when hdu is greater than about 60 irrespective of xjkdu Therefore there is no need to evaluate JG k ) for h77lu beyond 60 For each value of hdu the integral is evaluated for a range of xjkdu from 0 to 72 corresponding to the ratio xjklh (ie ratio of horizontal separation to mean depth of two-phase cables) in the range 0 - 12 There are 18 662 training cases in total These training cases need only be formed once
In principle a single neural network with two inputs hdu and xjkdu can be trained to represent the com- plex function M expressed in terms of its real and imaginary parts The outputs of the neural network are then the real and imaginary parts of M The neural net- work is a two-input two-output system
Alternatively the nonlinear function M can be repre- sented by two separate neural networks The first neu- ral network is trained to represent the real part of M and the second one the imaginary part of M There are then two two-input single-output neural networks Each neural network receives hdu and xikdu as inputs
In the present work the second approach is adopted in which separate neural networks represent the real and imaginary parts of function M Many experiments have been carried out which have confirmed that this approach gives better convergence in training
By experiment it has been found that by separating into two ranges of hdu better convergence in training was achieved One set of neural networks represents the function M for hdu in the range 0 - 2 and the xjkhJl ratio in the range 0 - 12 Another set of neural net- works represents the function M for hldu in the range 2 - 6 and xjkhIn ratio in the range 0 - 12 There are four neural networks (i) Neural network RL This neural network represents the real part of function M in the range 0 I hdu I 2 and I xjkh I 12 (ii) Neural network RH This neural network represents the real part of function M in the range 2 s hzu I 6 and 0 I xjklh I 12
Neural networks for evaluating earth-return
630
(iii) Neural network ZL This neural network represents the imaginary part of function M in the range 0 I hdu I 2 and 0 I xJkhn I 12 (iv) Neural network IH This neural network represents the imaginary part of function M in the range 2 I hlu I 6 and 0 I xlkhnl I 12 When hdu gt 6 the integral JG k ) is set to zero irre- spective of x du
Prior to training hdu and xJkdu are scaled down by a factor of 300 so that the inputs to neural networks have values less than 1 Since the output is well inside the range -1 - 1 there is no need for output scaling
The second stage is to determine the configuration of the neural networks the number of layers and the number of nodes in each layer The neural network structure is that of MFNN of the form in Fig 1 Including the bias node with a constant input of 10 there are three nodes in the input layer of each neural network The output layer of each neural network has one node The number of hidden layers and number of nodes in each hidden layer are determined by succes- sive trials and corrections to achieve low errors both in training and subsequent testing
After training and testing many MFNNs with differ- ent numbers of hidden layers and different numbers of nodes in each hidden layer the final configuration that has the minimum number of hidden layers and the minimum number of nodes in each hidden layer which achieve both low training errors and low test errors is that shown in Fig 4 Each of the four neural networks RL RH IL and IH in (i)-(iv) has the configuration of Fig 4 However their weighting coefficients are differ- ent
In total the neural network structure in Fig 4 has 36 nodes Including the bias node the input layer has three nodes The output layer has one node The first hidden layer has 20 nodes and the second hidden layer has 12 nodes The bias node has connections to all hid- den nodes and to the output node Each neural net- work has 325 weighting coefficients
The inputs x1 and x2 are given as
Jk
The scaling factor s in eqns 13 and 14 is set to 300 This ensures that the neural network inputs are well within the range [-1 11
6 Representative results
For the purpose of evaluations a three-phase 150kV cable [12] is used It is at a depth below ground of lm Individual phase cables are in horizontal formation and have a separation of 035m The remaining cable data [12] is summarised in the Appendix (Section 112)
Table 1 gives the earth-return path self impedances of each phase cable for a range of frequency from 50Hz to lMHz and earth resistivity in the range 1 ~
l00Qm The self impedances are first calculated using neural networks which have been trained in Section 5 They are then compared with the impedances evaluated using numerical integration
Table 2 gives the earth-return path mutual imped- ances between two adjacent phase cables The results
IEE Ptoc -Gener Trcinsm Drstrrh Vol 145 No 6 Noveinber 1998
output
Fig 4 bias node 3 has connections to all nodes in hidden layers and to output node x = ha30 x2 = ~~Uni30
Structure of neural networks for evuluuting the iizJinite integrul
Table 1 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 1 o5 106
1 o3 105 106
1 o3 1 o5 106
50
50
Earth resistivity Qm Neural network Numerical integration
Earth-return path self impedance Bm
1 1 1 1 20 20 20 20 100 100 100 100
0504 IO-^ + jo482 IO-^ 0107 x
0112 +jO429 0966 + j2816 0494 x +jO575 x
0101 x IO- + j0965 x
01 13 + j0656 1166 +j4815 0490 x
0993 x
0107 + j0768 1158 + i6050
+ j0768 x IO-
+ j0628 x IOrdquo + j0107 x IO-rsquo
0504 x + j0482 x
0107 x + j0768 x
0112 + j0429 0966 + j2816 0497 x +jO577 x
0101 x +jO964 x
01 13 + j0656 1166 +j4815 0501 x + j0628 x
0996 x
0107 + j0768 1158 + i6050
+ jO107 x IO-1
from neural networks are also compared with those from numerical integration
In both Tables 1 and 2 neural network solutions are almost identical with numerical integration results However the neural network procedure is about 1000 times faster than evaluations based on direct numerical integration
7 Incorporation into cable parameter calculations
To incorporate the developments of this paper into the existing software for practical underground cable
parameter set evaluations requires software which implements the neural network functions as in eqns 6- 8 Outputs from the neural networks are transferred into the earth-return path section of the composite soft- ware system which provides cable parameter calcula- tions
Starting from existing software for cable parameter calculations one option is to embed the neural network software module in it Alternatively the neural net- work software can be seen as an additional module providing a critical part of overall parameter calcula- tions
IEE Pix-Gener Trunsm Distrib Vol 145 No 6 November I998 63 I
Table 2 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 105 106 50 1 o3 1 o5 106
1 o3 1 o5 106
50
Earth resistivity Qm Neural network Numerical integration
1 0504 x + j0349 x 0504 x + j0349 x
1 0107 x IO-rsquo+ j0503 x IO-rsquo 0107 x + j0503 x
1 0105 +jO166 0105 + j0166 1 0623 + j0372 0623 + j0372 20 0495 x + j0441 x IOrdquo 0497 x + j0445 x
20 20 0113 + j0391 0113 + j0391 20 1127 + j2175 1127 + j2175 100 0492 x + j0494 x IOW3 0501 x + j0495 x
100 0997 x + j0800 x IO-rsquo 0997 x + j0801 x IO-rsquo 100 0107 + j0503 0107 + j0503 100 1149 + $3402 1149 +~3402
Earth-return path mutual impedance Qm
0101 x IO-rsquo+ j0699 x IO-rsquo 0101 x IO-rsquo +j0699 x IO-rsquo
8 Conclusions
The outcome of the work reported in this paper is an array of neural networks that evaluates the infinite integrals in the expressions for underground cable earth-return path impedances The neural networks have been fully determined in terms of their topology and weighting coefficients They have immediate appli- cation in underground cable parameter evaluations Extensive testing has confirmed that the neural net- work solutions are of consistently high accuracy They are almost identical with the numerical integration results for a wide range of frequencies earth resistivity and cable configuration Computation by neural net- works is however about 1000 times faster than that by numerical integration The extensive neural network training that this requires can be avoided by using the set of universal weighting coefficients to which the research reported in this paper has led and which are held for general access and use at httpwwweeu- waeduauf-escl
9 Acknowledgments
The author wishes to thank the University of Western Australia for permission to publish this paper He also wishes to thank Professor WD Humpage for discus- sions relating to the developments of the paper and for his many contributions to its preparation
10 References
1 WEDEPOHL LM and WILCOX DJ lsquoTransient analysis of underground power-transmission systems - system-model and wave-propagation characteristicsrsquo Proc IEE 1973 120 (2) pp 253-260
2 SAAD O GABA G and GIROUX M lsquoA closed-form approximation for ground return impedance of underground cablesrsquo IEEE Trans 1996 PWD-11 (3) pp 1536-1545 DOMMEL HW lsquoEMTP Reference Manual Vol 3rsquo (Bonneville Power Administration 1986) SEMLYEN A lsquoDiscussion on rsquoOverhead line parameters from handbook formulas and computer programsrsquo by Dommel HWrsquo IEEE Trans 1985 PAS-104 (2) pp 366-372
5 NGUYEN TT lsquoEarth-return path impedances of underground cables Part 1 Numerical integration of infinite integralsrsquo Proc IEE ((5470C)) HECHT-NIELSEN R lsquoKolmogorovrsquos mapping neural network existence theoremrsquo Proceedings of 1987 IEEE international con- ference on Neural networks 1987 (IEEE Press) Vol 3 pp 11-13
3
4
6
632
7 HECHT-NIELSEN R lsquoTheory of the backpropagation neural networkrsquo Proceedings of the international joint conference on Neurul networks 1989 Vol 1 pp 593-605
8 DERI A TEVAN G SEMLYEN A and CASTANHEI- RA A lsquoThe complex ground return plane - A simplified model for homogeneous and multi-layer earth returnrsquo IEEE Trans 1981 PAS-100 (8) pp 3686-3693
9 CARSON JR lsquoWave propagation in overhead wires with ground returnrsquo Bell Syst Tech J 1926 5 pp 539-554
10 FLETCHER R lsquoA new approach to the variable metric algo- rithmrsquo Comput J 1970 13 pp 317-322
11 RUMELHART DE HINTON GE and WILLIAMS RG Learning internal representation by error propagationrsquo in lsquoDis-
tributed parallel processing explorations in the microstructure of cognition Vol 1rsquo (MIT Press Cambridge 1986) Chap 8 pp 3 1 8-36 1 ~~ ~ ~
12 KERSTEN WFJ lsquoSurge arresters for sheath protection in cross-bonded cable systemrsquo Proc IEE 1979 126 (12) pp 1255 1262
11 Appendices
11 I Gradient evaluations From eqn 12 the total error function is given by
E~ = C E ~ (15 P
where
The partial derivative of the total error function ET in eqn 15 with respect to weight wii between nodes i and j is formed from
(17) P
Individual partial derivative dEpawii in eqn 17 is given in the following based on backward error propaga- tions [l l]
If node j is in a hidden layer tijp in eqn 18 is evaluated recursively from
m
The summation in eqn 19 extends over all nodes ms that have connections from node j incident to them
IEE Proc-Gener Transm Disrrih Vol 145 No 6 November 1998
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633
method [l 11 The gradient evaluations which the quasi- Newton method requires are summarised in the Appen- dix (Section 1 11)
The threshold Oi associated with node i can be repre- sented equivalently by a bias node with constant input equal to 10 and the connection from the bias node to node i This approach is adopted in the present work The training then involves the finding of weighting coefficients only including those from the bias node to other nodes
By progressively increasing the number of hidden layers in a feedforward neural network and the number of nodes in them the complexity of mapping of finite dimension that can be achieved is progressively expanded [6 71
5 path impedances
The first stage in synthesising neural networks to repre- sent the nonlinear function M(hdu xjkdu) in eqn 5 is to form a training set The inputs to the neural net- works are hdu and xjkdu and the output is JG k) The integral in eqn 1 is integrated numerically for a wide range of hdu and xjkdu to form the training set using the procedure in [5] The range of hldu in the evalua- tion is from 0 to 6 From numerical evaluation it has been confirmed that JG k ) tends to zero when hdu is greater than about 60 irrespective of xjkdu Therefore there is no need to evaluate JG k ) for h77lu beyond 60 For each value of hdu the integral is evaluated for a range of xjkdu from 0 to 72 corresponding to the ratio xjklh (ie ratio of horizontal separation to mean depth of two-phase cables) in the range 0 - 12 There are 18 662 training cases in total These training cases need only be formed once
In principle a single neural network with two inputs hdu and xjkdu can be trained to represent the com- plex function M expressed in terms of its real and imaginary parts The outputs of the neural network are then the real and imaginary parts of M The neural net- work is a two-input two-output system
Alternatively the nonlinear function M can be repre- sented by two separate neural networks The first neu- ral network is trained to represent the real part of M and the second one the imaginary part of M There are then two two-input single-output neural networks Each neural network receives hdu and xikdu as inputs
In the present work the second approach is adopted in which separate neural networks represent the real and imaginary parts of function M Many experiments have been carried out which have confirmed that this approach gives better convergence in training
By experiment it has been found that by separating into two ranges of hdu better convergence in training was achieved One set of neural networks represents the function M for hdu in the range 0 - 2 and the xjkhJl ratio in the range 0 - 12 Another set of neural net- works represents the function M for hldu in the range 2 - 6 and xjkhIn ratio in the range 0 - 12 There are four neural networks (i) Neural network RL This neural network represents the real part of function M in the range 0 I hdu I 2 and I xjkh I 12 (ii) Neural network RH This neural network represents the real part of function M in the range 2 s hzu I 6 and 0 I xjklh I 12
Neural networks for evaluating earth-return
630
(iii) Neural network ZL This neural network represents the imaginary part of function M in the range 0 I hdu I 2 and 0 I xJkhn I 12 (iv) Neural network IH This neural network represents the imaginary part of function M in the range 2 I hlu I 6 and 0 I xlkhnl I 12 When hdu gt 6 the integral JG k ) is set to zero irre- spective of x du
Prior to training hdu and xJkdu are scaled down by a factor of 300 so that the inputs to neural networks have values less than 1 Since the output is well inside the range -1 - 1 there is no need for output scaling
The second stage is to determine the configuration of the neural networks the number of layers and the number of nodes in each layer The neural network structure is that of MFNN of the form in Fig 1 Including the bias node with a constant input of 10 there are three nodes in the input layer of each neural network The output layer of each neural network has one node The number of hidden layers and number of nodes in each hidden layer are determined by succes- sive trials and corrections to achieve low errors both in training and subsequent testing
After training and testing many MFNNs with differ- ent numbers of hidden layers and different numbers of nodes in each hidden layer the final configuration that has the minimum number of hidden layers and the minimum number of nodes in each hidden layer which achieve both low training errors and low test errors is that shown in Fig 4 Each of the four neural networks RL RH IL and IH in (i)-(iv) has the configuration of Fig 4 However their weighting coefficients are differ- ent
In total the neural network structure in Fig 4 has 36 nodes Including the bias node the input layer has three nodes The output layer has one node The first hidden layer has 20 nodes and the second hidden layer has 12 nodes The bias node has connections to all hid- den nodes and to the output node Each neural net- work has 325 weighting coefficients
The inputs x1 and x2 are given as
Jk
The scaling factor s in eqns 13 and 14 is set to 300 This ensures that the neural network inputs are well within the range [-1 11
6 Representative results
For the purpose of evaluations a three-phase 150kV cable [12] is used It is at a depth below ground of lm Individual phase cables are in horizontal formation and have a separation of 035m The remaining cable data [12] is summarised in the Appendix (Section 112)
Table 1 gives the earth-return path self impedances of each phase cable for a range of frequency from 50Hz to lMHz and earth resistivity in the range 1 ~
l00Qm The self impedances are first calculated using neural networks which have been trained in Section 5 They are then compared with the impedances evaluated using numerical integration
Table 2 gives the earth-return path mutual imped- ances between two adjacent phase cables The results
IEE Ptoc -Gener Trcinsm Drstrrh Vol 145 No 6 Noveinber 1998
output
Fig 4 bias node 3 has connections to all nodes in hidden layers and to output node x = ha30 x2 = ~~Uni30
Structure of neural networks for evuluuting the iizJinite integrul
Table 1 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 1 o5 106
1 o3 105 106
1 o3 1 o5 106
50
50
Earth resistivity Qm Neural network Numerical integration
Earth-return path self impedance Bm
1 1 1 1 20 20 20 20 100 100 100 100
0504 IO-^ + jo482 IO-^ 0107 x
0112 +jO429 0966 + j2816 0494 x +jO575 x
0101 x IO- + j0965 x
01 13 + j0656 1166 +j4815 0490 x
0993 x
0107 + j0768 1158 + i6050
+ j0768 x IO-
+ j0628 x IOrdquo + j0107 x IO-rsquo
0504 x + j0482 x
0107 x + j0768 x
0112 + j0429 0966 + j2816 0497 x +jO577 x
0101 x +jO964 x
01 13 + j0656 1166 +j4815 0501 x + j0628 x
0996 x
0107 + j0768 1158 + i6050
+ jO107 x IO-1
from neural networks are also compared with those from numerical integration
In both Tables 1 and 2 neural network solutions are almost identical with numerical integration results However the neural network procedure is about 1000 times faster than evaluations based on direct numerical integration
7 Incorporation into cable parameter calculations
To incorporate the developments of this paper into the existing software for practical underground cable
parameter set evaluations requires software which implements the neural network functions as in eqns 6- 8 Outputs from the neural networks are transferred into the earth-return path section of the composite soft- ware system which provides cable parameter calcula- tions
Starting from existing software for cable parameter calculations one option is to embed the neural network software module in it Alternatively the neural net- work software can be seen as an additional module providing a critical part of overall parameter calcula- tions
IEE Pix-Gener Trunsm Distrib Vol 145 No 6 November I998 63 I
Table 2 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 105 106 50 1 o3 1 o5 106
1 o3 1 o5 106
50
Earth resistivity Qm Neural network Numerical integration
1 0504 x + j0349 x 0504 x + j0349 x
1 0107 x IO-rsquo+ j0503 x IO-rsquo 0107 x + j0503 x
1 0105 +jO166 0105 + j0166 1 0623 + j0372 0623 + j0372 20 0495 x + j0441 x IOrdquo 0497 x + j0445 x
20 20 0113 + j0391 0113 + j0391 20 1127 + j2175 1127 + j2175 100 0492 x + j0494 x IOW3 0501 x + j0495 x
100 0997 x + j0800 x IO-rsquo 0997 x + j0801 x IO-rsquo 100 0107 + j0503 0107 + j0503 100 1149 + $3402 1149 +~3402
Earth-return path mutual impedance Qm
0101 x IO-rsquo+ j0699 x IO-rsquo 0101 x IO-rsquo +j0699 x IO-rsquo
8 Conclusions
The outcome of the work reported in this paper is an array of neural networks that evaluates the infinite integrals in the expressions for underground cable earth-return path impedances The neural networks have been fully determined in terms of their topology and weighting coefficients They have immediate appli- cation in underground cable parameter evaluations Extensive testing has confirmed that the neural net- work solutions are of consistently high accuracy They are almost identical with the numerical integration results for a wide range of frequencies earth resistivity and cable configuration Computation by neural net- works is however about 1000 times faster than that by numerical integration The extensive neural network training that this requires can be avoided by using the set of universal weighting coefficients to which the research reported in this paper has led and which are held for general access and use at httpwwweeu- waeduauf-escl
9 Acknowledgments
The author wishes to thank the University of Western Australia for permission to publish this paper He also wishes to thank Professor WD Humpage for discus- sions relating to the developments of the paper and for his many contributions to its preparation
10 References
1 WEDEPOHL LM and WILCOX DJ lsquoTransient analysis of underground power-transmission systems - system-model and wave-propagation characteristicsrsquo Proc IEE 1973 120 (2) pp 253-260
2 SAAD O GABA G and GIROUX M lsquoA closed-form approximation for ground return impedance of underground cablesrsquo IEEE Trans 1996 PWD-11 (3) pp 1536-1545 DOMMEL HW lsquoEMTP Reference Manual Vol 3rsquo (Bonneville Power Administration 1986) SEMLYEN A lsquoDiscussion on rsquoOverhead line parameters from handbook formulas and computer programsrsquo by Dommel HWrsquo IEEE Trans 1985 PAS-104 (2) pp 366-372
5 NGUYEN TT lsquoEarth-return path impedances of underground cables Part 1 Numerical integration of infinite integralsrsquo Proc IEE ((5470C)) HECHT-NIELSEN R lsquoKolmogorovrsquos mapping neural network existence theoremrsquo Proceedings of 1987 IEEE international con- ference on Neural networks 1987 (IEEE Press) Vol 3 pp 11-13
3
4
6
632
7 HECHT-NIELSEN R lsquoTheory of the backpropagation neural networkrsquo Proceedings of the international joint conference on Neurul networks 1989 Vol 1 pp 593-605
8 DERI A TEVAN G SEMLYEN A and CASTANHEI- RA A lsquoThe complex ground return plane - A simplified model for homogeneous and multi-layer earth returnrsquo IEEE Trans 1981 PAS-100 (8) pp 3686-3693
9 CARSON JR lsquoWave propagation in overhead wires with ground returnrsquo Bell Syst Tech J 1926 5 pp 539-554
10 FLETCHER R lsquoA new approach to the variable metric algo- rithmrsquo Comput J 1970 13 pp 317-322
11 RUMELHART DE HINTON GE and WILLIAMS RG Learning internal representation by error propagationrsquo in lsquoDis-
tributed parallel processing explorations in the microstructure of cognition Vol 1rsquo (MIT Press Cambridge 1986) Chap 8 pp 3 1 8-36 1 ~~ ~ ~
12 KERSTEN WFJ lsquoSurge arresters for sheath protection in cross-bonded cable systemrsquo Proc IEE 1979 126 (12) pp 1255 1262
11 Appendices
11 I Gradient evaluations From eqn 12 the total error function is given by
E~ = C E ~ (15 P
where
The partial derivative of the total error function ET in eqn 15 with respect to weight wii between nodes i and j is formed from
(17) P
Individual partial derivative dEpawii in eqn 17 is given in the following based on backward error propaga- tions [l l]
If node j is in a hidden layer tijp in eqn 18 is evaluated recursively from
m
The summation in eqn 19 extends over all nodes ms that have connections from node j incident to them
IEE Proc-Gener Transm Disrrih Vol 145 No 6 November 1998
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633
output
Fig 4 bias node 3 has connections to all nodes in hidden layers and to output node x = ha30 x2 = ~~Uni30
Structure of neural networks for evuluuting the iizJinite integrul
Table 1 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 1 o5 106
1 o3 105 106
1 o3 1 o5 106
50
50
Earth resistivity Qm Neural network Numerical integration
Earth-return path self impedance Bm
1 1 1 1 20 20 20 20 100 100 100 100
0504 IO-^ + jo482 IO-^ 0107 x
0112 +jO429 0966 + j2816 0494 x +jO575 x
0101 x IO- + j0965 x
01 13 + j0656 1166 +j4815 0490 x
0993 x
0107 + j0768 1158 + i6050
+ j0768 x IO-
+ j0628 x IOrdquo + j0107 x IO-rsquo
0504 x + j0482 x
0107 x + j0768 x
0112 + j0429 0966 + j2816 0497 x +jO577 x
0101 x +jO964 x
01 13 + j0656 1166 +j4815 0501 x + j0628 x
0996 x
0107 + j0768 1158 + i6050
+ jO107 x IO-1
from neural networks are also compared with those from numerical integration
In both Tables 1 and 2 neural network solutions are almost identical with numerical integration results However the neural network procedure is about 1000 times faster than evaluations based on direct numerical integration
7 Incorporation into cable parameter calculations
To incorporate the developments of this paper into the existing software for practical underground cable
parameter set evaluations requires software which implements the neural network functions as in eqns 6- 8 Outputs from the neural networks are transferred into the earth-return path section of the composite soft- ware system which provides cable parameter calcula- tions
Starting from existing software for cable parameter calculations one option is to embed the neural network software module in it Alternatively the neural net- work software can be seen as an additional module providing a critical part of overall parameter calcula- tions
IEE Pix-Gener Trunsm Distrib Vol 145 No 6 November I998 63 I
Table 2 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 105 106 50 1 o3 1 o5 106
1 o3 1 o5 106
50
Earth resistivity Qm Neural network Numerical integration
1 0504 x + j0349 x 0504 x + j0349 x
1 0107 x IO-rsquo+ j0503 x IO-rsquo 0107 x + j0503 x
1 0105 +jO166 0105 + j0166 1 0623 + j0372 0623 + j0372 20 0495 x + j0441 x IOrdquo 0497 x + j0445 x
20 20 0113 + j0391 0113 + j0391 20 1127 + j2175 1127 + j2175 100 0492 x + j0494 x IOW3 0501 x + j0495 x
100 0997 x + j0800 x IO-rsquo 0997 x + j0801 x IO-rsquo 100 0107 + j0503 0107 + j0503 100 1149 + $3402 1149 +~3402
Earth-return path mutual impedance Qm
0101 x IO-rsquo+ j0699 x IO-rsquo 0101 x IO-rsquo +j0699 x IO-rsquo
8 Conclusions
The outcome of the work reported in this paper is an array of neural networks that evaluates the infinite integrals in the expressions for underground cable earth-return path impedances The neural networks have been fully determined in terms of their topology and weighting coefficients They have immediate appli- cation in underground cable parameter evaluations Extensive testing has confirmed that the neural net- work solutions are of consistently high accuracy They are almost identical with the numerical integration results for a wide range of frequencies earth resistivity and cable configuration Computation by neural net- works is however about 1000 times faster than that by numerical integration The extensive neural network training that this requires can be avoided by using the set of universal weighting coefficients to which the research reported in this paper has led and which are held for general access and use at httpwwweeu- waeduauf-escl
9 Acknowledgments
The author wishes to thank the University of Western Australia for permission to publish this paper He also wishes to thank Professor WD Humpage for discus- sions relating to the developments of the paper and for his many contributions to its preparation
10 References
1 WEDEPOHL LM and WILCOX DJ lsquoTransient analysis of underground power-transmission systems - system-model and wave-propagation characteristicsrsquo Proc IEE 1973 120 (2) pp 253-260
2 SAAD O GABA G and GIROUX M lsquoA closed-form approximation for ground return impedance of underground cablesrsquo IEEE Trans 1996 PWD-11 (3) pp 1536-1545 DOMMEL HW lsquoEMTP Reference Manual Vol 3rsquo (Bonneville Power Administration 1986) SEMLYEN A lsquoDiscussion on rsquoOverhead line parameters from handbook formulas and computer programsrsquo by Dommel HWrsquo IEEE Trans 1985 PAS-104 (2) pp 366-372
5 NGUYEN TT lsquoEarth-return path impedances of underground cables Part 1 Numerical integration of infinite integralsrsquo Proc IEE ((5470C)) HECHT-NIELSEN R lsquoKolmogorovrsquos mapping neural network existence theoremrsquo Proceedings of 1987 IEEE international con- ference on Neural networks 1987 (IEEE Press) Vol 3 pp 11-13
3
4
6
632
7 HECHT-NIELSEN R lsquoTheory of the backpropagation neural networkrsquo Proceedings of the international joint conference on Neurul networks 1989 Vol 1 pp 593-605
8 DERI A TEVAN G SEMLYEN A and CASTANHEI- RA A lsquoThe complex ground return plane - A simplified model for homogeneous and multi-layer earth returnrsquo IEEE Trans 1981 PAS-100 (8) pp 3686-3693
9 CARSON JR lsquoWave propagation in overhead wires with ground returnrsquo Bell Syst Tech J 1926 5 pp 539-554
10 FLETCHER R lsquoA new approach to the variable metric algo- rithmrsquo Comput J 1970 13 pp 317-322
11 RUMELHART DE HINTON GE and WILLIAMS RG Learning internal representation by error propagationrsquo in lsquoDis-
tributed parallel processing explorations in the microstructure of cognition Vol 1rsquo (MIT Press Cambridge 1986) Chap 8 pp 3 1 8-36 1 ~~ ~ ~
12 KERSTEN WFJ lsquoSurge arresters for sheath protection in cross-bonded cable systemrsquo Proc IEE 1979 126 (12) pp 1255 1262
11 Appendices
11 I Gradient evaluations From eqn 12 the total error function is given by
E~ = C E ~ (15 P
where
The partial derivative of the total error function ET in eqn 15 with respect to weight wii between nodes i and j is formed from
(17) P
Individual partial derivative dEpawii in eqn 17 is given in the following based on backward error propaga- tions [l l]
If node j is in a hidden layer tijp in eqn 18 is evaluated recursively from
m
The summation in eqn 19 extends over all nodes ms that have connections from node j incident to them
IEE Proc-Gener Transm Disrrih Vol 145 No 6 November 1998
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633
Table 2 Comparison between neural network and numerical integration methods
Frequency Hz
50 1 o3 105 106 50 1 o3 1 o5 106
1 o3 1 o5 106
50
Earth resistivity Qm Neural network Numerical integration
1 0504 x + j0349 x 0504 x + j0349 x
1 0107 x IO-rsquo+ j0503 x IO-rsquo 0107 x + j0503 x
1 0105 +jO166 0105 + j0166 1 0623 + j0372 0623 + j0372 20 0495 x + j0441 x IOrdquo 0497 x + j0445 x
20 20 0113 + j0391 0113 + j0391 20 1127 + j2175 1127 + j2175 100 0492 x + j0494 x IOW3 0501 x + j0495 x
100 0997 x + j0800 x IO-rsquo 0997 x + j0801 x IO-rsquo 100 0107 + j0503 0107 + j0503 100 1149 + $3402 1149 +~3402
Earth-return path mutual impedance Qm
0101 x IO-rsquo+ j0699 x IO-rsquo 0101 x IO-rsquo +j0699 x IO-rsquo
8 Conclusions
The outcome of the work reported in this paper is an array of neural networks that evaluates the infinite integrals in the expressions for underground cable earth-return path impedances The neural networks have been fully determined in terms of their topology and weighting coefficients They have immediate appli- cation in underground cable parameter evaluations Extensive testing has confirmed that the neural net- work solutions are of consistently high accuracy They are almost identical with the numerical integration results for a wide range of frequencies earth resistivity and cable configuration Computation by neural net- works is however about 1000 times faster than that by numerical integration The extensive neural network training that this requires can be avoided by using the set of universal weighting coefficients to which the research reported in this paper has led and which are held for general access and use at httpwwweeu- waeduauf-escl
9 Acknowledgments
The author wishes to thank the University of Western Australia for permission to publish this paper He also wishes to thank Professor WD Humpage for discus- sions relating to the developments of the paper and for his many contributions to its preparation
10 References
1 WEDEPOHL LM and WILCOX DJ lsquoTransient analysis of underground power-transmission systems - system-model and wave-propagation characteristicsrsquo Proc IEE 1973 120 (2) pp 253-260
2 SAAD O GABA G and GIROUX M lsquoA closed-form approximation for ground return impedance of underground cablesrsquo IEEE Trans 1996 PWD-11 (3) pp 1536-1545 DOMMEL HW lsquoEMTP Reference Manual Vol 3rsquo (Bonneville Power Administration 1986) SEMLYEN A lsquoDiscussion on rsquoOverhead line parameters from handbook formulas and computer programsrsquo by Dommel HWrsquo IEEE Trans 1985 PAS-104 (2) pp 366-372
5 NGUYEN TT lsquoEarth-return path impedances of underground cables Part 1 Numerical integration of infinite integralsrsquo Proc IEE ((5470C)) HECHT-NIELSEN R lsquoKolmogorovrsquos mapping neural network existence theoremrsquo Proceedings of 1987 IEEE international con- ference on Neural networks 1987 (IEEE Press) Vol 3 pp 11-13
3
4
6
632
7 HECHT-NIELSEN R lsquoTheory of the backpropagation neural networkrsquo Proceedings of the international joint conference on Neurul networks 1989 Vol 1 pp 593-605
8 DERI A TEVAN G SEMLYEN A and CASTANHEI- RA A lsquoThe complex ground return plane - A simplified model for homogeneous and multi-layer earth returnrsquo IEEE Trans 1981 PAS-100 (8) pp 3686-3693
9 CARSON JR lsquoWave propagation in overhead wires with ground returnrsquo Bell Syst Tech J 1926 5 pp 539-554
10 FLETCHER R lsquoA new approach to the variable metric algo- rithmrsquo Comput J 1970 13 pp 317-322
11 RUMELHART DE HINTON GE and WILLIAMS RG Learning internal representation by error propagationrsquo in lsquoDis-
tributed parallel processing explorations in the microstructure of cognition Vol 1rsquo (MIT Press Cambridge 1986) Chap 8 pp 3 1 8-36 1 ~~ ~ ~
12 KERSTEN WFJ lsquoSurge arresters for sheath protection in cross-bonded cable systemrsquo Proc IEE 1979 126 (12) pp 1255 1262
11 Appendices
11 I Gradient evaluations From eqn 12 the total error function is given by
E~ = C E ~ (15 P
where
The partial derivative of the total error function ET in eqn 15 with respect to weight wii between nodes i and j is formed from
(17) P
Individual partial derivative dEpawii in eqn 17 is given in the following based on backward error propaga- tions [l l]
If node j is in a hidden layer tijp in eqn 18 is evaluated recursively from
m
The summation in eqn 19 extends over all nodes ms that have connections from node j incident to them
IEE Proc-Gener Transm Disrrih Vol 145 No 6 November 1998
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633
In the derivation of eqn 19 it has been assumed that each node apart from nodes in the input layer has a nonlinear processing function given in eqn 8
If node j is in the output layer Sjp is given directly by
Sip is found recursively starting from nodes in the out- put layer
Sip in eqn 20 represents a measure of error for node j in the output layer it is derived from the difference between the required or specified output djP and the neural network output yip When node j is in a hidden layer 8 is evaluated recursively as given in eqn 19 based on amps for nodes m in higher layers From this we can interpret that measures of error for nodes in the output layer have propagated back to those in hidden layers Gradient evaluations which are required for neural network training are based on these lsquoerrorsrsquo 4 s as shown in eqn 18 For this reason the method is often referred to as backward error propagation
The outputs of nodes in the hidden layers and in the output layer which are required for the evaluations of
elements of the gradient vector are calculated from the input vector xp and the most recently available neural network parameters using eqns 6-8
In the feedforward structure we start the evaluation of outputs of nodes in the first hidden layer Outputs of nodes in the second hidden layer are calculated next Evaluations proceed in this way until outputs for nodes in the last layer (ie the output layer) are calculated
112 Underground cable data
Table 3 Core and sheath data
Core radius
Radius of main insulation
Sheath outer radius
Radius of outer insulation
Core cross section
Resistivity of core (copper)
Resistivity of sheath (lead)
Permittivity of main insulation
Permittivity of outer insulation
19cm
345cm
385cm
425cm
80cm2
17 x 1O4C2m
21 10-~52m
45 x ampO
35 X EO
IEE Proc-Gener Transm Distrib Vol 145 No 6 November 1998 633