
Extension neural network for power transformer incipient fault diagnosis

M.-H. Wang

Abstract: An extension neural network (ENN) based diagnosis system for power transformer incipient fault detection is presented. The proposed ENN is a combination of extension theory and a neural network. Using an innovative extension distance (ED) instead of the Euclidean distance to measure the similarity between tested data and the cluster centre, it can perform supervised learning and achieve shorter learning times than traditional neural networks. Moreover, the ENN has the advantages of high accuracy and error tolerance. Thus, the incipient faults of power transformers can be diagnosed quickly and accurately. To demonstrate the effectiveness of the proposed method, 40 sets of field DGA data from power transformers in Australia, China, and Taiwan have been tested. The test results confirm that the proposed method gives promising results.

1 Introduction

Large power transformers are essential equipment in power systems. Failure of a power transformer may cause a break in the power supply and a loss of profits. It is therefore vital to detect incipient failures in power transformers as early as possible, so that they can be switched out of service safely and the reliability of power systems improved. For this reason, monitoring and diagnostic techniques for power transformers have attracted considerable attention for many years [1-4].

Most large power transformers use mineral oil as a dielectric fluid and coolant. A transformer that has been in service for a long time is subject to electrical and thermal stresses, which may form by-product gases that indicate the type of incipient failure. Dissolved gas analysis (DGA) [1, 3, 5] is a common practice in the incipient fault diagnosis of power transformers. The insulation oil of a transformer is periodically sampled and tested to obtain the constituent gases in the oil, which are formed by the breakdown of the insulating materials inside. Study results indicate that corona, overheating and arcing are the three main causes of insulation degradation in a transformer [2-5]. The fault-related gases include hydrogen (H2), methane (CH4), acetylene (C2H2), ethane (C2H6), carbon monoxide (CO), and carbon dioxide (CO2).

In the past, various fault diagnosis techniques have been proposed, including the conventional key gas method, the ratio method [1, 3] and, more recently, expert systems [4, 6], fuzzy logic [7-9], neural networks [10] and combinations of fuzzy logic and AI approaches, which have given promising results [11-13]. The conventional key gas or ratio method is based on experience in fault diagnosis using DGA data, which may vary from utility to utility owing to the heuristic nature of the method and the fact that no general mathematical formulation can be utilised.

© IEE, 2003. IEE Proceedings online no. 20030901. doi: 10.1049/ip-gtd:20030901. Paper received 6th November 2002. Online publishing date: 16 October 2003. The author is with the Department of Electrical Engineering, National Chin-Yi Institute of Technology, 35, 215 Lane, Sec. 1, Chung Shan Road, Taiping, Taichung, Taiwan, ROC.

The expert system and fuzzy logic approaches involve human expertise and have been successfully applied in this field. However, there are some intrinsic shortcomings, such as the difficulty of acquiring knowledge and of maintaining the database. Traditional neural networks (NNs) can acquire experience directly from the training data and overcome some of the shortcomings of the expert system. However, traditional neural networks present strategic difficulties in deciding on the number of neurons in the hidden layers and are time-consuming to train [14].

To overcome the drawbacks of traditional neural networks, a new type of neural network is proposed in this paper for incipient fault diagnosis of power transformers. The proposed ENN uses a combination of extension theory [15] and neural networks. In extension theory, the extended correlation function can be used to calculate the membership grade between the tested data and a cluster domain, which resembles the membership function of fuzzy theory. Extension theory has given promising results in many fields [15-17]. In this paper, the proposed ENN uses a new extension distance (ED) instead of the Euclidean distance to measure the similarity between tested data and the cluster domain. It can quickly and reliably learn to categorise input patterns and permits adaptive processes for significant new information. To demonstrate the effectiveness of the proposed method, 40 sets of field DGA data from power transformers in Australia [9], China [10], and Taiwan have been tested. Results of the studied cases show that the proposed ENN-based diagnosis method is suitable as a practical tool for power transformer incipient fault detection.

2 Outline of extension theory

There exist certain problems that cannot be solved directly from the given conditions, but which may become easier or solvable through some proper transformation. For example, the Laplace transformation is one such commonly used technique in engineering fields, and the concept of fuzzy sets is a generalisation of well-known standard sets that extends their fields of application. The extension set extends the fuzzy set from [0, 1] to (-∞, ∞) [15]. As a result, we may define a set that includes any data in the domain. Extension set theory assigns a membership grade to points, with the convention that grades below -1 apply to points that definitely cannot be in the set, grades between -1 and 0 are for points that are apparently outside the set but could still become members of it, and grades above 0 denote degrees of membership in the set.

2.1 Matter-element theory

Defining the name of a matter as N, one of the characteristics of the matter as c, and the value of c as v, a matter-element in extension theory can be described as follows [15-17]:

R = (N, c, v)    (1)

where N, c, and v are called the three fundamental elements of the matter-element. For example, R = (John, Height, 180 cm) can be used to state that John's height is 180 cm. Assuming R = (N, C, V) to be a multidimensional matter-element, C = [c_1, c_2, ..., c_n] to be a characteristic vector and V = [v_1, v_2, ..., v_n] to be the corresponding value vector of C, then the multidimensional matter-element is defined as:

R = [N, c_1, v_1; c_2, v_2; ...; c_n, v_n] = [R_1; R_2; ...; R_n]    (2)

where R_i = (N, c_i, v_i) (i = 1, 2, ..., n) is defined as the sub-matter-element of R.
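As an aside for readers who prefer code, a matter-element is simply a named record of characteristics and values. The following Python sketch only illustrates (1) and (2); the class and field names are not from the paper, and the fault-type ranges shown are made up for the example rather than taken from Table 3.

    from dataclasses import dataclass

    @dataclass
    class MatterElement:
        """A multidimensional matter-element R = (N, C, V), as in (1) and (2)."""
        name: str               # N: the name of the matter
        characteristics: list   # C = [c1, c2, ..., cn]
        values: list            # V = [v1, v2, ..., vn]; single values or ranges <w, u>

    # One-dimensional example from the text: John's height is 180 cm.
    r_john = MatterElement("John", ["Height"], [180.0])

    # A fault-type matter-element whose values are ranges (illustrative numbers only).
    r_fault = MatterElement("Thermal fault",
                            ["C2H2/C2H4", "CH4/H2", "C2H4/C2H6"],
                            [(0.0, 0.1), (1.0, 3.0), (1.0, 3.0)])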

2.2 Summary of extension set theory

2.2.1 Definition of extension set: Let U be a space of objects and x a generic element of U. Then an extension set A in U is defined as a set of ordered pairs as follows:

A = {(x, y) | x ∈ U, y = K(x) ∈ (-∞, ∞)}    (3)

where y = K(x) is called the correlation function for extension set A [15]. K(x) maps each element of U to a membership grade between -∞ and ∞. An extension set A on U can be denoted by:

A = A+ ∪ A0 ∪ A-    (4)

where

A+ = {x | x ∈ U, K(x) ≥ 0}    (5)

A0 = {x | x ∈ U, -1 ≤ K(x) ≤ 0}    (6)

A- = {x | x ∈ U, K(x) < 0}    (7)

In (5) and (6), A+ is called the positive domain of A and describes the degree to which x belongs to the set; A- is called the negative domain of A and describes the degree to which x does not belong to the set; A0 is called the extended domain, which means that the element x still has a chance to become part of the set if conditions change. Where K(x) < -1, the element x has no chance of belonging to the set.

2.2.2 Primitively extended correlation function: Setting X0 = <w, u>, X = <r, s> and X0 ⊂ X, the extended correlation function can be defined as follows:

K(x) = ρ(x, X0) / (ρ(x, X) - ρ(x, X0))    (8)

where

ρ(x, X0) = |x - (w + u)/2| - (u - w)/2    (9)

ρ(x, X) = |x - (r + s)/2| - (s - r)/2    (10)

The correlation function can be used to calculate the membership grade between x and X0. The extended membership function is shown in Fig. 1. When K(x) ≥ 0, it indicates the degree to which x belongs to X0; when K(x) < 0, it describes the degree to which x does not belong to X0.

Fig. 1  Extended membership function
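As a concrete illustration of (8)-(10), the short Python sketch below evaluates the extended correlation function for a one-dimensional classical domain X0 = <w, u> inside a neighbourhood domain X = <r, s>. The function names, the zero-denominator guard and the example intervals are illustrative assumptions, not part of the paper.

    def rho(x, left, right):
        """Distance between the point x and the interval <left, right>, as in (9) and (10)."""
        return abs(x - (left + right) / 2.0) - (right - left) / 2.0

    def correlation(x, w, u, r, s):
        """Extended correlation function K(x) of (8) for X0 = <w, u> contained in X = <r, s>."""
        rho_x0 = rho(x, w, u)   # distance to the classical domain X0
        rho_x = rho(x, r, s)    # distance to the neighbourhood domain X
        if rho_x == rho_x0:     # guard against a zero denominator
            return 0.0 if rho_x0 <= 0 else float("-inf")
        return rho_x0 / (rho_x - rho_x0)

    # Ages 12-22 define 'youngster' (classical domain) inside the wider domain 0-80.
    print(correlation(18, 12, 22, 0, 80))   # inside X0  -> grade >= 0
    print(correlation(30, 12, 22, 0, 80))   # outside X0 -> grade between -1 and 0
    print(correlation(200, 12, 22, 0, 80))  # far outside -> grade below -1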

3 The proposed extension neural network

There are some classification problems in which the features of the classes are defined over ranges of values. For example, a youngster can be defined as belonging to a cluster of people whose ages are between 12 and 22 years, and the permitted operating voltage of a specified piece of equipment may be between 100 and 120 V. For such problems it is difficult to implement an appropriate classification method using current neural networks. Therefore, a new topology of neural network, the extension neural network (ENN), is proposed to solve these problems. The ENN permits classification problems whose features have a range, supervised learning, continuous input and discrete output. This new neural network is a combination of extension set theory and the neural network: extension theory provides a distance measurement for the classification process, and the neural network embeds the salient features of parallel computation power and learning capability.

3.1 Structure of the proposed ENN
The schematic structure of the proposed ENN is shown in Fig. 2. It consists of an input layer and an output layer. The nodes in the input layer receive an input pattern and, using a set of weighted parameters, generate an image of the input pattern. There are two connection values (weights) between each input node and each output node: one connection represents the lower bound of the classical domain (required domain) of the feature, and the other represents its upper bound. The connections between the j-th input node and the k-th output node are w_kj and u_kj, respectively. This image is further enhanced in the process characterised by the output layer. The output layer is a strong lateral-inhibition network in which every node has self-excitative feedback; only one output node remains active, to indicate the classification of the input pattern.



Fig. 2  Structure of the proposed ENN

The operation of the proposed ENN can be divided into a learning phase and an operation phase. The learning algorithm of the ENN is discussed in the following Section.


3.2 Supervised learning algorithm of the proposed ENN
The proposed ENN uses a supervised learning algorithm to tune its weights, so as to achieve good clustering performance and minimise the clustering error. Before using the algorithm, several variables have to be defined. The training patterns are X = {X_1, X_2, ..., X_Np}, where Np is the total number of training patterns. The i-th pattern is X_i^p = {x_i1^p, x_i2^p, ..., x_in^p}, where n is the total number of features and p is the category of the i-th pattern. To evaluate the clustering performance, if Nm is the total number of misclassified patterns, the total mistake rate ET is defined by:

ET = Nm / Np    (11)

The detailed supervised learning algorithm of the proposed ENN is as follows:

Step 1: Set the connection weights between the input nodes and output nodes according to the matter-element model of every cluster:

R_k = [N_k, c_1, V_k1; c_2, V_k2; ...; c_n, V_kn]    (12)

where c_j is the j-th characteristic (feature) of cluster k, and V_kj = <w_kj, u_kj> is the classical domain of that feature in cluster k. The ranges of the classical domains can be obtained directly from the matter-element models in (12).

Step 2: Calculate the initial cluster centre of every cluster:

Z_k = {z_k1, z_k2, ..., z_kn}    (13)

z_kj = (w_kj + u_kj)/2    (14)

for k = 1, 2, ..., m; j = 1, 2, ..., n.

Step 3: Read the i-th training pattern and its cluster number p:

X_i^p = {x_i1^p, x_i2^p, ..., x_in^p},  p ∈ {1, 2, ..., m}    (15)

Step 4: Using the proposed extension distance (ED), calculate the distance between the training pattern X_i^p and the k-th cluster as follows:

ED_ik = Σ_{j=1}^{n} [ ( |x_ij^p - z_kj| - (u_kj - w_kj)/2 ) / |(u_kj - w_kj)/2| + 1 ]    (16)

for k = 1, 2, ..., m, where

z_kj = (w_kj + u_kj)/2    (17)

The proposed extension distance is a modification of (9) and can be drawn as in Fig. 3, where ED corresponds to the normalised distance between x and a range <w, u>. From Fig. 3, the proposed extension distance shows that different ranges give different distance measurements as a result of different sensitivities, which is a significant advantage in classification applications. Usually, if the range of a feature is large, the required data are fuzzy, or show low sensitivity to distance; conversely, if the range of a feature is small, the required data are precise, or show high sensitivity to distance.

Fig. 3  Illustration of the proposed extension distance (ED)
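To make (16) concrete, the following Python sketch computes the extension distance between one pattern and every cluster from the lower-bound and upper-bound weight matrices. The array shapes, names and the use of NumPy are assumptions made for the illustration, not part of the paper.

    import numpy as np

    def extension_distance(x, w, u):
        """Extension distance ED between a pattern x (n,) and all m clusters,
        given lower bounds w (m, n) and upper bounds u (m, n), following (16)."""
        z = (w + u) / 2.0                    # cluster centres, as in (14)/(17)
        half_range = np.abs(u - w) / 2.0     # half-width of each classical domain
        terms = (np.abs(x - z) - half_range) / np.abs(half_range) + 1.0
        return terms.sum(axis=1)

    # Toy example: two clusters with classical domains [0, 1] and [1, 3] per feature.
    w = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
    u = np.array([[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]])
    print(extension_distance(np.array([0.4, 0.6, 0.5]), w, u))  # smaller ED = more similar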

Step 5: Find the k* for which ED_ik* = min{ED_ik}. If k* = p, then go to step 7; otherwise, go to step 6.

Step 6: Update the weights of cluster p and cluster k* as follows:

(a) Update the cluster centres:

z_pj^new = z_pj^old + η(x_ij^p - z_pj^old)    (18)

z_k*j^new = z_k*j^old - η(x_ij^p - z_k*j^old)    (19)

(b) Update the weights of clusters p and k*:

w_pj^new = w_pj^old + η(x_ij^p - z_pj^old),  u_pj^new = u_pj^old + η(x_ij^p - z_pj^old)    (20)

w_k*j^new = w_k*j^old - η(x_ij^p - z_k*j^old),  u_k*j^new = u_k*j^old - η(x_ij^p - z_k*j^old)    (21)

where η is the learning rate, set to 0.1 in this paper. In this step the learning process adjusts only the weights of clusters p and k*; the other weights do not change. Thus the proposed ENN has a speed advantage over other neural networks, and can quickly adapt to significant new information. Figure 4 shows the result of tuning the weights of two clusters in the learning process, most clearly the changes of ED_p and ED_k*: the pattern x_ij^p will change from cluster k* to cluster p because, after tuning, ED_k* > ED_p.

Step 7: Repeat step 3 to step 6 until all patterns have been through the clustering process; the learning epoch is then finished.

Step 8: If the clustering process has converged, or the total mistake rate ET has reached the preset value, end; otherwise, return to step 3.
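The whole learning loop of steps 3-8 can be summarised by a single epoch of updates. The Python sketch below is a minimal reading of (16) and (18)-(21) under the assumption of NumPy float arrays for the bounds; it is not the author's implementation. Note that shifting w and u by the same amount also shifts the centre z = (w + u)/2, which is how (18) and (19) are realised here.

    import numpy as np

    def train_epoch(patterns, labels, w, u, eta=0.1):
        """One supervised ENN learning epoch.
        patterns: (Np, n) training patterns; labels: (Np,) true cluster indices;
        w, u: (m, n) lower/upper bounds of the classical domains; eta: learning rate.
        Returns the total mistake rate ET of (11) for this epoch."""
        mistakes = 0
        for x, p in zip(patterns, labels):
            z = (w + u) / 2.0                                               # centres (14)
            half = np.abs(u - w) / 2.0
            ed = ((np.abs(x - z) - half) / np.abs(half) + 1.0).sum(axis=1)  # ED (16)
            k_star = int(np.argmin(ed))                                     # step 5
            if k_star != p:                                                 # step 6
                mistakes += 1
                w[p] += eta * (x - z[p])                                    # (20)
                u[p] += eta * (x - z[p])
                w[k_star] -= eta * (x - z[k_star])                          # (21)
                u[k_star] -= eta * (x - z[k_star])
        return mistakes / len(patterns)                                     # ET (11)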

Fig. 4  Results of tuning cluster weights: (a) original condition; (b) after tuning


The main advantage of the proposed ENN is that the training time is quite economical and the mapping capability is high.

3.3 Operation phase of the proposed ENN
Step 1: Read the weight matrix of the ENN.

Step 2: Calculate the initial cluster centres of every cluster, as in (13) and (14).

Step 3: Read a tested pattern:

X_t = {x_t1, x_t2, ..., x_tn}    (22)

Step 4: Using the proposed extension distance (ED), calculate the distance between the tested pattern and every existing cluster using (16).

Step 5: Find the k* for which ED_tk* = min{ED_tk}, and set the output of node k* to 1 to indicate the cluster of the tested pattern.

Step 6: If all tested patterns have been classified, end; otherwise, go to step 3.
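Since the operation phase is essentially a nearest-cluster search under the extension distance, it reduces to a few lines. Again a hedged Python sketch with assumed array names, mirroring the previous examples rather than reproducing the author's code:

    import numpy as np

    def classify(x, w, u):
        """Return the index k* of the cluster with the smallest extension distance
        to the tested pattern x, given the trained bounds w (m, n) and u (m, n)."""
        z = (w + u) / 2.0
        half = np.abs(u - w) / 2.0
        ed = ((np.abs(x - z) - half) / np.abs(half) + 1.0).sum(axis=1)
        return int(np.argmin(ed))   # only this output node remains active (step 5)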

4 Proposed ENN-based diagnosis method

In power transformer diagnosis, power companies have used the IEC codes widely. Following IEC std. 599, the codes of the different gas ratios and the fault classifications according to the gas-ratio codes are shown in Tables 1 and 2, respectively. Although the IEC codes are useful for fault diagnosis in transformers, the number of code combinations is larger than the number of fault types.

Table 1: IEC gas ratio codes

Range of gas ratio   Codes of different gas ratios
                     C2H2/C2H4   CH4/H2   C2H4/C2H6
<0.1                 0           1        0
0.1-1                1           0        0
1-3                  1           2        1
>3                   2           2        2

Table 2: Fault types according to the gas ratio codes

Fault type no.   Fault type                        C2H2/C2H4   CH4/H2   C2H4/C2H6
1                No fault                          0           0        0
2                Thermal fault, <150 °C            0           0        1
3                Thermal fault, 150-300 °C         0           2        0
4                Thermal fault, 300-700 °C         0           2        1
5                Thermal fault, >700 °C            0           2        2
6                Low energy partial discharges     0           1        0
7                High energy partial discharges    1           1        0
8                Low energy discharges             1 or 2      0        1 or 2
9                High energy discharges            1           0        2
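Tables 1 and 2 translate directly into a lookup procedure. The Python sketch below assigns the three gas-ratio codes and maps them to a fault type number; the dictionaries simply transcribe the tables above, while the function names, the treatment of band boundaries as half-open, and the example figures are assumptions of the illustration.

    def iec_codes(h2, ch4, c2h6, c2h4, c2h2):
        """Return the code triple (C2H2/C2H4, CH4/H2, C2H4/C2H6) following Table 1."""
        def band(r):                      # four ratio bands: <0.1, 0.1-1, 1-3, >3
            if r < 0.1: return 0
            if r < 1.0: return 1
            if r < 3.0: return 2
            return 3
        code_acetylene = (0, 1, 1, 2)[band(c2h2 / c2h4)]   # C2H2/C2H4 column
        code_methane   = (1, 0, 2, 2)[band(ch4 / h2)]      # CH4/H2 column
        code_ethylene  = (0, 0, 1, 2)[band(c2h4 / c2h6)]   # C2H4/C2H6 column
        return code_acetylene, code_methane, code_ethylene

    # Fault types of Table 2, keyed by the code triple. The '1 or 2' entries of fault 8
    # expand to several triples; (1, 0, 2) is listed under fault 9, as in the table.
    FAULT_TYPES = {
        (0, 0, 0): 1, (0, 0, 1): 2, (0, 2, 0): 3, (0, 2, 1): 4, (0, 2, 2): 5,
        (0, 1, 0): 6, (1, 1, 0): 7, (1, 0, 1): 8, (2, 0, 1): 8, (2, 0, 2): 8,
        (1, 0, 2): 9,
    }

    codes = iec_codes(h2=100.0, ch4=300.0, c2h6=50.0, c2h4=100.0, c2h2=5.0)
    print(codes, FAULT_TYPES.get(codes, "no match"))   # unmatched triples give 'no match'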


Therefore, 'no match' may be indicated in the fault diagnosis. In this Section, a new fault diagnosis method for power transformers is presented using the proposed ENN. The proposed fault diagnosis method can be described simply as follows:

Step 1: Build the matter-element models of every fault type according to field experience or the IEC codes. The proposed matter-element models are shown in Table 3, where R_i is the matter-element of the i-th of the nine fault types, F = {F_1, F_2, ..., F_9} is the fault set with F_i the i-th fault type, and C = {C_1, C_2, C_3} is the characteristic set, in which C_1 = C2H2/C2H4, C_2 = CH4/H2, and C_3 = C2H4/C2H6. The value ranges (classical domains) of every characteristic are shown in Tables 1 and 2.

Table 3: Fault matter-elements of dissolved gas

Step 2: Set up the diagnosis ENN. According to the matter-element models, the structure of the ENN for transformer fault diagnosis has three nodes in the input layer and nine nodes in the output layer.

Step 3: Input a set of field test data and use the supervised learning algorithm in Section 3.2 to train the diagnosis ENN.

Step 4: Input the data of the tested transformer.

Step 5: Employ the trained ENN to diagnose the tested transformer.

Step 6: When the diagnosis of one transformer has been completed, go back to step 4 for the next, until all transformers have been diagnosed.

5 Testing results and discussion

To demonstrate the effectiveness of the proposed ENN diagnosis method, 40 sets of field DGA data from power transformers in China [8, 9], Australia [10], and Taiwan have been tested. The detailed gas data and diagnosis results are shown in Table 4, where AFN denotes the actual fault type number found in the field test, and IEC and ENN are the diagnosis results of the IEC method and the proposed ENN diagnosis method, respectively.

Table 4: Tested gas data of transformers and diagnosis results with different methods

No.  H2      CH4       C2H6     C2H4     C2H2    AFN  ENN  IEC
1    23.0    22.0      8.0
2    16.0    7.0       3.0
3    109.0   49.0      89.0
4    56.0    61.0      75.0
5    22.0    24.0      32.0
6    11.0    43.0      8.0
7    23.0    14151.0   5332.0
8    20.0    18.0      4.0
9    83.0    17.0      5.0
10   32.4    5.5       1.4
11   32.0    26.0      32.0
12   172.9   334.0     172.0
13   127.0   107.0     11.0
14   27.0    90.0      42.0
15   980.0   73.0      58.0

16   60.0    51.0      245.0    11.0     0.0     1    1    1
17   12.0    11.0      17.0     1.0      0.0     1    1    1
18   144.0   1672.0    394.0    669.0    0.0     4    4    4
19   191.0   1359.0    166.0    1494.0   2.0     5    5    5
20   56.0    286.0     96.0     928.0    7.0     5    5    5
21   54.0    8631.0    3815.0   9510.0   0.0     4    4    4
22   19.0    15.0      6.0      3.0      0.0     1    1    1
23   62.0    130.0     47.0     2.0      0.0     3    3    3
24   23.0    592.0     75.0     275.0    0.0     5    5    5
25   10.0    52.0      10.0     5.0      0.0     3    3    3
26   650.0   53.0      34.0     20.0     0.0     6    6    1
27   160.0   130.0     33.0     96.0     0.0     2    2    2
28   95.0    110.0     160.0    50.0     0.0     3    3    3
29   14.7    3.8       10.5     2.7      0.2     1    1    1
30   345.0   112.3     27.5     51.5     58.5    8    8    N
31   181.0   262.0     41.0     28.0     0.0     3    3    3
32   565.0   53.0      34.0     47.0     0.0     8    8    N
33   200.0   48.0      14.0     117.0    131.0   9    9    9
34   78.0    161.0     86.0     353.0    10.0    5    5    5
35   300.0   490.0     180.0    300.0    95.0    4    4    4
36   220.0   340.0     42.0     480.0    14.0    5    5    5
37   9.0     3.0       2.0      2.0      0.0     1    1    1
38   170.0   320.0     53.0     520.0    3.2     5    5    5
39   60.0    40.0      6.9      110.0    70.0    9    9    N
40   220.5   698.3     275.1    738.0    1.1     4    4    4

In Table 4, cases 3, 4, 5, 30, 32 and 39 have no matching codes for diagnosis by the IEC method, but the results of the proposed method show excellent agreement with the actual faults in the transformers.

To prove the learning efficiency of the proposed ENN, Fig. 5 shows the learning curves with different numbers of training patterns. It is very clear that the training time of the proposed ENN is quite economical: about four epochs for 20 training patterns and about five epochs for 40 training patterns. Table 5 shows the clustering results with different neural networks, in which 20 samples are used for learning and 20 samples for testing. It shows that the proposed ENN has a shorter learning time than the multilayer perceptron (MLP), and that its accuracy rates are quite high, about 100% and 96% for the training and testing patterns, respectively. Conversely, the accuracies of the two-layer (3-9) and three-layer (3-7-9) MLPs are only 40% and 50% on the testing patterns.

Fig. 5  Learning curves of the proposed ENN (Np = 20 and Np = 40)

Table 5: Error rates with different neural networks

Neural network model   Structure   Learning epochs   Total mistake rate
                                                     Training pattern   Testing pattern
MLP                    3-9         40,000            0.15               0.6
MLP                    3-7-9       40,000            0.                 0.5
ENN                    3-9         4                 0.                 0.04

To test the diagnostic performance of the proposed method, the diagnosis accuracy with different noise percentages is shown in Table 6. The sources of error include measurement, instruments, incorrect human operation, etc., which may give rise to uncertainties in the sample data. To take these errors and uncertainties into account, 40 sets of testing data were created by adding ±5% to ±20% uniformly distributed random noise to the samples, in order to test the robustness of the different diagnosis methods. As observed from Table 6, the proposed ENN method has a significantly higher diagnosis accuracy of 100% at both 0% and 5% noise. Moreover, the proposed method shows good tolerance to added errors, with a diagnosis accuracy as high as 77.5% at an extreme error of ±20%. In comparison, the diagnosis accuracy of the IEC method is only 80% and 62.5%, and after 40,000 learning epochs the diagnosis accuracy of the three-layer MLP is 97.5% and 77.5%, respectively.
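The robustness test can be reproduced by perturbing each gas value with uniformly distributed noise before re-running the diagnosis. A minimal sketch, assuming multiplicative noise of ±noise_level and NumPy (neither detail is specified in the paper):

    import numpy as np

    def add_uniform_noise(gas_data, noise_level, seed=0):
        """Perturb each gas concentration by up to +/- noise_level (e.g. 0.20 for 20%)
        with uniformly distributed random noise, as in the Table 6 robustness test."""
        rng = np.random.default_rng(seed)
        factors = rng.uniform(1.0 - noise_level, 1.0 + noise_level, size=gas_data.shape)
        return gas_data * factors

    # Example: 40 samples x 5 gases perturbed by +/-20% before re-running the diagnosis.
    clean = np.abs(np.random.default_rng(1).normal(100.0, 30.0, size=(40, 5)))
    noisy = add_uniform_noise(clean, 0.20)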

IEE Proc-Genmr Trmm DinNb, Yo/. 180, No, 6, NoLember ZW3

Table 6: Diagnosis performance of different methods with different percentages of added error

Noise percentage   Proposed ENN   MLP (3-7-9)   MLP (3-9)   IEC codes
0%                 100%           97.5%         80%         80%
±5%                100%           92.5%         75%         77.5%
±10%               82.5%          90%           75%         70%
±15%               77.5%          82.5%         72.5%       65%
±20%               77.5%          77.5%         70%         62.5%

6 Conclusions

This paper presents an innovative extension neural network (ENN) for power transformer fault diagnosis. Compared with other traditional neural networks, the main advantage of the proposed ENN is that the training time is quite economical and the mapping capability is high. In addition, the proposed ENN can quickly and reliably learn to categorise input patterns and permits adaptive processing of significant new information. From the test results, the proposed ENN-based diagnosis method has a significantly high degree of diagnosis accuracy and shows good tolerance to added errors. This paper is the first application of the ENN to power systems, and the ENN deserves serious consideration in this field; the paper may be useful in promoting further investigation.

7 Acknowledgments

We gratefully acknowledge the National Science Council, Taiwan, ROC, for financial support under grant NSC-91-2213-E-167-014.

8 References

1  Rogers, R.R.: 'IEEE and IEC codes to interpret faults in transformers, using gas in oil analysis', IEEE Trans. Electr. Insul., 1978, 13, (5), pp. 349-354

2  IEC Publication 599: 'Interpretation of the analysis of gases in transformers and other oil-filled electrical equipment in service' (IEC)