Application Research Based on Artificial Neural Network (ANN) to Estimate the Weight of Main Material for Transformers

Amit Kr. Yadav, Abdul Azeem, Akhilesh Singh
Electrical Engineering Department
National Institute of Technology
Hamirpur, H.P., India
e-mail: [email protected]

O.P. Rahi
Assistant Professor, Electrical Engineering Department
National Institute of Technology
Hamirpur, H.P., India
e-mail: [email protected]

Abstract—The transformer is one of the vital components of the electrical network and plays an important role in the power system. The continuous performance of transformers is necessary for retaining network reliability and for forecasting costs for manufacturers and industrial companies. The major part of a transformer's cost is related to its raw materials, so the cost estimation process for transformers is based on the amount of raw material used. This paper presents a new method to estimate the weight of the main materials for transformers. The method is based on a Multilayer Perceptron Neural Network (MPNN) with a sigmoid transfer function. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data are obtained from a transformer manufacturing company.

Keywords-Artificial Neural Network (ANN), Levenberg-Marquardt (LM) algorithm, weight estimation, design, power system, transformer.

I. INTRODUCTION

The most important components in the electrical network are transformers, which play an important role in electrification. The continuous performance of transformers is necessary for retaining network reliability and for forecasting costs for manufacturers and industrial companies. Since the major part of a transformer's cost is related to its raw materials, knowing the amount of raw material used in a transformer is an important task [1]. The aim of transformer design is to completely obtain the dimensions of all parts of the transformer based on the desired characteristics and available standards, while achieving lower cost, lower weight, smaller size, and better performance [2-3]. Various methods have been studied and some techniques have been used. The Artificial Neural Network is one of the methods that has been used most in this field in recent years. Transformer insulation aging diagnosis, estimation of the remaining life of transformer oil, transformer protection, and the selection of winding material to reduce cost are a few of the topics that have been addressed [4-8].

In this paper an Artificial Neural Network based method has been used to estimate the weight of the main materials of a transformer (weight of copper, weight of iron and weight of oil). These are the main components used in the design and cost estimation process. In the following, an artificial neural network with the Levenberg-Marquardt back propagation algorithm has been used to estimate the weight of the transformer's main materials. Data extracted from a transformer manufacturing company have been used to train the ANN, and the best parameters for this network have been presented graphically. Finally, the results given by the trained neural network have been compared with actual manufactured transformers to prove the accuracy of the presented method for estimating the amount of raw materials used in this transformer manufacturing company (at various installation temperatures and altitudes, various short circuit impedances and various volts per turn).

II. ARTIFICIAL NEURAL NETWORK

Neural networks are a relatively new artificial intelligence technique. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. The learning procedure tries to find a set of connections w that gives a mapping that fits the training set well. Furthermore, neural networks can be viewed as highly nonlinear functions with the basic form

F(x, w) = y

where x is the input vector presented to the network, w are the weights of the network, and y is the corresponding output vector approximated or predicted by the network. The weight vector w is commonly ordered first by layer, then by neurons, and finally by the weights of each neuron plus its bias. This view of the network as a parameterized function will be the basis for applying standard function optimization methods to solve the problem of neural network training.

A. ANN Structure

A neural network is determined by its architecture, training method and activation function. Its architecture determines the pattern of connections among neurons. Network training changes the values of the weights and biases (the network parameters) at each step in order to minimize the mean square of the output error.

The Multi-Layer Perceptron (MLP) has been used in load forecasting, nonlinear control, system identification and pattern recognition [9]; thus in this paper a multi-layer perceptron network (with four inputs, three outputs and one hidden layer) trained with the Levenberg-Marquardt algorithm has been used.

In general, on function approximation problems, for networks that contain up to a few hundred weights, the Levenberg-Marquardt algorithm has the fastest convergence. This advantage is especially noticeable if very accurate training is required. In many cases, trainlm obtains a lower mean square error than any of the other algorithms tested. As the number of weights in the network increases, the advantage of trainlm decreases. In addition, trainlm performs relatively poorly on pattern recognition problems, and its storage requirements are larger than those of the other algorithms tested.
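A minimal numpy sketch of such an MLP forward pass (four inputs, one sigmoid hidden layer, three outputs, matching the layer sizes used later in the simulation section) may help fix ideas. The random weights here are placeholders, not trained values:

```python
import numpy as np

def sigmoid(z):
    # logistic sigmoid transfer function used in the hidden layer
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    # hidden layer: sigmoid; output layer: linear
    h = sigmoid(W1 @ x + b1)
    return W2 @ h + b2

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 20, 3
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
b2 = np.zeros(n_out)

# one input vector in the format of Table II:
# impedance %, installation height, volt per turn, temperature
x = np.array([12.5, 1500.0, 67.34, 40.0])
y = mlp_forward(x, W1, b1, W2, b2)   # three outputs: iron, oil, copper weight
```

In practice the inputs would be normalized before being presented to the network; raw values are used here only to show the shapes.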

Figure 1: Artificial Neural Network

B. Input and Outputs of ANN

A neural network is a data modeling tool that is capable of representing complex input/output relationships. An ANN typically consists of a set of processing elements called neurons that interact by sending signals to one another along weighted connections. The required data, accumulated by the transformer company over the last four years, are used for estimating the iron, copper and oil weights of transformers; consequently, transformer costs are estimated by the proposed method (at various installation heights and temperatures, with different short-circuit impedances and volts per turn). The schematic of the presented method is shown in Figure 2.

Figure 2: Schematic of Inputs and Outputs

C. Training of ANN

The major justification for the use of ANNs is their ability to learn relationships in complex data sets that may not be easily perceived by engineers. An ANN performs this function as a result of training, which is a process of repetitively presenting a set of training data (typically a representative subset of the complete set of data available) to the network and adjusting the weights so that each input data set produces the desired output.

Unsupervised and supervised learning processes can be used to adjust the weights in an ANN. A supervised learning process requires input/output pairs to train the network, whereas an unsupervised learning process requires only input data. Unsupervised learning can be characterized as a fast, but potentially inaccurate, method of adjusting the weights. On the other hand, supervised learning typically requires longer learning times and can be more accurate. There is no way to tell beforehand which learning method will work best for a given application. For this reason, we concentrate on the very popular supervised learning approach based on the back propagation training algorithm, which has been shown to produce good results for a large number of different problems.

The back propagation training algorithm is a method of iteratively adjusting the neural network weights until the desired accuracy level is achieved. It is based on a gradient-search optimization method applied to an error function. Typical error functions include the mean square error shown in (1), where N is the total number of input/output pairs (which can be vector quantities) used for training:

mse = (1/N) Σ_{i=1..N} [OUT_forecast,i − OUT_actual,i]^2    (1)

where OUT_forecast,i and OUT_actual,i are, respectively, the output forecast by the neural network and the actual (desired) output for the i-th training example. The set of training examples (input/output pairs) defines the training set or learning set. For best results, the training set should adequately represent all expected variations in the complete set of data.

A recursive algorithm for adjusting the weights can be developed such that the error defined by (1) is minimized. Equations (2) and (3) are recursive training equations based on the generalized delta rule, and the corresponding algorithm is called gradient descent back propagation.

Δw_pj,qk(n+1) = lr · δ_qk · OUT_pj + m · Δw_pj,qk(n)    (2)

w_pj,qk(n+1) = w_pj,qk(n) + Δw_pj,qk(n+1)    (3)

where:
n : the number of the current iteration of the training algorithm
w_pj,qk(n) : the value of the weight that connects neuron p of layer j with neuron q of layer k during iteration n
Δw_pj,qk(n) : the variation in the value of weight w_pj,qk(n) during iteration n
δ_qk : the value of the delta coefficient for neuron q of layer k
OUT_pj : the output of neuron p of layer j
lr : the learning rate
m : the momentum

The value of δ is calculated differently depending on the specific location of the weight under consideration. Equation (4) is the formula for calculating δ for any weight connecting a hidden layer neuron to an output layer neuron:

δ_qk = (2/N) · OUT_qk · (1 − OUT_qk) · (OUT_actual,qk − OUT_qk)    (4)

where layer k is the output layer, OUT_actual,qk is the actual (desired) output of any neuron q of the output layer k, and N is the number of training examples of the training set. The values in (4) are known from the training set. The calculated output of the network is compared to the actual value to generate an error signal. The error signal is propagated back through the neural network to adjust the weights, as shown in (2) and (3). For neurons in any layer other than the output layer, however, an error value is not directly obtainable because no desired output value is given for these internal neurons as part of the training set. The error values for any neurons other than the output neurons are calculated as weighted sums of the output layer errors:

δ_pj = OUT_pj · (1 − OUT_pj) · Σ_{q=1..Q} δ_qk · w_pj,qk    (5)

where Q is the number of neurons of the output layer.

The coefficient lr in (2) is called the learning rate and directly controls how much the calculated error values are allowed to change the weights. The learning rate is typically selected between 0.01 and 1.0. The coefficient m in (2) is called the momentum and allows the weight updates at one iteration to utilize information from previous error values. The momentum term helps avoid settling into a local minimum and is selected between 0.01 and 1.0. The recursive training algorithm (set n = n + 1) is executed until the network satisfactorily predicts the output values. Common stopping criteria for the training algorithm involve monitoring the mean square error, the maximum error, or both, and stopping when the value is less than a specified tolerance. The selected tolerance is very problem dependent and may or may not actually be achievable. There is no mathematical proof that the back propagation training algorithm will ever converge within a given tolerance. The only guarantee is that any changes of the weights will not increase the total error. Note that the inclusion of the momentum term may allow the error defined in (1) to temporarily increase if the optimization process is moving away from a local minimum.
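Equations (1)-(5) can be checked numerically with a small sketch. The helper functions and the toy scalar values below are illustrative only (not the paper's transformer data), with W[p, q] taken as the weight from hidden neuron p to output neuron q:

```python
import numpy as np

def mse(out_forecast, out_actual):
    # (1): mean square error over the N training examples
    d = np.asarray(out_forecast) - np.asarray(out_actual)
    return np.mean(d ** 2)

def output_delta(out_qk, out_actual, N):
    # (4): delta coefficient for an output-layer neuron q
    return (2.0 / N) * out_qk * (1.0 - out_qk) * (out_actual - out_qk)

def hidden_delta(out_pj, W, delta_out):
    # (5): delta for a hidden neuron p, a weighted sum of output deltas
    return out_pj * (1.0 - out_pj) * (W @ delta_out)

def update_weight(w, dw_prev, delta, out, lr=0.1, m=0.5):
    # (2): weight change with learning rate and momentum; (3): apply it
    dw = lr * delta * out + m * dw_prev
    return w + dw, dw

# toy values: one output neuron (OUT = 0.6, target = 1.0, N = 1),
# one hidden neuron (OUT = 0.5), one connecting weight of 1.0
d_out = output_delta(np.array([0.6]), np.array([1.0]), N=1)      # 2*0.6*0.4*0.4 = 0.192
d_hid = hidden_delta(np.array([0.5]), np.array([[1.0]]), d_out)  # 0.5*0.5*0.192 = 0.048
w_new, dw = update_weight(0.3, 0.0, d_out[0], 0.5)               # dw = 0.1*0.192*0.5
```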

III. LEVENBERG-MARQUARDT FORMULATION FOR TRANSFORMER

The LM algorithm has been used in function approximation. Basically it consists of solving the equation:

(J^T J + λI) δ = J^T E    (6)

where J is the Jacobian matrix for the system, λ is Levenberg's damping factor, δ is the weight update vector that we want to find, and E is the error vector containing the output errors for each input vector used in training the network. The vector δ tells us by how much we should change the network weights to achieve a (possibly) better solution. The J^T J matrix is also known as the approximated Hessian. The damping factor λ is adjusted at each iteration and guides the optimization process: if the reduction of E is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient descent direction.

Algorithm:
1. Compute the Jacobian J (by using finite differences or the chain rule).
2. Compute the error gradient: g = J^T E.
3. Approximate the Hessian using the cross product of the Jacobian: H = J^T J.
4. Solve (H + λI)δ = g to find δ.
5. Update the network weights w using δ.
6. Recalculate the sum of squared errors using the updated weights.
7. If the sum of squared errors has not decreased, discard the new weights, increase λ and go to step 4.
8. Else decrease λ and stop.
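The eight steps above can be sketched as a short numpy loop. This is an illustrative implementation on a toy linear least-squares problem (residuals E(w) = y − Aw, so the Jacobian of the model output is simply A), not the paper's MATLAB trainlm code:

```python
import numpy as np

def lm_fit(f, jac, w, n_iter=50, lam=1e-3):
    # Levenberg-Marquardt loop: f(w) returns the error vector E,
    # jac(w) returns the Jacobian J of the model output w.r.t. w
    E = f(w)
    sse = E @ E
    for _ in range(n_iter):
        J = jac(w)                   # step 1: Jacobian
        g = J.T @ E                  # step 2: error gradient
        H = J.T @ J                  # step 3: approximated Hessian
        while True:
            delta = np.linalg.solve(H + lam * np.eye(len(w)), g)  # step 4
            E_new = f(w + delta)     # steps 5-6: try updated weights
            sse_new = E_new @ E_new
            if sse_new < sse:        # step 8: accept the step, decrease lambda
                w, E, sse, lam = w + delta, E_new, sse_new, lam / 10
                break
            lam *= 10                # step 7: reject the step, increase lambda
            if lam > 1e10:           # damping too large: treat as converged
                return w
    return w

# toy problem: fit w so that the linear "network" A @ w matches y
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = A @ np.array([2.0, -1.0])
w = lm_fit(lambda w: y - A @ w, lambda w: A, np.zeros(2))
```

For this linear case the loop recovers the exact least-squares solution within a few iterations; for a neural network, J would be built column by column from the derivatives of each output with respect to each weight.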

IV. SIMULATION

For network learning, input vectors (P) and output vectors (T) are needed. Considering the data for the 63/20 kV transformer type extracted from the transformer manufacturer during the last 4 years, the simulation has been performed for the following case. Tables II and III show the 24 input vectors and 24 output vectors used for network learning.

TABLE II. INPUTS FOR TRANSFORMER 63/20 kV

Inputs   Short circuit impedance (%)   Installation height   Volt per turn   Environment temperature
P1       8                             1000                  87.719          50
P2       10                            1000                  76.336          55
P3       10                            1000                  84.034          50
P4       10                            1000                  60.79           45
P5       10                            1000                  68.027          40
P6       12                            1500                  68.027          40
P7       12                            2200                  54.795          50
P8       12.5                          1364                  99.502          40
P9       12.5                          1500                  54.201          40
P10      12.5                          1500                  67.34           40
P11      12.5                          1500                  76.923          50
P12      12.5                          1500                  106.952         45
P13      12.5                          1700                  97.087          45
P14      12.5                          1900                  49.948          39
P15      13                            2000                  66.67           50
P16      13.5                          1000                  79.94           47
P17      13.5                          1500                  75.76           45
P18      13.5                          1500                  75.785          40
P19      13.5                          1700                  37.88           40
P20      13.5                          1700                  47.17           55
P21      13.5                          1700                  66.007          42
P22      13.5                          2000                  46.62           50
P23      13.7                          1500                  75.753          55
P24      14                            1000                  121.212         45

TABLE III. OUTPUTS FOR TRANSFORMER 63/20 kV

Outputs   Weight of iron   Weight of oil   Weight of copper
T1        31200            13500           7700
T2        25600            11500           7770
T3        27000            11400           7094
T4        22100            8500            7000
T5        22700            9900            6894
T6        24000            10600           8000
T7        15100            7500            4124
T8        38000            18000           11720
T9        15250            7700            8667
T10       22400            10300           6891
T11       30160            13700           10000
T12       45470            19000           9765
T13       33200            14100           6600
T14       17450            8800            9700
T15       25562            11820           8950
T16       28750            11850           9500
T17       27500            11110           7387
T18       25600            10630           2780
T19       9500             7250            6300
T20       16700            8900            8479
T21       27000            12250           5199
T22       15550            8550            7550
T23       25400            12250           17450
T24       53100            2550            8479

A two layer feed-forward network with sigmoid hidden neurons and linear output neurons has been used. The network has been trained with the Levenberg-Marquardt back propagation algorithm. The number of neurons in the hidden layer is twenty.
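As a hedged illustration of this setup, the sketch below fits a feed-forward network of the same form with SciPy's least_squares (method='lm', a Levenberg-Marquardt implementation) rather than MATLAB's trainlm, using the first eight rows of Tables II and III. The two-neuron hidden layer (the paper uses twenty) and the z-score normalization are assumptions made to keep the example small:

```python
import numpy as np
from scipy.optimize import least_squares

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 4, 2, 3   # 2 hidden neurons here; the paper uses 20

def unpack(p):
    # flat parameter vector -> hidden weights/biases, output weights/biases
    i = n_hid * n_in
    W1, b1 = p[:i].reshape(n_hid, n_in), p[i:i + n_hid]
    j = i + n_hid + n_out * n_hid
    W2, b2 = p[i + n_hid:j].reshape(n_out, n_hid), p[j:]
    return W1, b1, W2, b2

def residuals(p, X, T):
    W1, b1, W2, b2 = unpack(p)
    Y = sigmoid(X @ W1.T + b1) @ W2.T + b2   # sigmoid hidden, linear output
    return (Y - T).ravel()

# first eight rows of Tables II (inputs) and III (outputs)
X = np.array([[8, 1000, 87.719, 50], [10, 1000, 76.336, 55],
              [10, 1000, 84.034, 50], [10, 1000, 60.79, 45],
              [10, 1000, 68.027, 40], [12, 1500, 68.027, 40],
              [12, 2200, 54.795, 50], [12.5, 1364, 99.502, 40]])
T = np.array([[31200, 13500, 7700], [25600, 11500, 7770],
              [27000, 11400, 7094], [22100, 8500, 7000],
              [22700, 9900, 6894], [24000, 10600, 8000],
              [15100, 7500, 4124], [38000, 18000, 11720]])
Xn = (X - X.mean(0)) / X.std(0)   # normalize inputs and targets
Tn = (T - T.mean(0)) / T.std(0)

rng = np.random.default_rng(0)
p0 = rng.normal(scale=0.5, size=n_hid * n_in + n_hid + n_out * n_hid + n_out)
fit = least_squares(residuals, p0, args=(Xn, Tn), method='lm')
```

Note that method='lm' requires at least as many residuals as parameters (here 24 residuals vs. 19 parameters), which is why the hidden layer was kept small for this subset of the data.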

V. RESULT AND DISCUSSION

Figure 3: Mean Square Error

The performance curve is shown in Figure 3. In this figure the mean squared error becomes small as the number of epochs increases. The test set error and the validation set error have similar characteristics, and no significant overfitting has occurred by iteration 6 (where the best validation performance occurs).


Figure 4: Prediction of weight of main material of transformer during training analysis

The output has tracked the targets very well for training in the estimation of the weight of the main materials of the transformer. The regression value is one, which indicates a close correlation between outputs and targets.

Figure 5: Prediction of weight of main material of transformer during test analysis

Figure 6: Regression plot of weight of main material of transformer

The output tracks the targets very well for training, testing, and validation, and the R-value is over 0.95 for the total response.

CONCLUSIONS

The major part of a transformer's cost is related to its raw materials, so the amount of raw material used under various conditions has been used in the cost analysis process. This paper presented a new method to estimate the weight of the main materials (weight of copper, weight of iron and weight of oil) for 63/20 kV transformers. The method is based on a two layer feed-forward network with a sigmoid transfer function in the hidden layer and a linear transfer function in the output neurons. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data for the MPNN are the information obtained from the transformer manufacturing company during the last 4 years. The advantage of using an ANN in design and optimization is that the ANN needs to be trained only once. After the completion of training, the ANN gives the transformer weights without any iterative process. Thus, this model can be used confidently for the design, cost estimation and development of transformers. The developed model has a very fast, reliable and robust structure.

REFERENCES

[1] P. S. Georgilakis, "Recursive genetic algorithm-finite element method technique for the solution of transformer manufacturing cost minimization problem”, IET Electr. Power Appl., 2009, Vol. 3, Iss. 6, pp. 514–519.


[2] Pavlos S. Georgilakis, Marina A. Tsili and Athanassios T. Souflaris, "A Heuristic Solution to the Transformer Manufacturing Cost Optimization Problem", JAPMED'4 - 4th Japanese-Mediterranean Workshop on Applied Electromagnetic Engineering for Magnetic, Superconducting and Nano Materials, Poster Session, Paper 103_PS_1, September 2005, pp. 83-84.

[3] Manish Kumar Srivastava,”An innovative method for design of distribution transformer”, e-Journal of Science & Technology (e-JST), April 2009, pp. 49-54.

[4] Geromel, Luiz H., Souza, Carlos R., ”The application of intelligent systems in power transformer design”, IEEE Conference, 2002, pp. 1504-1509.

[5] Yang Qiping, Xue Wude, Lan Zida, "Transformer Insulation Aging Diagnosis and Service Life Evaluation", Transformer [J], No. 2, Vol. 41,

[6] Tetsuro Matsui, Yasuo Nakahara, Kazuo Nishiyama, Noboru Urabe and Masayoshi Itoh, "Development of Remaining Life Assessment for Oil-immersed Transformer Using Structured Neural Networks", ICROS-SICE International Joint Conference, August 2009, pp. 1855-1858.

[7] M. R. Zaman and M. A. Rahman, “Experimental testing of the artificial neural network based protection of power transformers”, IEEE Trans. Power Del., vol. 13, no. 2, pp. 510–517, Apr. 1998.

[8] Eleftherios I. Amoiralis, Pavlos S. Georgilakis and Alkiviadis T. Gioulekas, "An Artificial Neural Network for the Selection of Winding Material in Power Transformers", Springer-Verlag Berlin Heidelberg, 2006, pp. 465-468.

[9] Khaled Shaban, Ayman EL-Hag and Andrei Matveev, ”Predicting Transformers Oil Parameters”, IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May - 3 June 2009, pp. 196-199.