MODELING PHOTOVOLTAIC ARRAYS AND COMPARING ANN TOPOLOGIES FOR MAXIMUM POWER POINT TRACKING
Chrystian Lenon Remes, Sergio Vidal Garcia Olveira, Yales Romulo De Novaes

Universidade do Estado de Santa Catarina - UDESC, R. Paulo Malschitzy S/N, Campus Universitario, Joinville, Santa Catarina, Brazil

Emails: [email protected], [email protected], [email protected]
Abstract— For Photovoltaic Arrays (PV), Maximum Power Point Tracking (MPPT) deserves special attention, ensuring that the maximum power can be extracted under any given environmental condition. In this context, Artificial Neural Networks (ANN) are a good solution for performing the MPPT. This paper presents a comparison between different ANN topologies, in which the number of epochs for convergence, the algorithm execution time and the mean square error are analyzed for different numbers of neurons and training samples. Beyond the ANN simulations, the PV parameters used were obtained through Genetic Algorithms (GA).
Keywords— Photovoltaic Arrays, Artificial Neural Networks, Genetic Algorithms, Maximum Power Point Tracking
1 Introduction
One of the important characteristics of photovoltaic arrays (PV) is their dependence on climatic variables such as solar radiation and temperature (Carrijo et al., 2010), which makes them highly dynamic, especially regarding solar radiation, which may change within a few seconds due to cloud shading, for example. These variables have a large impact on the output power provided by the photovoltaic array. Since the power of a photovoltaic array depends on these climatic variables, the controller needs to be fast enough to find the best operating point for each instantaneous climatic condition and thus achieve the desired power set point. The techniques used to find the maximum operating point are known as Maximum Power Point Tracking (MPPT). Artificial Neural Networks (ANN) appear as an MPPT method, since they can learn, through training, the behavior of the power at the Maximum Power Point (MPP) and extrapolate this behavior for any given climatic input with good accuracy. Moreover, other variables, climatic or not, may be added as ANN inputs, and the ANN can then learn a model that also relates these variables to the output power, beyond solar radiation and temperature (Esram e Chapman, 2007).
A known problem of using ANNs in general is how to determine the best architecture and topology, and what number of samples to use in training. Thus, this paper shows the differences between several ANN configurations applied to photovoltaic arrays and provides a way to choose the best ANN configuration to be used for MPPT. A better ANN configuration reduces the computational effort during the training phase while keeping output errors small. Also, to define some of the PV parameters used in this paper, a numerical method was implemented through Genetic Algorithms (GA). The parameters found were used in all ANN simulations.
Section 2 briefly discusses the PV modeling and the PV parameters, and gives a quick description of the ANN. Section 3 discusses the ANN results for different configurations, along with the input-output data classification for ANN training and its validation. Finally, Section 4 presents the conclusions about the obtained results.
2 Photovoltaic Array Parameters And Artificial Neural Network For MPPT
For ANN training, a MATLAB® based model was used to simulate the equivalent circuit approximated by a single diode. This model can represent a given PV well and with relative simplicity, as described in (Villalva et al., 2009). The catalog parameters provided by the PV manufacturer are shown in Table 1, where Vmpp, Impp and Pmpp are the voltage, current and power at the MPP, respectively, KTi is a constant that relates the current supplied by the PV to its temperature, Ns is the number of series-connected cells, Voc is the open-circuit voltage at STC (Standard Test Conditions), Isc is the short-circuit current at STC and Eg is the bandgap energy of polycrystalline silicon cells. The STC establishes a reference temperature of 25 °C and a solar radiation of 1000 W/m^2, with an air mass of 1.5.

Anais do XX Congresso Brasileiro de Automática, Belo Horizonte, MG, 20 a 24 de Setembro de 2014
Table 1: Catalog PV Parameters.
Parameter   Value           Unit
Voc         21.9            V
Isc         7.65            A
Vmpp        17.7            V
Impp        7.38            A
Pmpp        130.63          W
KTi         2.601 × 10^-3   A/K
Ns          36              –
2.1 Equivalent PV Circuit
The photovoltaic array can be approximated by an electrical circuit. The most common electrical model consists of a current source, a diode, a series resistance and a shunt resistance (Mahdi et al., 2010), as shown in Figure 1. The current source represents the energy provided by the solar radiation, and is given by Equation 1:

$$I_{ph} = \left(I_{sc} + K_{Ti}(T - T_r)\right)\frac{S}{S_r} \quad (1)$$

Where:
Iph: total current provided by the solar radiation
Isc: short-circuit current at STC
KTi: constant relating the PV current and temperature
T: instantaneous PV temperature
Tr: STC temperature
S: instantaneous solar radiation
Sr: STC solar radiation
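As an illustration, Equation 1 translates directly into code. The sketch below assumes the catalog values of Table 1 and the STC references (Tr in kelvin); the function name and its defaults are ours, not from the paper:

```python
# Equation 1 as code: photocurrent from temperature and radiation.
# Defaults are the Table 1 catalog values and the STC references:
# Tr = 25 degC = 298.15 K, Sr = 1000 W/m^2.
def photocurrent(T, S, Isc=7.65, KTi=2.601e-3, Tr=298.15, Sr=1000.0):
    """Iph = (Isc + KTi*(T - Tr)) * S / Sr."""
    return (Isc + KTi * (T - Tr)) * S / Sr
```

At STC the function simply returns the short-circuit current, as expected.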
Figure 1: Photovoltaic array equivalent circuit.
This circuit approximation results in Equation 2, which represents the output current of the PV:

$$I = I_{ph} - I_s\left(e^{\frac{q}{AkT}(V + I R_s)} - 1\right) - \frac{V + I R_s}{R_p} \quad (2)$$

Where:
Is: diode saturation current
q: elementary charge
A: diode ideality constant
k: Boltzmann's constant
Rs: series resistance
Rp: shunt resistance
In Equation 2, it is clear that the output current I depends on its own value and also on the output voltage. Therefore, the only way to obtain the PV parameters is through numerical methods (Villalva et al., 2009).
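One such numerical method is a simple bisection on Equation 2, sketched below under two assumptions of ours: the solver choice itself (the paper does not specify one), and dividing the junction voltage by Ns = 36, since Equation 2 is written per cell; with that factor, the parameters of Table 2 reproduce the catalog MPP of Table 1.

```python
import math

# Solve the implicit Equation 2 for I by bisection. The defaults Is, A,
# Rs, Rp are the Table 2 values; Ns divides the junction voltage
# (per-cell diode model, an assumption of this sketch).
q, k = 1.602176634e-19, 1.380649e-23  # elementary charge, Boltzmann constant

def pv_current(V, Iph, Is=2.0463e-7, A=1.3579, Rs=4.5132e-5,
               Rp=9.3399e7, Ns=36, T=298.15):
    Vt = Ns * A * k * T / q              # modified thermal voltage of the array

    def residual(I):
        Vj = V + I * Rs                  # junction voltage
        return Iph - Is * (math.exp(Vj / Vt) - 1) - Vj / Rp - I

    lo, hi = -1.0, Iph + 1.0             # residual is monotone decreasing in I
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

With the catalog short-circuit current as Iph, pv_current(0, 7.65) returns approximately Isc and pv_current(17.7, 7.65) approximately Impp.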
The diode saturation current Is depends on the junction temperature and on the constant Iso, which is the diode saturation current at the reference temperature, at STC. Iso can be calculated if the PV is at STC and in open-circuit operation, which makes the output current null. Under these conditions, the current provided by the photovoltaic effect is the short-circuit current, and Equation 2 can be manipulated to give Equation 3:

$$I_{so} = \frac{I_{sc} - \dfrac{V_{oc}}{R_p}}{e^{\frac{q}{AkT_r}V_{oc}} - 1} \quad (3)$$
With Iso in hand, and now imposing the MPP at the STC conditions, Equation 2 can be manipulated once more, isolating the diode ideality constant A, given by Equation 4:

$$A = \frac{V_{mpp} - V_{oc}}{\dfrac{kT_r}{q}\,\ln\!\left(I_{sc} - I_{mpp} - \dfrac{V_{mpp}}{R_p}\right)} \quad (4)$$
It is important to note that these constants depend on the value of Rp, which will be considered infinite. Section 2.2 shows how the Rs and Rp values were calculated.
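With Rp taken as infinite, Equations 3 and 4 reduce to closed forms. The sketch below is ours; it additionally assumes the per-cell junction voltage (factor Ns = 36) and references the MPP exponential to the open-circuit one, which is what reproduces the Iso and A values listed in Table 2:

```python
import math

q, k = 1.602176634e-19, 1.380649e-23  # elementary charge, Boltzmann constant
Tr, Ns = 298.15, 36                    # STC temperature (K), series cells
Voc, Isc, Vmpp, Impp = 21.9, 7.65, 17.7, 7.38  # Table 1 catalog data

# Equation 4 with Rp -> infinity: diode ideality constant
A = (Vmpp - Voc) / (Ns * k * Tr / q * math.log((Isc - Impp) / Isc))

# Equation 3 with Rp -> infinity: saturation current at Tr
Iso = Isc / (math.exp(q * Voc / (A * Ns * k * Tr)) - 1)
```

Both results agree with Table 2 (A ≈ 1.3579, Iso ≈ 2.05 × 10^-7 A).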
2.2 Genetic Algorithms for Obtaining the PV Parameters
Genetic Algorithms (GA) are a powerful tool for solving optimization problems, including function minimization, as seen in (Zagrouba et al., 2010).

Like ANNs, GAs are inspired by biological behavior: a computational population is randomly initialized and subjected to a condition defined by the fitness function. While the population evolves, the
individual genes approach the best solution for the given function (Hadji et al., 2011).
In the GA used in this paper, the selection of individuals for crossover is done by a stochastic uniform approach. In this selection method, each individual has a probability of being chosen based on its fitness value. The algorithm sweeps, in equal steps, the whole range determined by the entire population, and one individual is selected at each step; an individual can be selected more than once. Two individuals are kept automatically for the next generation by elitism. The crossover method creates new individuals from a weighted average of the parents. Mutation occurs in some random individuals and points in a random direction.
To obtain the Rp and Rs values, a function to be minimized was defined using the two resistance values as variables. With the parameters given in the PV catalog (Vmpp, Impp, Pmpp and KTi, also shown in Table 1), Equation 2 can be applied at the MPP, resulting in Equation 5; multiplying it by Vmpp gives the power provided by the PV at the MPP. Using the catalog Pmpp along with Vmpp and Impp, Equation 6 emerges naturally as a cost function to be minimized, having the resistance values as variables.
$$I_{mpp} = I_{sc} - I_{so}\left(e^{\frac{q}{AkT_r}(V_{mpp} + I_{mpp} R_s)} - 1\right) - \frac{V_{mpp} + I_{mpp} R_s}{R_p} \quad (5)$$
$$J(R_s, R_p) = \left|P_{mpp} - V_{mpp}\,I_{mpp}(R_s, R_p)\right| \quad (6)$$

Where:
J: cost function to be minimized
Before starting the iterations to minimize the cost function J(Rs, Rp), the constants Iso and A must be defined. As they depend on Rp, an approximation must be made, considering the shunt resistance infinite. This eliminates the Rp dependence of Iso and A and gives values close enough to be used in the PV model. Iso and A are thus calculated and shown in Table 2. Using these values, Rs and Rp are finally found through the GA as the solution of the cost function. As the solution changes with every run, because the algorithm starts with a random population, the values presented in Table 2 were fixed for the following steps.
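The GA operators described above (stochastic uniform selection, elitism of two individuals, weighted-average crossover, random-direction mutation) can be sketched as follows. This is not the paper's MATLAB implementation: the selection weights are rank-based rather than raw fitness, the rates are illustrative, and a simple quadratic stands in for the cost J(Rs, Rp) of Equation 6.

```python
import random

def ga_minimize(cost, bounds, pop_size=50, generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)           # best individuals first
        # stochastic uniform selection: equal-step sweep over rank weights
        weights = [pop_size - i for i in range(pop_size)]
        step = sum(weights) / pop_size
        start = rng.uniform(0, step)
        parents, acc, idx = [], 0.0, 0
        for w, ind in zip(weights, scored):
            acc += w
            while idx < pop_size and start + idx * step <= acc:
                parents.append(ind)              # may be selected repeatedly
                idx += 1
        nxt = scored[:2]                         # elitism: keep the two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            a = rng.random()                     # weighted-average crossover
            child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.1:               # mutation: random direction
                j = rng.randrange(dim)
                child[j] += rng.gauss(0, 0.05 * (bounds[j][1] - bounds[j][0]))
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

# stand-in for J(Rs, Rp): a quadratic with its minimum at (3, -1)
def stand_in_cost(v):
    return (v[0] - 3) ** 2 + (v[1] + 1) ** 2

best = ga_minimize(stand_in_cost, [(-10, 10), (-10, 10)])
```

Elitism makes the best individual monotone non-worsening across generations, which is why the algorithm settles near the minimum of the cost surface.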
The values given in Table 2 show that Iso and A approximate the real PV values very well, since Iso is very small and A lies between 1 and 2, which are the desired conditions for both cases. For the resistances Rs and Rp, the values obtained through the GA also reflect a
Table 2: Obtained PV Parameters.

Parameter   Value            Unit
Rs          4.5132 × 10^-5   Ω
Rp          9.3399 × 10^7    Ω
Iso         2.0463 × 10^-7   A
A           1.3579           –
good PV representation, where Rs is very close to zero and Rp is large enough to be approximated as infinite.
These parameters from Table 2 will be used in the next sections to represent the PV in the MPPT analysis.
2.3 Artificial Neural Networks
An ANN is a group of artificial neurons, which are inspired by the behavior of biological neurons and are the simplest structure of an ANN. The artificial neuron is composed of a synaptic weight vector, an adder, a bias and an activation function. Its representation is shown in Figure 2, and its mathematical model in Equation 7.
Figure 2: Artificial neuron and its mathematical model.
$$u = X \cdot W - b, \qquad y = g(u) \quad (7)$$
Where:
X: neuron input vector
W: neuron weights vector
b: bias
u: net input
g: activation function
y: neuron output
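Equation 7 in code form, for a single neuron; the sigmoid used here as g(u) is one of the two activation functions evaluated later in this paper:

```python
import math

def neuron(x, w, b):
    """One artificial neuron: net input u = X.W - b, output y = g(u)."""
    u = sum(xi * wi for xi, wi in zip(x, w)) - b
    return 1.0 / (1.0 + math.exp(-u))   # g(u): sigmoid activation
```

For instance, neuron([1.0, 2.0], [0.5, 0.25], 1.0) gives u = 0 and therefore y = 0.5.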
For supervised training, an input-output pair is given: for each input sample, the desired output yd is known. Training then begins with an initial value of W for the input X, which results in an output y. Comparing y with the desired output yd gives the output error, and the process iterates trying to minimize this error.
It is important to know that the variables chosen as ANN inputs can drastically influence the ANN's internal structure. Therefore, adding or
removing input variables will result in a totally different ANN structure. However, even if two ANNs have different internal structures, they will not necessarily have different input-output behavior. So, a good practice is to evaluate the ANN behavior using different sets of input vectors and then analyze which of them better represents the real solution.
Compared with other MPPT techniques, an ANN has the advantages of reduced output oscillation (Bastos et al., 2012), fast convergence to the MPP (after the training phase) and reaching the real MPP, besides quickly readjusting the MPP even under abrupt changes in climatic conditions. On the other hand, an ANN has a high computational cost during the training phase, which makes its implementation difficult, besides being dependent on the PV used (Esram e Chapman, 2007).
2.4 Multi-Layer Perceptron
The Multi-Layer Perceptron (MLP) is an ANN architecture that uses multiple neuron layers between the input and the output. The MLP adopted in this paper is represented in Figure 3: one input layer, one hidden layer with n neurons, and an output layer with a single neuron. This combination of multiple neurons in multiple layers gives greater flexibility in dealing with complex and nonlinear problems (Mashaly et al., 1994).
For MLP training, the back-propagation algorithm is a commonly used method. It consists of two steps: the first calculates the ANN output and the second adjusts the synaptic weights of all the ANN's neurons. In the first step, called forward, the ANN calculates an output using the existing weight values without changing them; this output is compared with the desired output, generating the error. In the second step, called backward, the ANN propagates the error through the neurons in the opposite direction, from the output layer to the input layer, and the weights are then adjusted to minimize the error.
As this method is based on minimizing the error function through its derivatives, the activation function of the neurons must be continuous and differentiable. For this reason, activation functions like the step or sign functions are not applicable in this architecture.
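The two steps can be sketched as a minimal one-hidden-layer MLP trained by batch gradient descent. This is our illustration, not the paper's MATLAB network: it uses sigmoid hidden units with a linear output neuron, an arbitrary toy target (a sine bump standing in for the MPP surface), and illustrative layer size and learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, np.pi, size=(200, 1))   # toy inputs
Yd = np.sin(X)                               # desired outputs

n_hidden, lr = 8, 0.1
W1 = rng.normal(0, 1, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

for epoch in range(20000):
    # forward step: compute the output with the current weights, unchanged
    H = sig(X @ W1 + b1)
    Y = H @ W2 + b2
    E = Y - Yd                               # output error
    # backward step: propagate the error from output layer to input layer
    dW2 = H.T @ E / len(X); db2 = E.mean(0)
    dH = E @ W2.T * H * (1 - H)              # error term at the hidden layer
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2           # weight adjustment
    W1 -= lr * dW1; b1 -= lr * db1

mse = float((E ** 2).mean())
```

After training, the mean square error drops well below the variance of the target, confirming that the two-step loop minimizes the error as described.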
3 ANN MPPT Results
Only the MLP architecture was considered when obtaining the results, since the MLP in general has good performance. Also, a good advantage of using ANNs for
Figure 3: Generic representation of the ANN used in simulations.
MPPT algorithms is the reduction of oscillation in the output signal, as described in (Bastos et al., 2012). Variations of the ANN were imposed on its topology: two activation functions were used, and the number of hidden-layer neurons and the quantity of training samples were varied to analyze the ANN training performance. An important observation concerns how the climatic samples were obtained and analyzed.
The climatic database was provided by INPE (Instituto Nacional de Pesquisas Espaciais) through a project known as SONDA. All data were taken in Joinville, Brazil. Another important observation is that all executed simulations have the same initial weight vector. With the equivalent model, the PV can be simulated for any climatic condition, and its curves of current and power versus voltage are built. From these curves, the Maximum Power Point values can be extracted, along with the voltage at this point (Vmpp). The database is then complete, composed of the climatic variables and their MPP, desired as output.
Once all the climatic samples and their respective MPP values are available, the next step is to classify which input samples should be used in the ANN training phase and in its validation. Since the climatic variables change throughout the year, a good approach is to use variables taken along the four seasons.
That being said, the first classification was done by choosing samples from the first day of one month of each season. The months chosen were January, April, July and October, giving the input-output pairs for summer, autumn, winter and spring in the southern hemisphere, respectively. Other classifications separate the samples into two days per season, one entire month per season, one entire year (2009) and two entire years (2009 and 2011). This data classification is not the best method to select data for ANN training; a better statistical analysis could probably be used in order
to achieve a better classification of the samples. However, the intent of this work is to show that the ANN can be trained even with a non-optimized set of input-output data, as long as this data has enough variability to represent the behavior of the system.
To validate the ANN results, the desired output values were also built by MATLAB® simulations using climatic values not used previously, with samples from the year 2012. Two activation functions were used: the sigmoid function and the hyperbolic tangent function. The number of neurons was fixed at 4, 5, 8 and 9, values determined after initial simulations varying the number of hidden-layer neurons in a range from 2 to 10; the best four results were taken to compose the final results. To better understand how the data results were obtained, a sampling tree with the collected data is shown in Figure 4.
Figure 4: Sampling tree for building ANN results.
To analyze the ANN methods, the data results were composed of three factors: the final mse (mean square error) obtained in the validation phase, the number of epochs "k" taken by the ANN to converge to a solution (the tolerance was fixed at 10^-3 for the mse in the training phase) and the time for algorithm convergence. The results for both functions, varying the number of neurons and the number of training samples, can be observed in Figures 5, 6, 7 and 8. Figures 5 and 6 show the behavior of the number of epochs, while Figures 7 and 8 show the algorithm execution times.
The first observation is that the hyperbolic tangent presents better values than the sigmoid function, in terms of number of epochs for convergence and algorithm execution time, independently of the number of neurons or training samples. Another important tendency is that, in general, the algorithm execution time and the number of epochs for convergence decrease as the number of training samples increases. The behavior with respect to the number of neurons can also be evaluated: as the number of neurons increases, there is initially a reduction in execution time, followed by a small increase.
Figure 5: Number of training epochs for the sigmoid function.
Figure 6: Number of training epochs for the hyperbolic tangent function.
Figure 7: Time to execute the training for the sigmoid function.
Figure 8: Time to execute the training for the hyperbolic tangent function.
Finally, Figures 9 and 10 represent the behavior of the average mean square error (amse). The mean square error is the sum of the squared errors of all output samples divided by the number of samples, as shown in Equation 8. As the mse values do not vary substantially with the number of neurons, but only with the number of training samples, the mse was averaged over the number of neurons for each function.
$$mse = \frac{\sum_{s=1}^{n}\left(y^{(s)} - y_d^{(s)}\right)^2}{n} \quad (8)$$
This average of the mse over the number of neurons, for each function, results in the amse, a value that represents the mse as dependent only on the number of samples and almost constant for any number of neurons. The amse clearly increases with the number of training samples, for both functions. Another interesting fact is that the total mean of the mse is larger for the hyperbolic tangent function than for the sigmoid function; nevertheless, the difference between the amse of the two functions is sufficiently small.
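For reference, Equation 8 and the averaging that produces the amse look as follows in code; the sample values in the usage line are illustrative, not the paper's data:

```python
def mse(y, yd):
    """Equation 8: mean square error between outputs y and desired yd."""
    return sum((a - b) ** 2 for a, b in zip(y, yd)) / len(y)

def amse(mse_values):
    """Average of the mse over the neuron counts for one sample size."""
    return sum(mse_values) / len(mse_values)
```

For example, mse([1, 2, 3], [1, 2, 5]) gives 4/3, and amse([0.02, 0.04]) gives 0.03.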
Figure 9: Average mean square error for the sigmoid function.
Figure 10: Average mean square error for the hyperbolic tangent function.
4 Conclusions
In this work, a comparison between different ANN topologies was shown, along with the obtainment of the PV parameters. It could be seen that the number of training epochs decreases as the number of training samples increases. The training time also decreases as the number of samples increases, but beyond a certain value the cost-benefit of using larger sample quantities decreases.
A higher number of neurons can also increase the execution time of the algorithm, but using too few neurons is not recommended, because the algorithm may not converge, or may take longer to converge. The mse increases as the number of training samples increases; this phenomenon is known as overtraining, as described in (Balzani e Reatti, 2005), and was observed in the results. The hyperbolic tangent function has superior computational performance, but also a slightly larger error than the sigmoid function. As seen in the results, however, the smaller computational effort compensates for this very small difference in the ANN
mse for PVs.
The techniques shown in this paper can be used to identify the PV parameters and extract the maximum power from a given PV. It is important to note that, with one system to find the PV parameters and another to perform the MPPT, and since the PV parameters may change with time, the system could adapt over the years. This is a good advantage of this model, especially when the system is used in remote places where maintenance is difficult.
Acknowledgement
Special thanks to INPE (Instituto Nacional de Pesquisas Espaciais) for all the provided climatic data and to CAPES (Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior).
References
Balzani, M. e Reatti, A. (2005). Neural Network Based Model of a PV Array For The Optimum Performance Of PV System, Research in Microelectronics and Electronics.

Bastos, R. F., Mocambique, N. E. M., Machado, R. Q. e Aguiar, C. R. (2012). Rede Neural Artificial Aplicada Na Busca Do Ponto De Maxima Potencia Em Paineis Fotovoltaicos, XIX CBA.

Carrijo, D. R., Ferreira, R. S., Guimaraes, S. C. e Camacho, J. R. (2010). Uma Proposta De Tecnica De Rastreamento Do Ponto De Maxima Potencia De Um Painel Fotovoltaico, XVIII CBA.

Esram, T. e Chapman, P. L. (2007). Comparison of Photovoltaic Array Maximum Power Point Tracking Techniques, IEEE Transactions On Energy Conversion.

Hadji, S., Gaubert, J.-P. e Krim, F. (2011). Genetic algorithms for maximum power point tracking in photovoltaic systems, European Conference on Power Electronics and Applications.

Mahdi, A. J., Tang, W. H. e Wu, Q. H. (2010). Improvement of a MPPT Algorithm for PV Systems and Its Experimental Validation, International Conference on Renewable Energies and Power Quality.

Mashaly, H. M., Sharaf, A. M., Mansour, M. e El-Sattar, A. A. (1994). A Photovoltaic Maximum Power Tracking Using Neural Networks, IEEE CCA.

Villalva, M. G., Gazoli, J. R. e Filho, E. R. (2009). Comprehensive Approach to Modeling and Simulation of Photovoltaic Arrays, IEEE Transactions On Power Electronics.

Zagrouba, M., Sellami, A., Bouaïcha, M. e Ksouri, M. (2010). Identification of PV solar cells and modules parameters using the genetic algorithms: Application to maximum power extraction, Solar Energy.