
IEEE TRANSACTIONS ON MAGNETICS, VOL. 50, NO. 2, FEBRUARY 2014 7001504

A New Neural Predictor for ELF Magnetic Field Strength

Salvatore Coco¹, Antonino Laudani², Francesco Riganti Fulginei², and Alessandro Salvini²

¹DIEEI, University of Catania, Catania I-95125, Italy
²Department of Engineering, University of Roma Tre, Rome I-00146, Italy

A new neural predictor (NP) is presented for the evaluation of the magnetic field distribution at extremely low frequency in indoor or outdoor environments. The NP evaluates the magnetic field strength in the environment under analysis from a reduced number of measurements, achieving a high degree of immunity to the noise present in the measurements. To achieve good performance, ad hoc strategies have been adopted for the setup and the training of the neural networks. The NP has been tested on a real environment, and the obtained results exhibit a close agreement with the measured field values.

Index Terms— Artificial neural networks (NNs), magnetic field measurement, magnetic shielding, predictive models.

I. INTRODUCTION

In recent years, the widespread use of electric energy has generated increasing interest in studying possible interactions between electromagnetic fields and human beings. In particular, attention has focused on fields acting at extremely low frequencies (ELFs), that is, in the range 0–3 kHz [1], to protect the users of electric devices against nonionizing low-frequency magnetic fields. Indeed, even though currently available research has not proven a clear association between electromagnetic field exposure and biological effects, several international standards institutions have proposed guidelines and standards imposing exposure limits [1]–[3]. With the aim of accurately evaluating the ELF magnetic field around real appliances, equivalent source (ES) models and neural networks (NNs) have been widely investigated ([4]–[10] and references therein). Unfortunately, although the possibility of computing the magnetic field distribution by means of ESs appears very attractive, ES models show some limitations in the characterization of the field strength in complex environments. This is due to the impossibility of finding a precise ES model for a large environment starting from a reduced number of measurements. In addition, these measurements are affected by a certain amount of noise, due to the instrumentation and its placement, which must be extremely precise for the measurements to be usable in solving the inverse problem related to the search for the ES model. On the other hand, NNs allow a high degree of immunity to noise to be achieved; but, to overcome the problem of the few available measurements, the neural predictors (NPs) proposed in [6] and [8]–[10] usually require the generation of additional training data by means of analytical or numerical models: thus, the NN setup becomes extremely complicated, and the obtained results are not significantly better than those returned by ES models, i.e., the NN approach is no longer competitive.

In this paper, the authors present a new NP for the evaluation of the ELF magnetic field distribution which does not require additional training data, thanks to the adoption of advanced setup/training techniques. Indeed, the NP has been implemented by following an ad hoc strategy (bootstrap/aggregated NNs) to improve the quality of the results and achieve immunity to the noise that strongly affects the data coming from sensors. Thus, it has been possible to train the NP only by means of a reduced number of measurements taken at suitable accessible points distributed in an environment. The NP has been validated on a real environment, and the obtained results have been compared with those coming from models employing magnetic dipoles as ESs, showing that NNs are more convenient for dealing with this kind of problem.

Manuscript received June 28, 2013; revised September 4, 2013; accepted September 16, 2013. Date of current version February 21, 2014. Corresponding author: A. Laudani (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TMAG.2013.2283022

II. NP FOR ELF MAGNETIC FIELDS

The NP proposed herein is based on a set of aggregated feedforward multilayer perceptron NNs and does not require any information about the sources. The inputs are simply the coordinates of the prediction points, whereas the outputs are the three components of the magnetic field strength H or of the magnetic induction B at those coordinates (see Fig. 1), according to the measured field (H or B) used in the setup of the NP. Since the NP must return predictions by managing a small number of measurements available from sensors, its setup is not trivial. Indeed, the problem to overcome is not only the smallness of the data set, but also the noise affecting the data. The best observed solution is based on a bootstrap/aggregation approach, which has returned the best tradeoff between accuracy and setup cost of the NP. This result has been obtained after investigating different candidate solutions for this twofold problem (i.e., few data affected by noise):

1) use of bootstrap/aggregated NNs;
2) expansion of the training data set by adding white noise to the measurements;
3) combination of techniques 1) and 2).

Let us briefly describe the first two investigated solutions.

1) The bootstrap/aggregated NNs are a cluster of NNs [see Fig. 1(b)] trained in a suitable way [11]. The term bootstrap means that all the NNs share the same whole training set available for the cluster, but it is randomly partitioned/permuted into different training sets, each of which is dedicated to a single NN of the cluster.

0018-9464 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


Fig. 1. NP input/output schematization. (a) Single NN and (b) aggregated configuration of NNs observed as best solution for the NP.

The term aggregated means that the global output is obtained by a linear combination of the outputs of the single NNs of the cluster, suitably sorted.
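In outline, the bootstrap/aggregation scheme of item 1) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: an ordinary least-squares model stands in for each trained NN, the aggregation uses a plain average, and all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_member(X, y):
    # Stand-in for training one NN of the cluster: a least-squares
    # linear model (hypothetical placeholder, not the MLP of the paper).
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_member(w, X):
    return np.c_[X, np.ones(len(X))] @ w

def bootstrap_aggregate(X, y, n_members=30, subset_frac=0.8):
    """Bootstrap: each member trains on a random permutation/subset of the
    shared training set. Aggregation: the global output is a linear
    combination (here, a plain average) of the member outputs."""
    n_sub = int(subset_frac * len(X))
    members = []
    for _ in range(n_members):
        idx = rng.permutation(len(X))[:n_sub]   # random partition/permutation
        members.append(train_member(X[idx], y[idx]))
    def aggregated(Xq):
        return np.mean([predict_member(w, Xq) for w in members], axis=0)
    return aggregated

# Toy data in the spirit of the paper: 3-D coordinates -> 3 field components.
X = rng.uniform(0.0, 7.0, size=(140, 3))              # 140 training points
W_true = rng.normal(size=(3, 3))
y = X @ W_true + 0.05 * rng.normal(size=(140, 3))     # noisy linear targets
predictor = bootstrap_aggregate(X, y)
y_hat = predictor(X)
train_mse = float(np.mean((y_hat - y) ** 2))
```

Because each member sees a different permutation of the same small data set, their individual errors are partly decorrelated, and averaging suppresses them; this is the mechanism exploited by the NP.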

2) The expansion of the training set using additive white noise is a technique used in the literature to enhance NN prediction performance [12], [13]. Starting from K original samples x, M × K samples are generated using M random Gaussian noise vectors Nᵢ(σ²), i = 1, ..., M, with fixed variance σ², and adding these to the original samples x.
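A minimal sketch of this expansion (NumPy; the helper name is ours), matching the 10 × 140 = 1400 samples used later in Test #2:

```python
import numpy as np

rng = np.random.default_rng(1)

def expand_with_noise(x, M=10, sigma2=1e-4):
    """From K original samples, generate M * K samples by adding M random
    zero-mean Gaussian noise vectors N_i(sigma^2), i = 1..M, to x."""
    sigma = np.sqrt(sigma2)
    noisy = [x + rng.normal(0.0, sigma, size=x.shape) for _ in range(M)]
    return np.concatenate(noisy, axis=0)          # shape (M * K, ...)

x = rng.uniform(size=(140, 3))                    # e.g., 140 measured samples
x_ext = expand_with_noise(x, M=10, sigma2=1e-4)   # 1400 noisy samples
```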

In the following section, the comparison of these different solutions will be discussed for predicting the magnetic field in a typical indoor environment application.

III. VALIDATION ON EXPERIMENTAL DATA

To test the capabilities of the proposed NP, herein we compare its performance with that of an ES model. With this goal, we address the problem of predicting the ELF magnetic field distribution in the neighborhood of an electric plant close to a small classroom (size 7 m × 7 m) located at the University of Catania. The prediction starts from a few measurements taken on a regular grid built along two surfaces parallel to the wall separating the investigated area from the room containing the high-voltage electrical equipment. The instrumentation used for measuring was the Narda PMM 8053 with sensor EHP-50C, working in the frequency range 5 Hz–100 kHz. Since a preliminary analysis of the spectrum of the magnetic field had shown a dominant component at 50 Hz, only this component was acquired and measured. The ranges of the Cartesian components of the magnetic induction were 0.12–4.75 μT for Bx, 0.21–4.85 μT for By, and 0.20–6.66 μT for Bz. The whole set of available measurements (exactly 180 samples, shown in the following tests as all measurements) has been split into two sets: 1) the training data set (140 samples), used to train the NNs and build the ES model, and 2) the test data set (40 samples), used to compare the generalization performance of the NNs and ES models. In all the tests, a single-hidden-layer feedforward NN containing NHL neurons with tansig activation function has been adopted; the training has been performed according to the procedures described in [14] and [15]. A mean-squared-error (MSE) threshold (defined as MSE = (1/N) Σ_{i=1}^{N} (Ŷᵢ − Yᵢ)², where Ŷ is a vector of N predictions and Y is the vector of the true values) equal to 10⁻⁴ and a maximum of 1000 epochs have been used as alternative stopping criteria. However, in all performed tests, the training of each NN was always stopped by reaching the fixed maximum number of epochs, i.e., no training run reached the target MSE value.

Fig. 2. MSE on test, training, and whole measurement data sets at varying numbers of ESs.
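The experimental protocol above (140/40 split, MSE criterion, dual stopping rule) can be sketched as follows. These helpers are ours for illustration; the actual training procedures are those of [14] and [15], which we do not reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

def mse(y_pred, y_true):
    # MSE = (1/N) * sum_i (Yhat_i - Y_i)^2, the error measure of Sec. III.
    d = np.asarray(y_pred, float) - np.asarray(y_true, float)
    return float(np.mean(d ** 2))

def should_stop(current_mse, epoch, mse_threshold=1e-4, max_epochs=1000):
    # The two alternative stopping criteria used in all the tests:
    # an MSE threshold of 1e-4 OR a maximum of 1000 epochs.
    return current_mse <= mse_threshold or epoch >= max_epochs

# Split of the 180 available measurements into 140 training / 40 test samples.
idx = rng.permutation(180)
train_idx, test_idx = idx[:140], idx[140:]
```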

A. Magnetic Dipole as ES

Several different ES models have been proposed in the literature with remarkable results [16]–[18]: our choice has been the use of magnetic dipoles for their flexibility. Moreover, the implementation of the formula used for computing the magnetic field is extremely simple. Indeed, the magnetic induction of an elementary magnetic dipole is given by

B(P) = (μ0 / 4π) [ 3 (m · R) R / R⁵ − m / R³ ]    (1)

where the vector R goes from the center of the magnetic dipole to the evaluation point P, and m is the magnetic dipole moment. The ES model setup is performed by solving an inverse problem consisting in the estimation of the position and the components of the magnetic moment of each ES. To solve this inverse problem, the iterative procedure presented in [19] and [20] has been adopted, adapted for use with magnetic dipoles. This procedure minimizes the MSE between the measured magnetic field and the computed ES field, while estimating the optimal (minimum) number of ESs to be used (a detailed description can be found in [19]). By following this approach, we have been able to find the ES model for the prediction of the magnetic field inside the classroom. The results in terms of MSE on the test, training (the one used for the solution of the inverse problem), and whole measurement data sets (all measurements) are shown in Fig. 2 versus the number of ESs. As can be noted, the minimum MSE on the test set has been obtained for 16 ESs (MSE = 0.115). Using more than 16 ESs, instead, the test-set MSE increases, i.e., the prediction capability of the ES model worsens. In addition, the best MSE value related to the whole data set (MSE = 0.089) has been obtained for 35 ESs, i.e., the maximum number of sources previously fixed for the simulation. Moreover, 35 magnetic dipoles involve the management of 35 × 6 = 210 parameters in the inverse problem solution: a really hard task. The implemented procedure requires about 24 h for the solution of the inverse problem. However, even using 35 ESs, the MSE on the test set remained above 0.2: a further proof of the ineffectiveness of ES models for the prediction of ELF fields inside large environments.

TABLE I. MSEs on test set (minimum, mean, median, and standard deviation for 30 trained NNs) versus NHL.
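For reference, eq. (1) and a simplified version of the ES fit can be sketched as follows. We use the standard SI dipole expression, including the 1/(4π) factor; and, as a stand-in for the iterative procedure of [19] (which also optimizes the dipole positions), we exploit the fact that, for fixed positions, the field is linear in the moments, so they can be recovered by least squares. All function names are ours.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def dipole_B(m, r_dip, p):
    """Magnetic induction of an elementary dipole of moment m placed at
    r_dip, evaluated at point p (standard SI form of eq. (1))."""
    R = np.asarray(p, float) - np.asarray(r_dip, float)  # dipole center -> P
    Rn = np.linalg.norm(R)
    m = np.asarray(m, float)
    return MU0 / (4 * np.pi) * (3 * np.dot(m, R) * R / Rn**5 - m / Rn**3)

def fit_moments(positions, points, B_meas):
    """Least-squares fit of the dipole moments for FIXED dipole positions:
    a simplified stand-in for the inverse problem of Sec. III-A."""
    n_d, n_p = len(positions), len(points)
    A = np.zeros((3 * n_p, 3 * n_d))
    for i, p in enumerate(points):
        for j, rd in enumerate(positions):
            for k in range(3):                 # unit-moment basis columns
                e = np.zeros(3)
                e[k] = 1.0
                A[3 * i:3 * i + 3, 3 * j + k] = dipole_B(e, rd, p)
    m, *_ = np.linalg.lstsq(A, np.asarray(B_meas, float).ravel(), rcond=None)
    return m.reshape(n_d, 3)

# Synthetic check: recover a known moment from noise-free "measurements".
true_m = np.array([1.0, -2.0, 0.5])
pts = [np.array(p, float) for p in
       [(1, 2, 3), (2, 1, 1), (-1, 2, 2), (3, 3, 1), (1, -2, 2)]]
B_meas = [dipole_B(true_m, np.zeros(3), p) for p in pts]
m_fit = fit_moments([np.zeros(3)], pts, B_meas)
```

In the paper's full problem, the positions are unknown as well, giving 35 × 6 = 210 parameters for 35 dipoles; this nonlinearity is what makes the iterative procedure of [19] necessary.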

B. Bootstrap/Aggregated NNs Applied to ELF Magnetic Fields (Test #1)

In this section, we show the steps followed for the practical implementation of the NP based on bootstrap/aggregated NNs. The NN test performed is twofold: 1) estimation of the effect of the number of neurons in the hidden layer, NHL (NHL = 10, 12, ..., 22), on result quality and 2) evaluation of the benefits on result quality of using bootstrap/aggregated NNs [11]. In particular, for the bootstrap NNs, we have adopted 30 different permutations of the same training set, i.e., 30 different NNs have been trained for each value of NHL, each referring to a different permutation. The statistical results in terms of MSE on the test set for these NNs are summarized in Table I. As can be seen, the best performance in terms of minimum MSE is obtained for NHL = 20, but very similar results are achieved by NNs with NHL = 14, 16, and 18. In particular, it is worth noticing that the higher values of the minimum MSE, as well as the higher standard deviation, for NNs with NHL = 22 suggest that overtraining occurs. To evaluate the effect of the bootstrap/aggregation technique, which is the second goal of the test, the previously trained NNs have been aggregated: in practice, for every value of NHL (from 14 to 20), the 30 NNs were sorted by best performance and then aggregated. The obtained results in terms of MSE on the test set are shown in Fig. 3. It is possible to observe that the bootstrap/aggregation technique significantly improves the prediction: even if each NN makes errors at different points, these errors are not located at the same points for all NNs. Thus, from a global point of view, with aggregated NNs, the error is strongly reduced, and it is now possible to achieve MSEs on the test set equal to 0.095 for 12 aggregated NNs with NHL = 14; 0.107 for three aggregated NNs with NHL = 20 and for seven aggregated NNs with NHL = 18; and 0.112 for 19 aggregated NNs with NHL = 16. All these values, even the worst one, are better than the best MSE obtained by means of the ES model. With regard to the computational cost of the NP based on bootstrap/aggregated NNs, it is a bit higher than that of a single NN because of the NN aggregation. Anyway, it is still quite acceptable (about 2 s to compute all the 180 values used).

Fig. 3. MSE on test set obtained by different NPs versus the number of aggregated NNs.
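The sort-then-aggregate step can be sketched as follows (NumPy, with synthetic member predictions standing in for the 30 trained NNs; names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def aggregate_best_k(preds, y_ref, k):
    """Sort member predictions by individual MSE, then aggregate (average)
    the best k of them, as in Test #1."""
    order = np.argsort([mse(p, y_ref) for p in preds])
    return np.mean([preds[i] for i in order[:k]], axis=0)

# Synthetic stand-in: 30 members, each wrong at different points --
# the situation that makes aggregation effective.
y_test = rng.normal(size=40)                         # 40 test samples
preds = [y_test + 0.3 * rng.normal(size=40) for _ in range(30)]

best_single = min(mse(p, y_test) for p in preds)
agg_mse = mse(aggregate_best_k(preds, y_test, k=12), y_test)
```

Because the members' errors are largely independent, averaging k of them reduces the error variance roughly by a factor of k, which is the mechanism behind the improvement seen in Fig. 3.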

C. Application of Single NN With Noise Added to the Data Set (Test #2)

The second test performed regards the expansion of the training data set using additive noise [12], [13]. An enlarged data set has been obtained using ten random white noise vectors with variance σ² = 10⁻⁴ and adding them to the original 140 samples: thus, the noise-extended training data set has overall 1400 samples. Also in this case, several NNs have been trained using 30 permutations of the training data set and varying NHL in the range 10–40 (a wider training set may require a higher value of NHL). As stated before, all the epochs (1000) were executed during the training, and the elapsed time for training a single NN was equal to 100 s. The prediction performances are shown in Fig. 4: the dots represent the MSE obtained at each training (both on the test and the whole measurement data set); the solid lines, instead, link the MSEs of the best performing NNs in terms of MSE on the test set (the best NNs, in terms of generalization capability, are those with the best MSE on the test set). The best MSE was obtained using NHL = 21 and was equal to 0.1202: this value is better than the one obtained by standard training (0.1292), i.e., without added noise. This demonstrates that, by adding noise, it is possible to slightly improve the performance of the NP, especially in those cases in which only small data sets are available. However, the performance is worse than the one obtained using bootstrap/aggregated NNs. In addition, the time required for the training is not negligible due to the large training data set employed.

Fig. 4. MSE on test set and on the whole measurement data set versus the number of neurons of the hidden layer. Dots: MSE of each training. Solid lines: link the MSE of the best performing NNs.

D. Noise Added and Bootstrap/Aggregated NNs (Test #3)

In this section, the performance obtained using a combination of the two previous techniques is shown. The bootstrap has been made using 30 different permutations of the noise-expanded training set. We used the previously trained NNs of Test #2 and then repeated the bootstrap procedure used in Test #1. The trend of the MSE on the test set versus the number of aggregated NNs is similar to that shown in Fig. 3. For example, for NHL values within the range 20–24, we have measured an MSE on the test set equal to 0.1. Thus, the combination of the two approaches seems to be less incisive in improving the results, since they are quite similar to those obtained using only the bootstrap, as presented in Test #1. However, in one case the performance has been observed to be enhanced: if we sort all the NNs regardless of their NHL and select for aggregation those showing the best MSEs, the obtained MSE becomes equal to 0.073. This last value is the best result achieved in any of the tests.

IV. CONCLUSION

An NP based on aggregated NNs for the evaluation of ELF magnetic field maps in indoor/outdoor environments has been tested on a practical problem, regarding the prediction of magnetic fields in the proximity of electrical appliances. From the comparison of the results on the test set, it was possible to conclude that: 1) ES models are often inappropriate for this kind of prediction problem and 2) bootstrap/aggregated NNs are the most suitable solution, also in terms of the computational cost required to set up the NP. Finally, the obtained results exhibit a close agreement between predicted and measured field values.

REFERENCES

[1] IEEE Standard for Safety Levels With Respect to Human Exposure to Electromagnetic Fields, 0–3 kHz, IEEE Standard C95.6-2002, 2002.

[2] World Health Organization, "Extremely low frequency fields," Environmental Health Criteria Monograph, no. 238, 2007.

[3] ICNIRP, "Guidelines for limiting exposure to time-varying electric and magnetic fields (1 Hz to 100 kHz)," Health Phys., vol. 99, no. 6, pp. 818–836, 2010.

[4] K. Yamazaki and T. Kawamoto, "Simple estimation of equivalent magnetic dipole moment to characterize ELF magnetic fields generated by electric appliances incorporating harmonics," IEEE Trans. Electromagn. Compat., vol. 43, no. 2, pp. 240–245, May 2001.

[5] A. Canova, F. Freschi, M. Repetto, and M. Tartaglia, "Identification of an equivalent-source system for magnetic stray field evaluation," IEEE Trans. Power Del., vol. 24, no. 3, pp. 1352–1358, Jul. 2009.

[6] G. Capizzi, S. Coco, C. Giuffrida, A. Laudani, G. Pappalardo, and R. Pulvirenti, "An MLP predictor of ELF environmental magnetic fields pollution," in Neural Networks and Soft Computing, L. Rutkowski and J. Kacprzyk, Eds. Heidelberg, Germany: Physica-Verlag HD, 2003, pp. 814–819.

[7] I. Vilovic and N. Burum, "A comparison of MLP and RBF neural networks architectures for electromagnetic field prediction in indoor environments," in Proc. EUCAP, 2011, pp. 1719–1723.

[8] C. Belhadj and S. El-Ferik, "Electric and magnetic fields estimation for live transmission line right of way workers using artificial neural network," in Proc. 15th Int. Conf. ISAP, 2009, pp. 1–6.

[9] P. DhanaLakshmi, L. Kalaivani, and P. Subburaj, "Analysis of magnetic field distribution in power system using finite element method," in Proc. Int. Conf. ICCCET, 2011, pp. 394–399.

[10] V. Rankovic and J. Radulovic, "Environmental pollution by magnetic field around power lines," Int. J. Qual. Res., vol. 3, no. 3, pp. 269–273, 2009.

[11] R. Lanouette, J. Thibault, and J. L. Valade, "Process modeling with neural networks using small experimental datasets," Comput. Chem. Eng., vol. 23, no. 9, pp. 1167–1176, 1999.

[12] K. Matsuoka, "Noise injection into inputs in back-propagation learning," IEEE Trans. Syst., Man Cybern., vol. 22, no. 3, pp. 436–440, May/Jun. 1992.

[13] G. Capizzi, S. Coco, A. Laudani, and R. Pulvirenti, "A multilayer perceptron neural model for the differentiation of Laplacian 3-D finite-element solutions," IEEE Trans. Magn., vol. 39, no. 3, pp. 1277–1280, May 2003.

[14] F. Riganti Fulginei and A. Salvini, "Neural network approach for modelling hysteretic magnetic materials under distorted excitations," IEEE Trans. Magn., vol. 48, no. 2, pp. 307–310, Feb. 2012.

[15] F. Riganti Fulginei, A. Laudani, A. Salvini, and M. Parodi, "Automatic and parallel optimized learning for neural networks performing MIMO applications," Adv. Electr. Comput. Eng., vol. 13, no. 1, pp. 3–12, 2013.

[16] S. Paul, D. Bobba, N. Paudel, and J. Bird, "Source field modeling in air using magnetic charge sheets," IEEE Trans. Magn., vol. 48, no. 11, pp. 3879–3882, Nov. 2012.

[17] M. R. Barzegaran and O. Mohammed, "A generalized equivalent source model of AC electric machines for numerical electromagnetic field signature studies," IEEE Trans. Magn., vol. 48, no. 11, pp. 4440–4443, Nov. 2012.

[18] L. Schmerber, L. Rouve, and A. Foggia, "Spherical harmonic identification using a priori information about an electrical device," IEEE Trans. Magn., vol. 43, no. 4, pp. 1781–1784, Apr. 2007.

[19] S. Coco and A. Laudani, "An iterative procedure for equivalent source representation of focusing magnetic fields in TWT," Int. J. Comput. Math. Electr. Electron. Eng., vol. 26, no. 2, pp. 317–326, 2007.

[20] S. Coco, A. Laudani, F. Riganti Fulginei, and A. Salvini, "Accurate design of Helmholtz coils for ELF bioelectromagnetic interaction by means of continuous FSO," Int. J. Appl. Electromagn. Mech., vol. 39, no. 1, pp. 651–656, 2012.