Research Article

Aero Engine Fault Diagnosis Using an Optimized Extreme Learning Machine

Xinyi Yang,1 Shan Pang,2 Wei Shen,1 Xuesen Lin,1 Keyi Jiang,1 and Yonghua Wang1

1 Department of Aerocraft Engineering, Naval Aeronautical and Astronautical University, Yantai 264001, China
2 College of Information and Electrical Engineering, Ludong University, Yantai 264025, China

Correspondence should be addressed to Xinyi Yang; [email protected]

Received 1 June 2015; Accepted 13 October 2015

Academic Editor: Roger L. Davis

Copyright © 2016 Xinyi Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A new extreme learning machine optimized by quantum-behaved particle swarm optimization (QPSO) is developed in this paper. It uses QPSO to select optimal network parameters, including the number of hidden layer neurons, according to both the root mean square error on a validation data set and the norm of the output weights. The proposed Q-ELM was applied to real-world classification applications and a gas turbine fan engine diagnostic problem and was compared with two other optimized ELM methods and the original ELM, SVM, and BP methods. Results show that the proposed Q-ELM is a more reliable and suitable method than conventional neural networks and other ELM methods for the defect diagnosis of the gas turbine engine.

1. Introduction

The gas turbine engine is a complex system and has been used in many fields. One of the most important applications of the gas turbine engine is the propulsion system in aircraft. During its operation life, the gas turbine engine performance is affected by a lot of physical problems, including corrosion, erosion, fouling, and foreign object damage [1]. These may cause engine performance deterioration and engine faults.
Therefore, it is very important to develop an engine diagnostics system to detect and isolate engine faults for safe operation of an aircraft and reduced engine maintenance cost.

Engine fault diagnosis methods are mainly divided into two categories: model-based and data-driven techniques. Model-based techniques have advantages in terms of on-board implementation considerations. But they need an engine mathematical model, and their reliability often decreases as the system nonlinear complexities and modeling uncertainties increase [2]. On the other hand, data-driven approaches do not need any system model and primarily rely on collected historical data from the engine sensors. They show great advantage over model-based techniques in many engine diagnostics applications. Among these data-driven approaches, the artificial neural network (ANN) [3, 4] and the support vector machine (SVM) [5, 6] are two of the most commonly used techniques.

Applications of neural networks and SVM in engine fault diagnosis have been widely studied in the literature. Zedda and Singh [7] proposed a modular diagnostic system for a dual spool turbofan gas turbine using neural networks. Romessis et al. [8] applied a probabilistic neural network (PNN) to diagnose faults on turbofan engines. Volponi et al. [9] applied Kalman filter and neural network methodologies to gas turbine performance diagnostics. Vanini et al. [2] developed a fault detection and isolation (FDI) scheme for an aircraft jet engine; the proposed FDI system utilizes dynamic neural networks (DNNs) to simulate different operating modes of the healthy engine or the faulty condition of the jet engine. Lee et al. [10] proposed a hybrid method of an artificial neural network combined with a support vector machine and applied the method to the defect diagnostics of a SUAV gas turbine engine.
International Journal of Aerospace Engineering, Volume 2016, Article ID 7892875, 10 pages. http://dx.doi.org/10.1155/2016/7892875

However, conventional ANN has some weak points: it needs many training data, and the traditional learning algorithms are usually far slower than required. It may fall into local minima instead of the global minimum. In the case of gas turbine engine diagnostics, moreover, the operating range is very wide. If the conventional ANN is applied to this



case, the classification performance may decrease because of the increasing nonlinearity of engine behavior in a wide operating range [11].

In recent years, a novel learning algorithm for single hidden layer neural networks called extreme learning machine (ELM) has been proposed and shows better performance on classification problems than many conventional ANN learning algorithms and SVM [12-14]. In ELM, the input weights and hidden biases are randomly generated, and the output weights are calculated by the Moore-Penrose (MP) generalized inverse. ELM learns much faster with higher generalization performance than the traditional gradient-based learning algorithms, such as back-propagation and the Levenberg-Marquardt method. Also, ELM avoids many problems faced by traditional gradient-based learning algorithms, such as stopping criteria, learning rate, and the local minima problem.

Therefore, ELM should be a promising method for gas turbine engine diagnostics. However, ELM may require more hidden neurons than traditional gradient-based learning algorithms and lead to an ill-conditioned problem because of the random selection of the input weights and hidden biases [15]. To address these problems, in this paper we propose an optimized ELM using quantum-behaved particle swarm optimization (Q-ELM) and apply it to the fault diagnostics of a gas turbine fan engine.

The rest of the paper is organized as follows. Section 2 gives a brief review of ELM. The QPSO algorithm is overviewed in Section 3. Section 4 presents the proposed Q-ELM. Section 5 compares Q-ELM with other methods on three real-world classification applications. In Section 6, Q-ELM is applied to gas turbine fan engine component fault diagnostics applications, followed by the conclusions in Section 7.

2. Brief of Extreme Learning Machine

Extreme learning machine was proposed by Huang et al. [12]. For N arbitrary distinct samples (x_i, t_i), where x_i = [x_i1, x_i2, ..., x_in]^T ∈ R^n and t_i = [t_i1, t_i2, ..., t_im]^T ∈ R^m, a standard SLFN with K hidden neurons and activation function g(x) can approximate these N samples with zero error, which means that

H β = T, (1)

where H = {h_ij} (i = 1, ..., N and j = 1, ..., K) is the hidden layer output matrix; h_ij = g(w_j · x_i + b_j) denotes the output of the jth hidden neuron with respect to x_i; w_j = [w_j1, w_j2, ..., w_jn]^T is the weight vector connecting the jth hidden neuron and the input neurons; b_j denotes the bias of the jth hidden neuron; and w_j · x_i is the inner product of w_j and x_i. β = [β_1, β_2, ..., β_K]^T is the matrix of output weights, where β_j = [β_j1, β_j2, ..., β_jm]^T (j = 1, ..., K) is the weight vector connecting the jth hidden neuron and the output neurons. And T = [t_1, t_2, ..., t_N]^T is the matrix of desired outputs.

Therefore, the determination of the output weights is to find the least squares (LS) solution to the given linear system. The minimum norm LS solution to linear system (1) is

β̂ = H⁺ T, (2)

where H⁺ is the MP generalized inverse of matrix H. The minimum norm LS solution is unique and has the smallest norm among all the LS solutions. ELM uses the MP inverse method to obtain good generalization performance with dramatically increased learning speed.
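As a concrete illustration, the training procedure above can be sketched in a few lines of NumPy. This is a sketch only: the function names and the sigmoid activation are illustrative choices, and the paper's own experiments were run in MATLAB.

```python
# Minimal ELM sketch: random input weights and biases, output weights by
# the Moore-Penrose pseudoinverse (beta = H+ T), per eqs. (1)-(2).
import numpy as np

def elm_train(X, T, K, seed=0):
    """X: (N, n) inputs; T: (N, m) targets; K: hidden neurons."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(K, n))    # input weights w_j
    b = rng.uniform(-1.0, 1.0, size=K)         # hidden biases b_j
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ T               # minimum-norm LS solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta
```

With K comparable to or larger than the number of training samples, the pseudoinverse solution typically drives the training error close to zero, which is the zero-error approximation property quoted above.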

3. Brief of Quantum-Behaved Particle Swarm Optimization

Recently, some population-based optimization algorithms have been applied to real-world optimization applications and show better performance than traditional optimization methods. Among them, genetic algorithm (GA) and particle swarm optimization (PSO) are the two most widely used algorithms. GA was originally motivated by Darwin's natural evolution theory: it repeatedly modifies a population of individual solutions by three genetic operators, the selection, crossover, and mutation operators. PSO, on the other hand, was inspired by the social behavior of bird flocking. However, unlike GA, PSO does not need any genetic operators and is simple to use compared with GA. The dynamics of the population in PSO resembles the collective behavior of socially intelligent organisms. However, PSO has some problems, such as premature or local convergence, and is not a global optimization algorithm.

QPSO is a novel optimization algorithm inspired by the fundamental theory of particle swarm optimization and features of quantum mechanics [16]. The introduction of quantum mechanics helps to diversify the population and ameliorate convergence by maintaining more attractors. Thus it improves QPSO's performance, solves the premature or local convergence problem of PSO, and shows better performance than PSO in many applications [17]. Therefore it is more suitable for ELM parameter optimization than GA and PSO.

In QPSO, the state of a particle y is depicted by the Schrödinger wave function ψ(y, t) instead of position and velocity. The dynamic behavior of the particle is widely divergent from classical PSO in that the exact values of position and velocity cannot be determined simultaneously. The probability of the particle's appearing in a position can be calculated from the probability density function |ψ(y, t)|², the form of which depends on the potential field the particle lies in. Employing the Monte Carlo method, the ith particle y_i from the population moves according to the following iterative equation:

y_ij(t + 1) = P_ij(t) − β · (mBest_j(t) − y_ij(t)) · ln(1/u_ij(t)), if k ≥ 0.5,
y_ij(t + 1) = P_ij(t) + β · (mBest_j(t) − y_ij(t)) · ln(1/u_ij(t)), if k < 0.5,
(3)


where y_ij(t + 1) is the position of the ith particle with respect to the jth dimension in iteration t. P_ij is the local attractor of the ith particle in the jth dimension and is defined as

P_ij(t) = φ_j(t) · pBest_ij(t) + (1 − φ_j(t)) · gBest_j(t), (4)

mBest_j(t) = (1/N_p) Σ_{i=1}^{N_p} pBest_ij(t), (5)

where N_p is the number of particles and pBest_i represents the best previous position of the ith particle; gBest is the global best position of the particle swarm; mBest is the mean best position, defined as the mean of all the best positions of the population; k, u, and φ are random numbers distributed uniformly in [0, 1]; and β is called the contraction-expansion coefficient and is used to control the convergence speed of the algorithm.

4. Extreme Learning Machine Optimized by QPSO

Because the output weights in ELM are calculated using random input weights and hidden biases, there may exist a set of nonoptimal or even unnecessary input weights and hidden neurons. As a result, ELM may need more hidden neurons than conventional gradient-based learning algorithms and lead to an ill-conditioned hidden output matrix, which would cause worse generalization performance.

In this section, we propose a new algorithm named Q-ELM to solve these problems. Unlike some other optimized ELM algorithms, the proposed algorithm uses QPSO to optimize not only the input weights and hidden biases but also the structure of the neural network (the number of hidden layer neurons). The detailed steps of the proposed algorithm are as follows.

Step 1 (initializing). Firstly, we generate the population randomly. Each particle in the population is constituted by a set of input weights, hidden biases, and s-variables:

p_i = [w_11, ..., w_Kn, b_1, ..., b_K, s_1, ..., s_K], (6)

where s_i (i = 1, ..., K) is a variable which defines the structure of the network. As illustrated in Figure 1, if s_i = 0, then the ith hidden neuron is not considered; otherwise, if s_i = 1, the ith hidden neuron is retained and the sigmoid function is used as its activation function.

All components constituting a particle are randomly initialized within the range [0, 1].

Step 2 (fitness evaluation). The corresponding output weights of each particle are computed according to (2). Then the fitness of each particle is evaluated by the root mean square error between the desired output and the estimated output. To avoid the problem of overfitting, the fitness evaluation is performed on the validation dataset instead of the whole training dataset:

f(p_i) = sqrt( Σ_{j=1}^{N_v} ‖Σ_{i=1}^{K} β_i g(w_i · x_j + b_i) − t_j‖² / N_v ), (7)

where N_v is the number of samples in the validation dataset.

Step 3 (updating pBest_i and gBest). With the fitness values of all particles in the population, the best previous position pBest_i of the ith particle and the global best position gBest of the swarm are updated. As suggested in [15], a neural network tends to have better generalization performance with weights of smaller norm. Therefore, in this paper the fitness value and the norm of the output weights are considered together for updating pBest_i and gBest. The updating strategy is as follows:

pBest_i = p_i, if (f(pBest_i) − f(p_i) > η·f(pBest_i)) or (|f(pBest_i) − f(p_i)| < η·f(pBest_i) and ‖wo_pi‖ < ‖wo_pBest_i‖);
pBest_i = pBest_i, else;

gBest = p_i, if (f(gBest) − f(p_i) > η·f(gBest)) or (|f(gBest) − f(p_i)| < η·f(gBest) and ‖wo_pi‖ < ‖wo_gBest‖);
gBest = gBest, else,
(8)

where f(p_i), f(pBest_i), and f(gBest) are the fitness values of the ith particle's position, the best previous position of the ith particle, and the global best position of the swarm; wo_pi, wo_pBest_i, and wo_gBest are the corresponding output weights of the position of the ith particle, the best previous position of the ith particle, and the global best position, obtained by the MP inverse. By this updating criterion, particles with smaller fitness values or smaller norms are more likely to be selected as pBest_i or gBest.

Step 4. Calculate each particle's local attractor P_i and mean best position mBest according to (4) and (5).

Step 5. Update each particle's position according to (3).

Finally, we repeat Step 2 to Step 5 until the maximum number of iterations is reached. Thus the network trained by ELM with the optimized input weights and hidden biases is obtained, and the optimized network is then applied to the benchmark problems.


Table 1: Specification of 3 classification problems.

Names               Attributes  Classes  Training set  Validation set  Testing set
Satellite image         36         7         2661           1774          2000
Image segmentation      19         7         1000            524           786
Diabetes problem         8         2          346            230           192

Table 2: Mean training and testing accuracy of six methods on different classification problems.

             Satellite image      Image segmentation    Diabetes problem
Algorithms  Training  Testing    Training  Testing     Training  Testing
Q-ELM        0.890     0.876      0.965     0.960       0.854     0.842
P-ELM        0.884     0.872      0.938     0.947       0.836     0.813
G-ELM        0.877     0.869      0.938     0.934       0.812     0.804
ELM          0.873     0.852      0.930     0.921       0.783     0.773
SVM          0.879     0.870      0.931     0.916       0.792     0.781
BP           0.856     0.849      0.919     0.892       0.785     0.776

Figure 1: Single hidden layer feedforward network with s-variables (inputs x_1, ..., x_n; structure variables s_1, ..., s_N gating the hidden neurons h_1, ..., h_N; outputs o_1, ..., o_m).

In the proposed algorithm, each particle represents one possible solution to the optimization problem and is a combination of components with different meanings and different ranges.

All components of a particle are firstly initialized as continuous values between 0 and 1. Therefore, before the corresponding output weights are calculated and the fitness is evaluated in Step 2, they need to be converted to their real values.

For the input weights and biases, they are given by

z_ij = (z_l^max − z_l^min) · p_ij + z_l^min, (9)

where z_l^max = 1 and z_l^min = −1 are the upper and lower bounds for the input weights and hidden biases.

For the s-parameters, they are given by

z_ij = round(p_ij), (10)

where round(·) is a function that rounds p_ij to the nearest integer (0 or 1 in this case). After the conversion of all components of a particle, the fitness of each particle can then be evaluated.
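Decoding a particle per eqs. (9) and (10) can be sketched as follows. This is illustrative: the particle layout follows eq. (6), with n inputs and K hidden neurons as parameters.

```python
# Sketch of decoding a particle: components in [0, 1] map to input weights and
# biases in [-1, 1] via eq. (9), and to binary structure variables via eq. (10).
import numpy as np

def decode_particle(p, n, K, z_min=-1.0, z_max=1.0):
    """p: flat particle vector [w_11..w_Kn, b_1..b_K, s_1..s_K]."""
    W = (z_max - z_min) * p[:n * K] + z_min            # eq. (9), input weights
    b = (z_max - z_min) * p[n * K:n * K + K] + z_min   # eq. (9), hidden biases
    s = np.round(p[n * K + K:]).astype(int)            # eq. (10), 0/1 gates
    return W.reshape(K, n), b, s
```

Hidden neurons whose s-gate decodes to 0 are then simply dropped before building the hidden layer output matrix H.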

Table 3: Mean training time of six methods on different classification problems (in seconds).

Algorithms  Satellite image  Image segmentation  Diabetes problem
Q-ELM           178.56             45.36              19.23
P-ELM           198.33             39.08              20.69
G-ELM           274.07             54.26              22.12
ELM               0.034             0.027              0.011
SVM               9.54              3.61               1.43
BP               34.75             16.92               3.06

5. Evaluation on Some Classification Applications

In essence, engine diagnosis is a pattern classification problem. Therefore, in this section we firstly apply the developed Q-ELM to some real-world classification applications and compare it with five existing algorithms: PSO-optimized ELM (P-ELM) [18], genetic-algorithm-optimized ELM (G-ELM) [18], standard ELM, BP, and SVM.

The performances of all algorithms are tested on three benchmark classification datasets, which are listed in Table 1. The training dataset, validation dataset, and testing dataset are randomly generated at each trial of the simulations according to the corresponding numbers in Table 1. The performances of these algorithms are listed in Tables 2 and 3.

For the three optimized ELMs, the population size is 100 and the maximum number of iterations is 50. The selection criteria for P-ELM and Q-ELM include the norm of the output weights, as in (8), while the selection criterion for G-ELM considers only the accuracy on the validation dataset and does not include the norm of the output weights, as suggested in [19]. Instead, G-ELM incorporates Tikhonov's regularization in the least squares algorithm to improve the network's generalization capability.


Figure 2: Schematic of the studied turbine fan engine. A: inlet; B: fan; C: high pressure compressor; D: main combustor; E: high pressure turbine; F: low pressure turbine; G: external duct; H: mixer; I: afterburner; J: nozzle.

In G-ELM, the probability of crossover is 0.5 and the mutation probability is 1.0. In Q-ELM and P-ELM, the inertial weight is set to decrease from 1.2 to 0.4 linearly with the iterations. In Q-ELM, the contraction-expansion coefficient β is set to decrease from 1.0 to 0.5 linearly with the iterations. The ELM methods are set with different numbers of initial hidden neurons according to the different applications.

There are many variants of the BP algorithm; a faster BP algorithm called the Levenberg-Marquardt algorithm is used in our simulations, via the very efficient implementation provided by the MATLAB package. As SVM is a binary classifier, the SVM algorithm has been expanded to "one versus one" multiclass SVM to classify the multiple fault classes. The parameters for the SVM are C = 10 and γ = 2 [20]. The imposed noise level is N_l = 0.1.

In order to account for the stochastic nature of these algorithms, all six methods are run 10 times separately for each classification problem, and the results shown in Tables 2 and 3 are the mean performance values over the 10 trials. All simulations have been made in the MATLAB R2008a environment running on a PC with a 2.5 GHz CPU with 2 cores and 2 GB RAM.

It can be concluded from Table 2 that, in general, the optimized ELM methods obtained better classification results than ELM, SVM, and BP. Q-ELM outperforms all the other methods: it obtains the best mean training and testing accuracy on all three classification problems. This suggests that Q-ELM is a good choice for the engine fault diagnosis application.

Also, it can be observed clearly that the training times of the three optimized ELM methods are much longer than those of the others. This is mainly because the optimized ELM methods need to repeatedly execute the parameter optimization steps. ELM costs the least training time among these methods.

6. Engine Diagnosis Applications

6.1. Engine Selection and Modeling. The developed Q-ELM was also applied to fault diagnostics of a gas turbine engine

Table 4: Description of two engine operating points.

Operating point  Fuel flow (kg/s)  Velocity (Mach number)  Altitude (km)
A                     1.142                 0.8                 5.8
B                     1.155                 1.2                 6.9

and was compared with the other methods. In this study we focus on a two-shaft turbine fan engine with a mixer and an afterburner (for confidentiality reasons the engine type is omitted). This engine is composed of several components, such as the low pressure compressor (LPC) or fan, the high pressure compressor (HPC), the low pressure turbine (LPT), and the high pressure turbine (HPT), and can be illustrated as shown in Figure 2.

The gas turbine engine is susceptible to a lot of physical problems, and these problems may result in component faults and reduce the component flow capacity and isentropic efficiency. These component faults can result in deviations of some engine performance parameters, such as the pressures and temperatures across different engine components. It is therefore a practical way to detect and isolate the faulty component using engine performance data.

But the performance data of a real engine with component faults is very difficult to collect; thus the component fault is usually simulated by an engine mathematical model, as suggested in [10, 11].

We have already developed a performance model for this two-shaft turbine fan engine in the MATLAB environment. In this study, we use the performance model to simulate the behavior of the engine with or without component faults.

The engine component faults can be simulated by isentropic efficiency deterioration of the different engine components. By implanting corresponding component defects with a certain magnitude of isentropic efficiency deterioration into the engine performance model, we can obtain simulated engine performance parameter data with component faults.


Table 5: Single and multiple fault cases.

      Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
LPC      F        —        —        —        F        F        F        F
HPC      —        F        —        —        F        —        F        —
LPT      —        —        F        —        —        F        F        F
HPT      —        —        —        F        —        —        —        F

The engine operating point, which is primarily defined by the fuel flow rate, has a significant effect on the engine performance. Therefore, engine fault diagnostics should be conducted at a specified operating point. In this study we consider two different engine operating points; the fuel flow and environment setting parameters are listed in Table 4. The engine defect diagnostics was conducted on these operating points separately.

6.2. Generating Component Fault Dataset. In this study, different engine component fault cases were considered as the eight classes shown in Table 5. The first four classes represent four single fault cases: the low pressure compressor (LPC) fault case, the high pressure compressor (HPC) fault case, the low pressure turbine (LPT) fault case, and the high pressure turbine (HPT) fault case. Each of these classes has only one component fault, which is represented with an "F". Class 5 and class 6 are dual fault cases: the LPC + HPC fault case and the LPC + LPT fault case. And the last two classes are triple fault cases: the LPC + HPC + LPT fault case and the LPC + LPT + HPT fault case.

For each single fault case listed in Table 5, 50 instances were generated by randomly selecting the corresponding component isentropic efficiency deterioration magnitude within the range 1%-5%. For the dual fault cases, 100 instances were generated for each class by randomly setting the isentropic efficiency deterioration of the two faulty components within the range 1%-5% simultaneously. For the triple fault cases, 300 instances were generated for each case using the same method.

Thus we have 200 single fault instances, 800 multiple fault instances, and one healthy state instance for each operating point condition. These instances are then divided into training, validation, and testing datasets.

In this study, for each operating point condition, 100 single fault instances (25 randomly selected for each single fault class), 400 multiple fault instances (50 randomly selected for each dual fault case and 150 for each triple fault case), and the healthy state instance were used as the training dataset; 60 single fault instances (15 randomly selected for each single fault class) and 240 multiple fault instances (30 randomly selected for each dual fault case and 90 for each triple fault case) were used as the testing dataset; and the remaining 200 instances were used as the validation dataset.
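The generation scheme above can be sketched as follows. This is a sketch only: the class-to-component map follows Table 5, while the engine performance model that would turn a deterioration vector into sensor deviations is the authors' MATLAB model and is not reproduced here.

```python
# Sketch of fault-instance generation: each instance draws a random isentropic
# efficiency deterioration in [1%, 5%] for every faulty component of its class.
import numpy as np

FAULT_CLASSES = {  # Table 5: faulty components per class
    1: ["LPC"], 2: ["HPC"], 3: ["LPT"], 4: ["HPT"],
    5: ["LPC", "HPC"], 6: ["LPC", "LPT"],
    7: ["LPC", "HPC", "LPT"], 8: ["LPC", "LPT", "HPT"],
}

def generate_instances(cls, n_instances, rng):
    """Return a list of {component: efficiency deterioration} dicts."""
    return [
        {comp: rng.uniform(0.01, 0.05) for comp in FAULT_CLASSES[cls]}
        for _ in range(n_instances)
    ]
```

Each dict would then be fed to the engine performance model to produce one labeled training instance.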

The input parameters of the training, validation, and testing datasets are the relative deviations of the simulated engine performance parameters with component faults from the "healthy" engine parameters. These parameters include the low pressure rotor rotational speed n_1, the high pressure rotor rotational speed n_2, the total pressure and total temperature after the LPC (p*_22, T*_22), the total pressure and total temperature after the HPC (p*_3, T*_3), the total pressure and total temperature after the HPT (p*_44, T*_44), and the total pressure and total temperature after the LPT (p*_5, T*_5). In this study, all the input parameters have been normalized into the range [0, 1].

In real engine applications there inevitably exist sensor noises. Therefore, all input data are contaminated with measurement noise to simulate real engine sensory signals, according to the following equation:

X_n = X + N_l · σ · rand(1, n), (11)

where X is the clean input parameter, N_l denotes the imposed noise level, and σ is the standard deviation of the dataset.
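Eq. (11) can be sketched as follows. This is illustrative: rand(1, n) is taken as a uniform [0, 1) draw per sample, with the noise level N_l = 0.1 as stated in Section 5 and σ computed per parameter over the dataset.

```python
# Sketch of eq. (11): contaminate each normalized input parameter with uniform
# measurement noise scaled by the noise level and the parameter's std dev.
import numpy as np

def add_measurement_noise(X, noise_level=0.1, rng=np.random.default_rng(0)):
    """X: (n_samples, n_features) clean data; returns a noisy copy."""
    sigma = X.std(axis=0)              # per-parameter standard deviation
    noise = rng.uniform(size=X.shape)  # rand(1, n) drawn per sample
    return X + noise_level * sigma * noise
```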

6.3. Engine Component Fault Diagnostics by the 6 Methods. The proposed engine diagnostic method using Q-ELM is demonstrated on single and multiple fault cases and compared with P-ELM, G-ELM, ELM, BP, and SVM.

6.3.1. Parameter Settings. The parameter settings are the same as in Section 5, except that all ELM methods are set with 100 initial hidden neurons. All six methods are run 10 times separately for each condition, and the results shown in Tables 6, 7, and 8 are the mean performance values.

6.3.2. Comparisons of the Six Methods. The performances of the 6 methods were compared under both condition A and condition B. Tables 6 and 7 list the mean classification accuracies of the 6 methods on each component fault class. Table 8 lists the mean training time of each method.

It can be seen from Table 6 (condition A) that Q-ELM obtained the best results on C1, C4, C6, C7, and C8, while P-ELM performed the best on C2, C3, and C5. From Table 7, we can see that P-ELM performs the best on one single fault case (C4) and Q-ELM obtains the highest classification accuracy on all the remaining test cases.

In general, the optimized ELMs obtained better classification results than ELM, SVM, and BP on both single fault and multiple fault cases. This conclusion is also demonstrated in Figures 3(a) and 3(b), where the mean classification accuracies obtained by any of the optimized ELM methods are higher than those obtained by the other three methods. It can also be observed that the classification performance of ELM is on par with that of SVM and is better than that of BP. Due to the nonlinear nature of the gas turbine engine, the multiple fault cases are more difficult to diagnose than the single fault cases: it can be seen from Tables 6 and 7 that the mean classification accuracy of the multiple fault diagnostics cases is lower than that of the single fault diagnostics cases.

International Journal of Aerospace Engineering 7

Table 6: Mean classification accuracy (%) under the operating point A condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.33   93.55   91.75   90.45   90.05   88.56   85.26   84.95
P-ELM    90.48   93.61   92.32   89.23   90.44   87.73   83.05   82.60
G-ELM    92.35   93.04   90.17   89.36   88.24   88.03   84.69   83.50
ELM      89.09   87.56   87.52   86.90   85.27   84.42   81.16   78.33
SVM      89.03   88.67   84.96   85.15   85.68   85.02   80.54   79.63
BP       87.29   86.45   82.37   83.63   80.57   78.69   72.20   73.24

Table 7: Mean classification accuracy (%) under the operating point B condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.45   92.07   91.53   91.07   91.45   90.19   86.26   85.26
P-ELM    92.57   91.58   91.51   91.48   89.63   89.70   86.18   84.44
G-ELM    93.31   91.74   90.53   91.06   91.23   88.65   85.75   82.83
ELM      89.21   88.33   86.97   87.09   85.95   83.55   80.24   77.06
SVM      90.66   89.14   87.14   86.52   85.68   82.12   81.42   76.54
BP       86.25   84.51   82.55   80.69   76.05   74.63   73.04   70.95

[Figure 3: Mean classification accuracies of the 6 methods under different conditions: (a) mean classification accuracies on single fault cases of the 6 methods; (b) mean classification accuracies on multiple fault cases of the 6 methods. Each panel plots mean accuracy (%) against the methods Q-ELM, P-ELM, G-ELM, ELM, SVM, and BP, with separate curves for Condition A and Condition B.]

Table 8: Mean training time (s) on training data.

Condition     Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   19.54   23.08   20.77   0.0113   0.147   2.96
Condition B   20.23   23.54   21.83   0.0108   0.179   3.13

Noticing that the two mean accuracy curves in both Figures 3(a) and 3(b) are very close to each other, we can conclude that the engine operating point has no obvious effect on the classification accuracies of all methods.

The training times of the three optimized ELM methods are much longer than those of the others. Much of the training time of the optimized ELMs is spent on evaluating all the individuals iteratively.

6.3.3. Comparisons with Fewer Input Parameters. In Section 6.3.2 we trained ELM and the other methods using a dataset with 10 input parameters. But in real applications the gas turbine engine may be equipped with only a few sensors; thus we have fewer input parameters. In this section we reduce the input parameters from 10 to 6; they are $n_1$, $n_2$,


Table 9: Mean classification accuracy (%) under the operating point A condition.

Method   C1       C2       C3       C4       C5       C6       C7       C8
Q-ELM    92.0857  91.8108  91.0178  90.0239  88.8211  86.3334  83.0490  81.7370
P-ELM    90.0024  90.8502  91.1142  88.1223  86.7068  85.2295  80.5144  80.1667
G-ELM    89.0763  91.0038  87.7888  88.2232  85.4190  83.9324  79.6553  78.3395
ELM      85.8684  84.8761  83.4979  82.3035  81.6548  78.2046  76.1726  74.7309
SVM      83.9047  85.8948  84.4922  83.2766  81.1377  74.6795  74.5938  74.1796
BP       83.2354  82.2824  80.9588  79.8117  72.3411  71.9088  70.9969  68.4934

Table 10: Mean classification accuracy (%) under the operating point B condition.

Method   C1       C2       C3       C4       C5       C6       C7       C8
Q-ELM    91.6970  90.3428  89.8130  89.3616  87.7345  86.4981  84.6418  83.6606
P-ELM    91.5233  90.1717  89.6428  89.1923  86.5645  86.3305  82.4815  83.5021
G-ELM    88.9153  87.6171  87.1090  86.6763  85.0338  84.8484  81.1512  80.2104
ELM      84.2722  83.9392  84.0175  82.9732  80.9403  80.5231  77.0267  75.9607
SVM      84.1035  82.8468  83.3550  82.9361  80.2821  81.1347  76.5557  74.6450
BP       80.6848  80.4343  79.9449  79.0281  74.8724  71.0306  69.1692  68.1630

[Figure 4: Mean classification accuracies of the 6 methods under different conditions: (a) mean classification accuracies on single fault cases of the 6 methods; (b) mean classification accuracies on multiple fault cases of the 6 methods. Each panel plots mean accuracy (%) against the methods Q-ELM, P-ELM, G-ELM, ELM, SVM, and BP, with separate curves for Condition A and Condition B.]

Table 11: Mean training time (s) on training data.

Condition     Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   17.40   21.26   18.75   0.0094   0.126   2.35
Condition B   18.79   20.55   19.06   0.0077   0.160   2.47

$p^*_{22}$, $p^*_3$, $T^*_{44}$, and $p^*_5$. We trained all the methods with the same training dataset with only 6 input parameters, and the results are listed in Tables 9–11.

Compared with Tables 6 and 7, we can see that the number of input parameters has a great impact on diagnostic accuracy: the results in Tables 9 and 10 are generally lower than the results in Tables 6 and 7.

The optimized ELM methods still show better performance than the others; this conclusion is demonstrated in Figures 4(a) and 4(b). Q-ELM obtained the highest accuracies in all cases except C3 in Table 9, and it attained the best results in all cases in Table 10.

The good results obtained by our method indicate that the selection criteria, which include both the fitness value on the


[Figure 5: Mean evolution of the RMSE of the three methods: (a) fault class C5 and (b) fault class C7. Each panel plots mean accuracy against iterations (0–50) for Q-ELM, P-ELM, and G-ELM.]

validation dataset and the norm of output weights, help the algorithms to obtain better generalization performance.

In order to evaluate the proposed method in depth, the mean evolution of the accuracy on the validation dataset over 10 trials by the three optimized ELM methods on the fault class C5 and C7 cases in condition B is plotted in Figure 5.

It can be observed from Figure 5 that Q-ELM has much better convergence performance than the other two methods and obtains the best mean accuracy after 50 iterations; P-ELM is better than G-ELM. In fact, Q-ELM can achieve the same accuracy level as G-ELM within only half of the total iterations for these two cases.

The main reason for the high classification rate of our method is that quantum mechanics helps QPSO to search more effectively in the search space, thus outperforming P-ELM and G-ELM in converging to a better result.

7. Conclusions

In this paper a new hybrid learning approach for SLFN, named Q-ELM, was proposed. The proposed algorithm optimizes both the neural network parameters (input weights and hidden biases) and the hidden layer structure using QPSO, and the output weights are calculated by the Moore-Penrose generalized inverse, as in the original ELM. In the optimization of the network parameters, not only the RMSE on the validation dataset but also the norm of the output weights is included in the selection criteria.

To validate the performance of the proposed Q-ELM, we applied it to some real-world classification applications and a gas turbine fan engine fault diagnostics problem and compared it with some state-of-the-art methods. Results show that our method obtains the highest classification accuracy in most test cases and shows a great advantage over the other optimized ELM methods, SVM, and BP. This advantage becomes more prominent when the number of input parameters in the training dataset is reduced, which suggests that our method is a more suitable tool for real engine fault diagnostics applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] X. Yang, W. Shen, S. Pang, B. Li, K. Jiang, and Y. Wang, "A novel gas turbine engine health status estimation method using quantum-behaved particle swarm optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 302514, 11 pages, 2014.

[2] Z. N. S. Vanini, K. Khorasani, and N. Meskin, "Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach," Information Sciences, vol. 259, pp. 234–251, 2014.

[3] T. Brotherton and T. Johnson, "Anomaly detection for advanced military aircraft using neural networks," in Proceedings of the IEEE Aerospace Conference, vol. 6, pp. 3113–3123, IEEE, Big Sky, Mont, USA, March 2001.

[4] S. Ogaji, Y. G. Li, S. Sampath, and R. Singh, "Gas path fault diagnosis of a turbofan engine from transient data using artificial neural networks," in Proceedings of the 2003 ASME Turbine and Aeroengine Congress, ASME Paper No. GT2003-38423, Atlanta, Ga, USA, June 2003.

[5] L. C. Jaw, "Recent advancements in aircraft engine health management (EHM) technologies and recommendations for the next step," in Proceedings of the 50th ASME International Gas Turbine & Aeroengine Technical Congress, Reno, Nev, USA, June 2005.

[6] S. Osowski, K. Siwek, and T. Markiewicz, "MLP and SVM networks: a comparative study," in Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 37–40, Espoo, Finland, June 2004.

[7] M. Zedda and R. Singh, "Fault diagnosis of a turbofan engine using neural networks: a quantitative approach," in Proceedings of the 34th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA 98-3602, Cleveland, Ohio, USA, July 1998.

[8] C. Romessis, A. Stamatis, and K. Mathioudakis, "A parametric investigation of the diagnostic ability of probabilistic neural networks on turbofan engines," in Proceedings of the ASME Turbo Expo 2001: Power for Land, Sea and Air, Paper no. 2001-GT-0011, New Orleans, La, USA, June 2001.

[9] A. J. Volponi, H. DePold, R. Ganguli, and C. Daguang, "The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study," Journal of Engineering for Gas Turbines and Power, vol. 125, no. 4, pp. 917–924, 2003.

[10] S.-M. Lee, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of SUAV gas turbine engine using hybrid SVM-artificial neural network method," Journal of Mechanical Science and Technology, vol. 23, no. 1, pp. 559–568, 2009.

[11] S.-M. Lee, W.-J. Choi, T.-S. Roh, and D.-W. Choi, "A study on separate learning algorithm using support vector machine for defect diagnostics of gas turbine engine," Journal of Mechanical Science and Technology, vol. 22, no. 12, pp. 2489–2497, 2008.

[12] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[13] G.-B. Huang, Z. Bai, L. L. C. Kasun, and C. M. Vong, "Local receptive fields based extreme learning machine," IEEE Computational Intelligence Magazine, vol. 10, no. 2, pp. 18–29, 2015.

[14] G.-B. Huang, "What are extreme learning machines? Filling the gap between Frank Rosenblatt's dream and John von Neumann's puzzle," Cognitive Computation, vol. 7, no. 3, pp. 263–278, 2015.

[15] Q.-Y. Zhu, A. K. Qin, P. N. Suganthan, and G.-B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759–1763, 2005.

[16] J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, "A modified quantum-behaved particle swarm optimization," in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294–301, Beijing, China, May 2007.

[17] M. Xi, J. Sun, and W. Xu, "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position," Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.

[18] T. Matias, F. Souza, R. Araujo, and C. H. Antunes, "Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine," Neurocomputing, vol. 129, pp. 428–436, 2014.

[19] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87–93, 2013.

[20] D.-H. Seo, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of gas turbine engine using hybrid SVM-ANN with module system in off-design condition," Journal of Mechanical Science and Technology, vol. 23, no. 3, pp. 677–685, 2009.


case the classification performance may decrease because of the increasing nonlinearity of engine behavior in a wide operating range [11].

In recent years a novel learning algorithm for single hidden layer neural networks, called extreme learning machine (ELM), has been proposed and shows better performance on classification problems than many conventional ANN learning algorithms and SVM [12–14]. In ELM the input weights and hidden biases are randomly generated, and the output weights are calculated by the Moore-Penrose (MP) generalized inverse. ELM learns much faster, with higher generalization performance, than traditional gradient-based learning algorithms such as back-propagation and the Levenberg-Marquardt method. Also, ELM avoids many problems faced by traditional gradient-based learning algorithms, such as stopping criteria, learning rate, and the local minima problem.

Therefore ELM should be a promising method for gas turbine engine diagnostics. However, ELM may require more hidden neurons than traditional gradient-based learning algorithms and lead to an ill-conditioned problem because of the random selection of the input weights and hidden biases [15]. To address these problems, in this paper we propose an optimized ELM using quantum-behaved particle swarm optimization (Q-ELM) and apply it to the fault diagnostics of a gas turbine fan engine.

The rest of the paper is organized as follows. Section 2 gives a brief review of ELM. The QPSO algorithm is overviewed in Section 3. Section 4 presents the proposed Q-ELM. Section 5 compares Q-ELM with other methods on three real-world classification applications. In Section 6, Q-ELM is applied to gas turbine fan engine component fault diagnostics applications, followed by the conclusions in Section 7.

2. Brief of Extreme Learning Machine

Extreme learning machine was proposed by Huang et al. [12]. For $N$ arbitrary distinct samples $(\mathbf{x}_i, \mathbf{t}_i)$, where $\mathbf{x}_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T \in \mathbf{R}^n$ and $\mathbf{t}_i = [t_{i1}, t_{i2}, \ldots, t_{im}]^T \in \mathbf{R}^m$, a standard SLFN with $K$ hidden neurons and activation function $g(x)$ can approximate these $N$ samples with zero error, which means that

$$\mathbf{H}\boldsymbol{\beta} = \mathbf{T}, \tag{1}$$

where $\mathbf{H} = \{h_{ij}\}$ ($i = 1, \ldots, N$ and $j = 1, \ldots, K$) is the hidden layer output matrix; $h_{ij} = g(\mathbf{w}_j \cdot \mathbf{x}_i + b_j)$ denotes the output of the $j$th hidden neuron with respect to $\mathbf{x}_i$; $\mathbf{w}_j = [w_{j1}, w_{j2}, \ldots, w_{jn}]^T$ is the weight vector connecting the $j$th hidden neuron and the input neurons; $b_j$ denotes the bias of the $j$th hidden neuron; and $\mathbf{w}_j \cdot \mathbf{x}_i$ is the inner product of $\mathbf{w}_j$ and $\mathbf{x}_i$. $\boldsymbol{\beta} = [\boldsymbol{\beta}_1, \boldsymbol{\beta}_2, \ldots, \boldsymbol{\beta}_K]^T$ is the matrix of output weights, where $\boldsymbol{\beta}_j = [\beta_{j1}, \beta_{j2}, \ldots, \beta_{jm}]^T$ ($j = 1, \ldots, K$) is the weight vector connecting the $j$th hidden neuron and the output neurons. $\mathbf{T} = [\mathbf{t}_1, \mathbf{t}_2, \ldots, \mathbf{t}_N]^T$ is the matrix of desired outputs.

Therefore, the determination of the output weights is to find the least squares (LS) solution to the given linear system. The minimum norm LS solution to linear system (1) is

$$\hat{\boldsymbol{\beta}} = \mathbf{H}^{+}\mathbf{T}, \tag{2}$$

where $\mathbf{H}^{+}$ is the MP generalized inverse of matrix $\mathbf{H}$. The minimum norm LS solution is unique and has the smallest norm among all the LS solutions. ELM uses the MP inverse method to obtain good generalization performance with dramatically increased learning speed.
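The ELM training procedure of (1)-(2) can be sketched in a few lines (an illustrative Python/NumPy sketch, not the authors' code; np.linalg.pinv computes the Moore-Penrose generalized inverse):

```python
import numpy as np

def elm_train(X, T, K, rng):
    """Train a basic ELM: random input weights and biases, output
    weights by the MP generalized inverse (Eq. (2))."""
    n = X.shape[1]
    W = rng.uniform(-1.0, 1.0, (K, n))        # input weights w_j
    b = rng.uniform(-1.0, 1.0, K)             # hidden biases b_j
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T              # beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

# usage: fit a tiny XOR-style dataset with K = 20 hidden neurons
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
W, b, beta = elm_train(X, T, K=20, rng=np.random.default_rng(0))
Y = elm_predict(X, W, b, beta)
```

With more hidden neurons than samples, the minimum norm LS solution interpolates the training targets, which illustrates why the paper later prunes neurons and penalizes large output-weight norms.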

3. Brief of Quantum-Behaved Particle Swarm Optimization

Recently, some population-based optimization algorithms have been applied to real-world optimization applications and show better performance than traditional optimization methods. Among them, genetic algorithm (GA) and particle swarm optimization (PSO) are the two most widely used algorithms. GA was originally motivated by Darwin's natural evolution theory. It repeatedly modifies a population of individual solutions by three genetic operators: selection, crossover, and mutation. On the other hand, PSO was inspired by the social behavior of bird flocking. However, unlike GA, PSO does not need any genetic operators and is simple to use compared with GA. The dynamics of the population in PSO resembles the collective behavior of socially intelligent organisms. However, PSO has some problems, such as premature or local convergence, and is not a global optimization algorithm.

QPSO is a novel optimization algorithm inspired by the fundamental theory of particle swarm optimization and features of quantum mechanics [16]. The introduction of quantum mechanics helps to diversify the population and ameliorate convergence by maintaining more attractors. Thus it improves QPSO's performance, solves the premature or local convergence problem of PSO, and shows better performance than PSO in many applications [17]. Therefore it is more suitable for ELM parameter optimization than GA and PSO.

In QPSO the state of a particle $\mathbf{y}$ is depicted by the Schrodinger wave function $\psi(\mathbf{y}, t)$ instead of position and velocity. The dynamic behavior of the particle is widely divergent from classical PSO in that the exact values of position and velocity cannot be determined simultaneously. The probability of the particle's appearing in a position can be calculated from the probability density function $|\psi(\mathbf{y}, t)|^2$, the form of which depends on the potential field the particle lies in. Employing the Monte Carlo method, for the $i$th particle $\mathbf{y}_i$ from the population, the particle moves according to the following iterative equation:

$$y_{ij}(t+1) = P_{ij}(t) - \beta \cdot (\mathrm{mBest}_j(t) - y_{ij}(t)) \cdot \ln\!\left(\frac{1}{u_{ij}(t)}\right), \quad \text{if } k \ge 0.5,$$
$$y_{ij}(t+1) = P_{ij}(t) + \beta \cdot (\mathrm{mBest}_j(t) - y_{ij}(t)) \cdot \ln\!\left(\frac{1}{u_{ij}(t)}\right), \quad \text{if } k < 0.5, \tag{3}$$

International Journal of Aerospace Engineering 3

where $y_{ij}(t+1)$ is the position of the $i$th particle with respect to the $j$th dimension in iteration $t$, and $P_{ij}$ is the local attractor of the $i$th particle in the $j$th dimension, defined as

$$P_{ij}(t) = \varphi_j(t) \cdot \mathrm{pBest}_{ij}(t) + (1 - \varphi_j(t)) \cdot \mathrm{gBest}_j(t), \tag{4}$$

$$\mathrm{mBest}_j(t) = \frac{1}{N_p} \sum_{i=1}^{N_p} \mathrm{pBest}_{ij}(t), \tag{5}$$

where $N_p$ is the number of particles and $\mathrm{pBest}_i$ represents the best previous position of the $i$th particle. $\mathrm{gBest}$ is the global best position of the particle swarm, and $\mathrm{mBest}$ is the mean best position, defined as the mean of all the best positions of the population. $k$, $u$, and $\varphi$ are random numbers distributed uniformly in [0, 1]. $\beta$ is called the contraction-expansion coefficient and is used to control the convergence speed of the algorithm.
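One iteration of this update rule can be sketched as follows (an illustrative Python sketch with our own names; like most QPSO implementations, it uses the distance $|\mathrm{mBest}_j - y_{ij}|$ in the contraction term, which is an assumption on our part):

```python
import numpy as np

def qpso_step(Y, pbest, gbest, beta, rng):
    """One QPSO position update following Eqs. (3)-(5).
    Y and pbest have shape (N_p, D); gbest has shape (D,)."""
    mbest = pbest.mean(axis=0)                     # Eq. (5): mean best position
    phi = rng.random(Y.shape)
    P = phi * pbest + (1.0 - phi) * gbest          # Eq. (4): local attractors
    u = rng.random(Y.shape)
    k = rng.random(Y.shape)
    step = beta * np.abs(mbest - Y) * np.log(1.0 / u)
    return np.where(k >= 0.5, P - step, P + step)  # Eq. (3)

rng = np.random.default_rng(1)
Y = rng.random((5, 3))        # 5 particles, 3 dimensions
pbest = rng.random((5, 3))
gbest = pbest[0]
Y_next = qpso_step(Y, pbest, gbest, beta=0.75, rng=rng)
```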

4. Extreme Learning Machine Optimized by QPSO

Because the output weights in ELM are calculated using random input weights and hidden biases, there may exist a set of nonoptimal or even unnecessary input weights and hidden neurons. As a result, ELM may need more hidden neurons than conventional gradient-based learning algorithms and lead to an ill-conditioned hidden output matrix, which would cause worse generalization performance.

In this section we propose a new algorithm named Q-ELM to solve these problems. Unlike some other optimized ELM algorithms, our proposed algorithm optimizes not only the input weights and hidden biases using QPSO but also the structure of the neural network (hidden layer neurons). The detailed steps of the proposed algorithm are as follows.

Step 1 (initializing). Firstly we generate the population randomly. Each particle in the population is constituted by a set of input weights, hidden biases, and $s$-variables:

$$\mathbf{p}_i = [w_{11}, \ldots, w_{NK}, b_1, \ldots, b_K, s_1, \ldots, s_K], \tag{6}$$

where $s_i$, $i = 1, \ldots, K$, is a variable which defines the structure of the network. As illustrated in Figure 1, if $s_i = 0$, then the $i$th hidden neuron is not considered. Otherwise, if $s_i = 1$, the $i$th hidden neuron is retained and the sigmoid function is used as its activation function.

All components constituting a particle are randomly initialized within the range [0, 1].

Step 2 (fitness evaluation). The corresponding output weights of each particle are computed according to (2). Then the fitness of each particle is evaluated by the root mean square error between the desired output and the estimated output. To avoid the problem of overfitting, the fitness evaluation is performed on the validation dataset instead of the whole training dataset:

$$f(\mathbf{P}_i) = \sqrt{\frac{\sum_{j=1}^{N_v} \left\| \sum_{i=1}^{K} \boldsymbol{\beta}_i\, g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i) - \mathbf{t}_j \right\|^2}{N_v}}, \tag{7}$$

where $N_v$ is the number of samples in the validation dataset.
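A sketch of this fitness evaluation (illustrative Python; here W, b are a candidate particle's decoded input weights and biases, and beta the corresponding output weights, all hypothetical names):

```python
import numpy as np

def fitness(X_val, T_val, W, b, beta):
    """RMSE of the candidate network on the validation set, Eq. (7)."""
    H = 1.0 / (1.0 + np.exp(-(X_val @ W.T + b)))  # hidden outputs g(w_i . x_j + b_i)
    E = H @ beta - T_val                          # output errors per sample
    return np.sqrt(np.mean(np.sum(E**2, axis=1)))

# sanity check: a network that reproduces its targets exactly has fitness 0
rng = np.random.default_rng(2)
X_val = rng.random((6, 4))
W = rng.uniform(-1, 1, (3, 4))
b = rng.uniform(-1, 1, 3)
beta = rng.uniform(-1, 1, (3, 2))
H = 1.0 / (1.0 + np.exp(-(X_val @ W.T + b)))
f0 = fitness(X_val, H @ beta, W, b, beta)
```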

Step 3 (updating $\mathrm{pBest}_i$ and $\mathrm{gBest}$). With the fitness values of all particles in the population, the best previous position of the $i$th particle, $\mathrm{pBest}_i$, and the global best position of the swarm, $\mathrm{gBest}$, are updated. As suggested in [15], a neural network tends to have better generalization performance with weights of smaller norm. Therefore, in this paper the fitness value and the norm of the output weights are considered together for updating $\mathrm{pBest}_i$ and $\mathrm{gBest}$. The updating strategy is as follows:

$$\mathrm{pBest}_i = \begin{cases} \mathbf{p}_i, & \left(f(\mathrm{pBest}_i) - f(\mathbf{p}_i) > \eta f(\mathrm{pBest}_i)\right) \text{ or } \left(\left|f(\mathrm{pBest}_i) - f(\mathbf{p}_i)\right| < \eta f(\mathrm{pBest}_i),\ \|\mathbf{wo}_{p_i}\| < \|\mathbf{wo}_{\mathrm{pBest}_i}\|\right), \\ \mathrm{pBest}_i, & \text{else}, \end{cases}$$

$$\mathrm{gBest} = \begin{cases} \mathbf{p}_i, & \left(f(\mathrm{gBest}) - f(\mathbf{p}_i) > \eta f(\mathrm{gBest})\right) \text{ or } \left(\left|f(\mathrm{gBest}) - f(\mathbf{p}_i)\right| < \eta f(\mathrm{gBest}),\ \|\mathbf{wo}_{p_i}\| < \|\mathbf{wo}_{\mathrm{gBest}}\|\right), \\ \mathrm{gBest}, & \text{else}, \end{cases} \tag{8}$$

where $f(\mathbf{p}_i)$, $f(\mathrm{pBest}_i)$, and $f(\mathrm{gBest})$ are the fitness values of the $i$th particle's position, the best previous position of the $i$th particle, and the global best position of the swarm; $\mathbf{wo}_{p_i}$, $\mathbf{wo}_{\mathrm{pBest}_i}$, and $\mathbf{wo}_{\mathrm{gBest}}$ are the corresponding output weights of the position of the $i$th particle, the best previous position of the $i$th particle, and the global best position, obtained by the MP inverse. By this updating criterion, particles with smaller fitness values or smaller norms are more likely to be selected as $\mathrm{pBest}_i$ or $\mathrm{gBest}$.
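The criterion in (8) can be read as: accept a candidate if its fitness is clearly better, or if its fitness is comparable within the tolerance $\eta$ and its output-weight norm is smaller. A sketch (illustrative Python; the value of $\eta$ is our assumption, since this section does not state it):

```python
def accept(f_cand, f_best, norm_cand, norm_best, eta=0.05):
    """Selection rule of Eq. (8) for updating pBest/gBest.
    f_* are fitness (RMSE) values, norm_* are output-weight norms;
    eta = 0.05 is an illustrative tolerance, not from the paper."""
    clearly_better = (f_best - f_cand) > eta * f_best
    comparable = abs(f_best - f_cand) < eta * f_best
    return clearly_better or (comparable and norm_cand < norm_best)

# a candidate with comparable fitness but smaller norm is accepted
ok = accept(0.99, 1.0, norm_cand=1.0, norm_best=2.0)
```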

Step 4. Calculate each particle's local attractor $\mathbf{P}_i$ and the mean best position $\mathrm{mBest}$ according to (4) and (5).

Step 5. Update each particle's new position according to (3).

Finally, we repeat Step 2 to Step 5 until the maximum number of iterations is reached. Thus the network trained by ELM with the optimized input weights and hidden biases is obtained, and then the optimized network is applied to the benchmark problems.


Table 1: Specification of 3 classification problems.

Names                Attributes   Classes   Training set   Validation set   Testing set
Satellite image      36           7         2661           1774             2000
Image segmentation   19           7         1000           524              786
Diabetes problem     8            2         346            230              192

Table 2: Mean training and testing accuracy of six methods on different classification problems.

Algorithms   Satellite image      Image segmentation   Diabetes problem
             Training  Testing    Training  Testing    Training  Testing
Q-ELM        0.890     0.876      0.965     0.960      0.854     0.842
P-ELM        0.884     0.872      0.938     0.947      0.836     0.813
G-ELM        0.877     0.869      0.938     0.934      0.812     0.804
ELM          0.873     0.852      0.930     0.921      0.783     0.773
SVM          0.879     0.870      0.931     0.916      0.792     0.781
BP           0.856     0.849      0.919     0.892      0.785     0.776

[Figure 1: Single hidden layer feedforward network with $s$-variables. Inputs $x_1, x_2, \ldots, x_n$ feed hidden neurons $h_1, h_2, \ldots, h_N$ through switches $S_1, S_2, \ldots, S_N$, which connect to outputs $O_1, \ldots, O_m$.]

In the proposed algorithm each particle represents one possible solution to the optimization problem and is a combination of components with different meanings and different ranges.

All components of a particle are firstly initialized to continuous values between 0 and 1. Therefore, before calculating the corresponding output weights and the fitness evaluation in Step 2, they need to be converted to their real values.

For the input weights and biases, they are given by

$$z_{ij} = \left(z_l^{\max} - z_l^{\min}\right) p_{ij} + z_l^{\min}, \tag{9}$$

where $z_l^{\max} = 1$ and $z_l^{\min} = -1$ are the upper and lower bounds for the input weights and hidden biases. For the $s$-parameters, they are given by

$$z_{ij} = \mathrm{round}\left(p_{ij}\right), \tag{10}$$

where $\mathrm{round}(\cdot)$ is a function that rounds $p_{ij}$ to the nearest integer (0 or 1 in this case). After the conversion of all components of a particle, the fitness of each particle can then be evaluated.
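Decoding a particle's [0, 1] components into network parameters per (9) and (10) can be sketched as follows (an illustrative Python sketch; the layout follows Eq. (6), and the names are ours):

```python
import numpy as np

def decode_particle(p, n, K, z_min=-1.0, z_max=1.0):
    """Split a particle [w..., b..., s...] (Eq. (6)) and convert:
    weights and biases by Eq. (9), structure bits by Eq. (10)."""
    n_wb = n * K + K                           # number of weight + bias entries
    real = (z_max - z_min) * p[:n_wb] + z_min  # Eq. (9): rescale to [-1, 1]
    W = real[:n * K].reshape(K, n)             # input weights, one row per neuron
    b = real[n * K:]                           # hidden biases
    s = np.round(p[n_wb:]).astype(int)         # Eq. (10): 0/1 neuron switches
    return W, b, s

# usage: n = 2 inputs, K = 2 hidden neurons
p = np.array([0.0, 0.5, 1.0, 0.0, 0.5, 1.0, 0.2, 0.7])
W, b, s = decode_particle(p, n=2, K=2)
```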

Table 3: Mean training time of six methods on different classification problems (in seconds).

Algorithms   Satellite image   Image segmentation   Diabetes problem
Q-ELM        178.56            45.36                19.23
P-ELM        198.33            39.08                20.69
G-ELM        274.07            54.26                22.12
ELM          0.034             0.027                0.011
SVM          9.54              3.61                 1.43
BP           34.75             16.92                3.06

5. Evaluation on Some Classification Applications

In essence, engine diagnosis is a pattern classification problem. Therefore, in this section we firstly apply the developed Q-ELM to some real-world classification applications and compare it with five existing algorithms. They are PSO-optimized ELM (P-ELM) [18], genetic algorithm optimized ELM (G-ELM) [18], standard ELM, BP, and SVM.

The performances of all algorithms are tested on three benchmark classification datasets, which are listed in Table 1. The training dataset, validation dataset, and testing dataset are randomly generated at each trial of simulations according to the corresponding numbers in Table 1. The performances of these algorithms are listed in Tables 2 and 3.

For the three optimized ELMs, the population size is 100 and the maximum number of iterations is 50. The selection criteria for P-ELM and Q-ELM include the norm of the output weights, as in (8), while the selection criterion for G-ELM considers only the testing accuracy on the validation dataset and does not include the norm of the output weights, as suggested in [19]. Instead, G-ELM incorporates Tikhonov's regularization in the least squares algorithm to improve the network's generalization capability.


[Figure 2: Schematic of the studied turbine fan engine. A: inlet; B: fan; C: high pressure compressor; D: main combustor; E: high pressure turbine; F: low pressure turbine; G: external duct; H: mixer; I: afterburner; J: nozzle.]

In G-ELM the probability of crossover is 0.5 and the mutation probability is 1.0. In Q-ELM and P-ELM the inertial weight is set to decrease from 1.2 to 0.4 linearly with the iterations. In Q-ELM the contraction-expansion coefficient $\beta$ is set to decrease from 1.0 to 0.5 linearly with the iterations. The ELM methods are set with different numbers of initial hidden neurons according to the different applications.

There are many variants of the BP algorithm; a faster BP algorithm called the Levenberg-Marquardt algorithm is used in our simulations, and a very efficient implementation of the Levenberg-Marquardt algorithm is provided by the MATLAB package. As SVM is a binary classifier, here the SVM algorithm has been expanded to a "One versus One" multiclass SVM to classify the multiple fault classes. The parameters for the SVM are $C = 10$ and $\gamma = 2$ [20]. The imposed noise level is $N_l = 0.1$.

In order to account for the stochastic nature of these algorithms, all of the six methods are run 10 times separately for each classification problem, and the results shown in Tables 2 and 3 are the mean performance values over 10 trials. All simulations have been made in the MATLAB R2008a environment running on a PC with a 2.5 GHz CPU with 2 cores and 2 GB RAM.

It can be concluded from Table 2 that, in general, the optimized ELM methods obtained better classification results than ELM, SVM, and BP. Q-ELM outperforms all the other methods: it obtains the best mean testing and training accuracy on all three classification problems. This suggests that Q-ELM is a good choice for engine fault diagnosis applications.

Also, it can be observed clearly that the training times of the three optimized ELM methods are much longer than those of the others. This is mainly because the optimized ELM methods need to repeatedly execute the parameter-optimization steps. ELM costs the least training time among these methods.

6. Engine Diagnosis Applications

6.1. Engine Selection and Modeling. The developed Q-ELM was also applied to fault diagnostics of a gas turbine engine

Table 4: Description of two engine operating points.

Operating point   Fuel flow (kg/s)   Velocity (Mach number)   Altitude (km)
A                 1.142              0.8                      5.8
B                 1.155              1.2                      6.9

and was compared with other methods. In this study we focus on a two-shaft turbine fan engine with a mixer and an afterburner (for confidentiality reasons the engine type is omitted). This engine is composed of several components, such as the low pressure compressor (LPC) or fan, high pressure compressor (HPC), low pressure turbine (LPT), and high pressure turbine (HPT), and can be illustrated as shown in Figure 2.

The gas turbine engine is susceptible to a lot of physicalproblems and these problems may result in the componentfault and reduce the component flow capacity and isentropicefficiencyThese component faults can result in the deviationsof some engine performance parameters such as pressuresand temperatures across different engine components It isa practical way to detect and isolate the default componentusing engine performance data

However, performance data of a real engine with component faults is very difficult to collect; thus the component faults are usually simulated by an engine mathematical model, as suggested in [10, 11].

We have already developed a performance model for this two-shaft turbofan engine in the MATLAB environment. In this study we use the performance model to simulate the behavior of the engine with and without component faults.

The engine component faults can be simulated by the isentropic efficiency deterioration of different engine components. By implanting corresponding component defects with a certain magnitude of isentropic efficiency deterioration into the engine performance model, we can obtain simulated engine performance parameter data with component faults.

6 International Journal of Aerospace Engineering

Table 5: Single and multiple fault cases

        Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
LPC     F        -        -        -        F        F        F        F
HPC     -        F        -        -        F        -        F        -
LPT     -        -        F        -        -        F        F        F
HPT     -        -        -        F        -        -        -        F

The engine operating point, which is primarily defined by the fuel flow rate, has a significant effect on the engine performance. Therefore, engine fault diagnostics should be conducted at a specified operating point. In this study we consider two different engine operating points; the fuel flow and environment setting parameters are listed in Table 4. The engine defect diagnostics was conducted on these operating points separately.

6.2. Generating the Component Fault Dataset. In this study, different engine component fault cases were considered as the eight classes shown in Table 5. The first four classes represent four single fault cases: the low pressure compressor (LPC) fault case, high pressure compressor (HPC) fault case, low pressure turbine (LPT) fault case, and high pressure turbine (HPT) fault case. Each class has only one component fault, represented by an "F". Class 5 and class 6 are dual fault cases: the LPC + HPC fault case and the LPC + LPT fault case. The last two classes are triple fault cases: the LPC + HPC + LPT fault case and the LPC + LPT + HPT fault case.

For each single fault case listed in Table 5, 50 instances were generated by randomly selecting the corresponding component isentropic efficiency deterioration magnitude within the range 1-5%. For the dual fault cases, 100 instances were generated for each class by randomly setting the isentropic efficiency deterioration of the two faulty components within the range 1-5% simultaneously. For the triple fault cases, 300 instances were generated for each case using the same method.
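The sampling scheme above can be sketched as follows. This is a minimal illustration only: the `simulate_engine` argument stands in for the authors' MATLAB performance model (which is not public), and the 1-5% deterioration range is applied as a fraction.

```python
import random

COMPONENTS = ["LPC", "HPC", "LPT", "HPT"]

# Fault classes from Table 5: which components are degraded in each class.
FAULT_CLASSES = {
    1: ["LPC"], 2: ["HPC"], 3: ["LPT"], 4: ["HPT"],        # single faults
    5: ["LPC", "HPC"], 6: ["LPC", "LPT"],                  # dual faults
    7: ["LPC", "HPC", "LPT"], 8: ["LPC", "LPT", "HPT"],    # triple faults
}

# Instances per class: 50 single, 100 dual, 300 triple.
INSTANCES = {1: 50, 2: 50, 3: 50, 4: 50, 5: 100, 6: 100, 7: 300, 8: 300}

def sample_deterioration(faulty, low=0.01, high=0.05):
    """Draw a random isentropic efficiency deterioration (1-5%) for each
    faulty component; healthy components get zero deterioration."""
    return {c: random.uniform(low, high) if c in faulty else 0.0
            for c in COMPONENTS}

def generate_fault_dataset(simulate_engine, seed=0):
    """Build (feature vector, class label) pairs for all eight fault classes.
    `simulate_engine` stands in for the engine performance model."""
    random.seed(seed)
    data = []
    for cls, faulty in FAULT_CLASSES.items():
        for _ in range(INSTANCES[cls]):
            det = sample_deterioration(faulty)
            data.append((simulate_engine(det), cls))
    return data

# Stub in place of the authors' engine model: returns the deterioration
# vector itself as a stand-in "performance deviation" feature vector.
dataset = generate_fault_dataset(lambda det: [det[c] for c in COMPONENTS])
print(len(dataset))  # prints 1000: 200 single + 200 dual + 600 triple
```

This reproduces the instance counts stated in the text (200 single fault and 800 multiple fault instances per operating point, before adding the healthy state instance).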

Thus we have 200 single fault data instances, 800 multiple fault instances, and one healthy state instance for each operating point condition. These instances are then divided into a training dataset, a validation dataset, and a testing dataset.

In this study, for each operating point condition, 100 single fault instances (25 randomly selected instances for each single fault class), 400 multiple fault instances (50 randomly selected instances for each dual fault case and 150 for each triple fault case), and the healthy state instance were used as the training dataset; 60 single fault instances (15 randomly selected instances for each single fault class) and 240 multiple fault instances (30 randomly selected instances for each dual fault case and 90 for each triple fault case) were used as the testing dataset. The remaining 200 instances were used as the validation dataset.

The input parameters of the training, validation, and test datasets are the relative deviations of the simulated engine performance parameters with component faults from the "healthy" engine parameters. These parameters include the low pressure rotor rotational speed n1, the high pressure rotor rotational speed n2, the total pressure and total temperature after the LPC (p*22, T*22), the total pressure and total temperature after the HPC (p*3, T*3), the total pressure and total temperature after the HPT (p*44, T*44), and the total pressure and total temperature after the LPT (p*5, T*5). In this study all the input parameters have been normalized into the range [0, 1].

In real engine applications there inevitably exist sensor noises. Therefore, all input data are contaminated with measurement noise to simulate real engine sensory signals, as in the following equation:

X_n = X + N_l * sigma * rand(1, n),  (11)

where X is the clean input parameter, N_l denotes the imposed noise level, and sigma is the standard deviation of the dataset.
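Equation (11) can be implemented as the following sketch. The uniform distribution for `rand(1, n)` (as in MATLAB's `rand`) is an assumption, since the paper does not state the distribution explicitly:

```python
import random
import statistics

def add_measurement_noise(x, noise_level=0.1, seed=None):
    """Contaminate a clean parameter vector x following Eq. (11):
    X_n = X + N_l * sigma * rand(1, n),
    where sigma is the standard deviation of the data and rand is
    assumed to draw uniform values in [0, 1), as MATLAB's rand does."""
    rng = random.Random(seed)
    sigma = statistics.pstdev(x)   # standard deviation of the dataset
    return [xi + noise_level * sigma * rng.random() for xi in x]

clean = [0.2, 0.4, 0.6, 0.8]
noisy = add_measurement_noise(clean, noise_level=0.1, seed=42)
```

With N_l = 0.1 (the level used in the paper), each sample is perturbed by at most 10% of the dataset's standard deviation.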

6.3. Engine Component Fault Diagnostics by the 6 Methods. The proposed engine diagnostic method using Q-ELM is demonstrated on single and multiple fault cases and compared with P-ELM, G-ELM, ELM, BP, and SVM.

6.3.1. Parameter Settings. The parameter settings are the same as in Section 5, except that all ELM methods are set with 100 initial hidden neurons. All six methods are run 10 times separately for each condition, and the results shown in Tables 6, 7, and 8 are the mean performance values.

6.3.2. Comparisons of the Six Methods. The performances of the 6 methods were compared on both condition A and condition B. Tables 6 and 7 list the mean classification accuracies of the 6 methods on each component fault class, and Table 8 lists the mean training time of each method.

It can be seen from Table 6 (condition A) that Q-ELM obtained the best results on C1, C4, C6, C7, and C8, while P-ELM performed the best on C2, C3, and C5. From Table 7 we can see that P-ELM performs the best on one single fault case (C4), and Q-ELM obtains the highest classification accuracy on all the remaining test cases.

In general, the optimized ELMs obtained better classification results than ELM, SVM, and BP on both single fault and multiple fault cases. This conclusion is also demonstrated in Figures 3(a) and 3(b), where the mean classification accuracies obtained by any of the optimized ELM methods are higher than those obtained by the other three methods. It can also be observed that the classification performance of ELM is on par with that of SVM and is better than that of BP. Due to the nonlinear nature of the gas turbine engine, the multiple fault cases are more difficult to diagnose than the single fault cases: it can be seen from Tables 6 and 7 that the mean classification accuracy of the multiple fault diagnostics is lower than that of the single fault diagnostics cases.


Table 6: Mean classification accuracy (%) of the operating point A condition

Method   C1     C2     C3     C4     C5     C6     C7     C8
Q-ELM    93.33  93.55  91.75  90.45  90.05  88.56  85.26  84.95
P-ELM    90.48  93.61  92.32  89.23  90.44  87.73  83.05  82.60
G-ELM    92.35  93.04  90.17  89.36  88.24  88.03  84.69  83.50
ELM      89.09  87.56  87.52  86.90  85.27  84.42  81.16  78.33
SVM      89.03  88.67  84.96  85.15  85.68  85.02  80.54  79.63
BP       87.29  86.45  82.37  83.63  80.57  78.69  72.20  73.24

Table 7: Mean classification accuracy (%) of the operating point B condition

Method   C1     C2     C3     C4     C5     C6     C7     C8
Q-ELM    93.45  92.07  91.53  91.07  91.45  90.19  86.26  85.26
P-ELM    92.57  91.58  91.51  91.48  89.63  89.70  86.18  84.44
G-ELM    93.31  91.74  90.53  91.06  91.23  88.65  85.75  82.83
ELM      89.21  88.33  86.97  87.09  85.95  83.55  80.24  77.06
SVM      90.66  89.14  87.14  86.52  85.68  82.12  81.42  76.54
BP       86.25  84.51  82.55  80.69  76.05  74.63  73.04  70.95

[Figure 3: Mean classification accuracies of the 6 methods (Q-ELM, P-ELM, G-ELM, ELM, SVM, BP) under conditions A and B. (a) Mean classification accuracies on single fault cases. (b) Mean classification accuracies on multiple fault cases.]

Table 8: Mean training time on training data (s)

              Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   19.54   23.08   20.77   0.0113   0.147   2.96
Condition B   20.23   23.54   21.83   0.0108   0.179   3.13

Notice that the two mean accuracy curves in both Figures 3(a) and 3(b) are very close to each other; we can conclude that the engine operating point has no obvious effect on the classification accuracies of any of the methods.

The training times of the three optimized ELM methods are much longer than those of the others. Much of the training time of the optimized ELMs is spent on iteratively evaluating all the individuals.

6.3.3. Comparisons with Fewer Input Parameters. In Section 6.3.2 we trained ELM and the other methods using a dataset with 10 input parameters. But in real applications the gas turbine engine may be equipped with only a small number of sensors, so fewer input parameters are available. In this section we reduce the input parameters from 10 to 6: n1, n2,


Table 9: Mean classification accuracy (%) of the operating point A condition (6 input parameters)

Method   C1       C2       C3       C4       C5       C6       C7       C8
Q-ELM    92.0857  91.8108  91.0178  90.0239  88.8211  86.3334  83.0490  81.7370
P-ELM    90.0024  90.8502  91.1142  88.1223  86.7068  85.2295  80.5144  80.1667
G-ELM    89.0763  91.0038  87.7888  88.2232  85.4190  83.9324  79.6553  78.3395
ELM      85.8684  84.8761  83.4979  82.3035  81.6548  78.2046  76.1726  74.7309
SVM      83.9047  85.8948  84.4922  83.2766  81.1377  74.6795  74.5938  74.1796
BP       83.2354  82.2824  80.9588  79.8117  72.3411  71.9088  70.9969  68.4934

Table 10: Mean classification accuracy (%) of the operating point B condition (6 input parameters)

Method   C1       C2       C3       C4       C5       C6       C7       C8
Q-ELM    91.6970  90.3428  89.8130  89.3616  87.7345  86.4981  84.6418  83.6606
P-ELM    91.5233  90.1717  89.6428  89.1923  86.5645  86.3305  82.4815  83.5021
G-ELM    88.9153  87.6171  87.1090  86.6763  85.0338  84.8484  81.1512  80.2104
ELM      84.2722  83.9392  84.0175  82.9732  80.9403  80.5231  77.0267  75.9607
SVM      84.1035  82.8468  83.3550  82.9361  80.2821  81.1347  76.5557  74.6450
BP       80.6848  80.4343  79.9449  79.0281  74.8724  71.0306  69.1692  68.1630

[Figure 4: Mean classification accuracies of the 6 methods (Q-ELM, P-ELM, G-ELM, ELM, SVM, BP) under conditions A and B with 6 input parameters. (a) Mean classification accuracies on single fault cases. (b) Mean classification accuracies on multiple fault cases.]

Table 11: Mean training time on training data (s)

              Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   17.40   21.26   18.75   0.0094   0.126   2.35
Condition B   18.79   20.55   19.06   0.0077   0.160   2.47

p*22, p*3, T*44, and p*5. We trained all the methods with the same training dataset with only 6 input parameters, and the results are listed in Tables 9-11.

Compared with Tables 6 and 7, we can see that the number of input parameters has a great impact on the diagnostic accuracy: the results in Tables 9 and 10 are generally lower than those in Tables 6 and 7.

The optimized ELM methods still show better performance than the others; this conclusion is demonstrated in Figures 4(a) and 4(b). Q-ELM obtained the highest accuracies in all cases except C3 in Table 9, and it attained the best results in all cases in Table 10.

The good results obtained by our method indicate that the selection criteria, which include both the fitness value on the


[Figure 5: Mean evolution of the RMSE of the three methods (Q-ELM, P-ELM, G-ELM): (a) fault class C5 and (b) fault class C7.]

validation dataset and the norm of the output weights, help the algorithms to obtain better generalization performance.

In order to evaluate the proposed method in depth, the mean evolution of the accuracy on the validation dataset over 10 trials by the three optimized ELM methods on the fault class C5 and C7 cases in condition B is plotted in Figure 5.

It can be observed from Figure 5 that Q-ELM has much better convergence performance than the other two methods and obtains the best mean accuracy after 50 iterations; P-ELM is better than G-ELM. In fact, Q-ELM can achieve the same accuracy level as G-ELM within only half of the total iterations for these two cases.

The high classification rate of our method is mainly because the quantum-behaved mechanism helps QPSO to search the space more effectively, thus outperforming P-ELM and G-ELM by converging to a better result.

7. Conclusions

In this paper, a new hybrid learning approach for SLFNs, named Q-ELM, was proposed. The proposed algorithm optimizes both the neural network parameters (input weights and hidden biases) and the hidden layer structure using QPSO, while the output weights are calculated by the Moore-Penrose generalized inverse, as in the original ELM. In the optimization of the network parameters, not only the RMSE on the validation dataset but also the norm of the output weights is included in the selection criteria.

To validate the performance of the proposed Q-ELM, we applied it to some real-world classification applications and a gas turbine fan engine fault diagnostics problem and compared it with some state-of-the-art methods. Results show that our method obtains the highest classification accuracy in most test cases and shows a clear advantage over the other optimized ELM methods, SVM, and BP. This advantage becomes more prominent when the number of input parameters in the training dataset is reduced, which suggests that our method is a more suitable tool for real engine fault diagnostics applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] X. Yang, W. Shen, S. Pang, B. Li, K. Jiang, and Y. Wang, "A novel gas turbine engine health status estimation method using quantum-behaved particle swarm optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 302514, 11 pages, 2014.

[2] Z. N. S. Vanini, K. Khorasani, and N. Meskin, "Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach," Information Sciences, vol. 259, pp. 234-251, 2014.

[3] T. Brotherton and T. Johnson, "Anomaly detection for advanced military aircraft using neural networks," in Proceedings of the IEEE Aerospace Conference, vol. 6, pp. 3113-3123, IEEE, Big Sky, Mont, USA, March 2001.

[4] S. Ogaji, Y. G. Li, S. Sampath, and R. Singh, "Gas path fault diagnosis of a turbofan engine from transient data using artificial neural networks," in Proceedings of the 2003 ASME Turbine and Aeroengine Congress, ASME Paper No. GT2003-38423, Atlanta, Ga, USA, June 2003.

[5] L. C. Jaw, "Recent advancements in aircraft engine health management (EHM) technologies and recommendations for the next step," in Proceedings of the 50th ASME International Gas Turbine & Aeroengine Technical Congress, Reno, Nev, USA, June 2005.


[6] S. Osowski, K. Siwek, and T. Markiewicz, "MLP and SVM networks - a comparative study," in Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 37-40, Espoo, Finland, June 2004.

[7] M. Zedda and R. Singh, "Fault diagnosis of a turbofan engine using neural networks: a quantitative approach," in Proceedings of the 34th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA 98-3602, Cleveland, Ohio, USA, July 1998.

[8] C. Romessis, A. Stamatis, and K. Mathioudakis, "A parametric investigation of the diagnostic ability of probabilistic neural networks on turbofan engines," in Proceedings of the ASME Turbo Expo 2001: Power for Land, Sea and Air, Paper no. 2001-GT-0011, New Orleans, La, USA, June 2001.

[9] A. J. Volponi, H. DePold, R. Ganguli, and C. Daguang, "The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study," Journal of Engineering for Gas Turbines and Power, vol. 125, no. 4, pp. 917-924, 2003.

[10] S.-M. Lee, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of SUAV gas turbine engine using hybrid SVM-artificial neural network method," Journal of Mechanical Science and Technology, vol. 23, no. 1, pp. 559-568, 2009.

[11] S.-M. Lee, W.-J. Choi, T.-S. Roh, and D.-W. Choi, "A study on separate learning algorithm using support vector machine for defect diagnostics of gas turbine engine," Journal of Mechanical Science and Technology, vol. 22, no. 12, pp. 2489-2497, 2008.

[12] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.

[13] G.-B. Huang, Z. Bai, L. L. C. Kasun, and C. M. Vong, "Local receptive fields based extreme learning machine," IEEE Computational Intelligence Magazine, vol. 10, no. 2, pp. 18-29, 2015.

[14] G.-B. Huang, "What are extreme learning machines? Filling the gap between Frank Rosenblatt's dream and John von Neumann's puzzle," Cognitive Computation, vol. 7, no. 3, pp. 263-278, 2015.

[15] Q.-Y. Zhu, A. K. Qin, P. N. Suganthan, and G.-B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759-1763, 2005.

[16] J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, "A modified quantum-behaved particle swarm optimization," in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294-301, Beijing, China, May 2007.

[17] M. Xi, J. Sun, and W. Xu, "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position," Applied Mathematics and Computation, vol. 205, no. 2, pp. 751-759, 2008.

[18] T. Matias, F. Souza, R. Araujo, and C. H. Antunes, "Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine," Neurocomputing, vol. 129, pp. 428-436, 2014.

[19] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87-93, 2013.

[20] D.-H. Seo, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of gas turbine engine using hybrid SVM-ANN with module system in off-design condition," Journal of Mechanical Science and Technology, vol. 23, no. 3, pp. 677-685, 2009.



where y_ij(t + 1) is the position of the ith particle with respect to the jth dimension in iteration t, and P_ij is the local attractor of the ith particle in the jth dimension, defined as

P_ij(t) = phi_j(t) * pBest_ij(t) + (1 - phi_j(t)) * gBest_j(t),  (4)

mBest_j(t) = (1/N_p) * sum_{i=1}^{N_p} pBest_ij(t),  (5)

where N_p is the number of particles and pBest_i represents the best previous position of the ith particle; gBest is the global best position of the particle swarm; mBest is the mean best position, defined as the mean of all the best positions of the population; k, u, and phi are random numbers distributed uniformly in [0, 1]; and beta is the contraction-expansion coefficient, used to control the convergence speed of the algorithm.
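The update rules (4) and (5) can be sketched together with a position update as follows. Since Eq. (3) itself is not reproduced in this excerpt, the code assumes the standard QPSO form y(t+1) = P +/- beta * |mBest - y(t)| * ln(1/u):

```python
import math
import random

def qpso_step(positions, pbests, gbest, beta, rng=random):
    """One QPSO iteration over all particles.
    positions, pbests: lists of equal-length position vectors; gbest: vector.
    Implements the local attractor (4), the mean best position (5), and the
    standard QPSO position update assumed for Eq. (3)."""
    n_p, dim = len(positions), len(gbest)
    # Eq. (5): mean of all personal best positions, per dimension.
    mbest = [sum(p[j] for p in pbests) / n_p for j in range(dim)]
    new_positions = []
    for i in range(n_p):
        new_pos = []
        for j in range(dim):
            phi = rng.random()
            # Eq. (4): local attractor between pbest and gbest.
            attractor = phi * pbests[i][j] + (1 - phi) * gbest[j]
            u = rng.random() or 1e-12        # avoid log(1/0)
            step = beta * abs(mbest[j] - positions[i][j]) * math.log(1 / u)
            # A random coin chooses the direction of the quantum jump.
            new_pos.append(attractor + step if rng.random() < 0.5
                           else attractor - step)
        new_positions.append(new_pos)
    return new_positions

random.seed(1)
swarm = [[0.1, 0.2], [0.3, 0.4]]
new_swarm = qpso_step(swarm, pbests=[[0.1, 0.2], [0.3, 0.4]],
                      gbest=[0.2, 0.3], beta=0.75)
```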

4. Extreme Learning Machine Optimized by QPSO

Because the output weights in ELM are calculated using random input weights and hidden biases, there may exist a set of nonoptimal or even unnecessary input weights and hidden neurons. As a result, ELM may need more hidden neurons than conventional gradient-based learning algorithms, and this can lead to an ill-conditioned hidden output matrix, which would cause worse generalization performance.

In this section we propose a new algorithm, named Q-ELM, to solve these problems. Unlike some other optimized ELM algorithms, the proposed algorithm uses QPSO to optimize not only the input weights and hidden biases but also the structure of the neural network (the hidden layer neurons). The detailed steps of the proposed algorithm are as follows.

Step 1 (initializing). Firstly, we generate the population randomly. Each particle in the population is constituted by a set of input weights, hidden biases, and s-variables:

p_i = [w_11, ..., w_NK, b_1, ..., b_K, s_1, ..., s_K],  (6)

where s_i, i = 1, ..., K, is a variable which defines the structure of the network. As illustrated in Figure 1, if s_i = 0, then the ith hidden neuron is not considered; otherwise, if s_i = 1, the ith hidden neuron is retained and the sigmoid function is used as its activation function.

All components constituting a particle are randomly initialized within the range [0, 1].

Step 2 (fitness evaluation). The corresponding output weights of each particle are computed using the Moore-Penrose generalized inverse. Then the fitness of each particle is evaluated by the root mean square error between the desired output and the estimated output. To avoid overfitting, the fitness evaluation is performed on the validation dataset instead of the whole training dataset:

f(p_i) = sqrt( sum_{j=1}^{N_v} || sum_{i=1}^{K} beta_i * g(w_i . x_j + b_i) - t_j ||^2 / N_v ),  (7)

where N_v is the number of samples in the validation dataset.
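Step 2 can be sketched as below, assuming a sigmoid hidden layer and NumPy's `pinv` for the Moore-Penrose inverse; the function returns the validation RMSE of Eq. (7) together with the output weights:

```python
import numpy as np

def elm_fitness(w, b, s, X_train, T_train, X_val, T_val):
    """Fitness of one particle (Eq. (7)): fit ELM output weights with the
    Moore-Penrose generalized inverse, then return the RMSE on the
    validation set together with the output weights.
    w: (K, n) input weights, b: (K,) hidden biases, s: (K,) 0/1 flags."""
    active = s.astype(bool)          # s_i = 0 prunes the ith hidden neuron
    w_a, b_a = w[active], b[active]

    def hidden(X):
        return 1.0 / (1.0 + np.exp(-(X @ w_a.T + b_a)))   # sigmoid layer

    beta = np.linalg.pinv(hidden(X_train)) @ T_train      # MP inverse
    err = hidden(X_val) @ beta - T_val
    rmse = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))
    return rmse, beta

# Toy example with random data (8 candidate hidden neurons, 2 pruned).
rng = np.random.default_rng(0)
X_tr, T_tr = rng.normal(size=(30, 4)), rng.normal(size=(30, 2))
X_val, T_val = rng.normal(size=(10, 4)), rng.normal(size=(10, 2))
w = rng.normal(size=(8, 4))
b = rng.normal(size=8)
s = np.array([1, 0, 1, 1, 0, 1, 1, 1])
rmse, beta = elm_fitness(w, b, s, X_tr, T_tr, X_val, T_val)
```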

Step 3 (updating pBest_i and gBest). With the fitness values of all particles in the population, the best previous position pBest_i of the ith particle and the global best position gBest of the swarm are updated. As suggested in [15], a neural network tends to have better generalization performance with output weights of smaller norm. Therefore, in this paper, the fitness value and the norm of the output weights are considered together when updating pBest_i and gBest. The updating strategy is as follows:

pBest_i = p_i,      if (f(pBest_i) - f(p_i) > eta * f(pBest_i)) or
                       (|f(pBest_i) - f(p_i)| < eta * f(pBest_i) and ||wo_{p_i}|| < ||wo_{pBest_i}||),
pBest_i = pBest_i,  otherwise;

gBest = p_i,        if (f(gBest) - f(p_i) > eta * f(gBest)) or
                       (|f(gBest) - f(p_i)| < eta * f(gBest) and ||wo_{p_i}|| < ||wo_{gBest}||),
gBest = gBest,      otherwise,  (8)

where f(p_i), f(pBest_i), and f(gBest) are the fitness values of the ith particle's position, the best previous position of the ith particle, and the global best position of the swarm; wo_{p_i}, wo_{pBest_i}, and wo_{gBest} are the corresponding output weights of the position of the ith particle, the best previous position of the ith particle, and the global best position, obtained by the MP inverse. By this updating criterion, particles with smaller fitness values or smaller norms are more likely to be selected as pBest_i or gBest.

Step 4. Calculate each particle's local attractor P_i and the mean best position mBest according to (4) and (5).

Step 5. Update each particle's new position according to (3).

Finally, we repeat Step 2 to Step 5 until the maximum number of iterations is reached. Thus the network trained by ELM with the optimized input weights and hidden biases is obtained, and the optimized network is then applied to the benchmark problems.
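The selection criterion of Eq. (8) can be isolated into a small helper. The tolerance eta below is set to an illustrative 0.02, since the paper does not report the value it used:

```python
def prefer(candidate_fit, candidate_norm, best_fit, best_norm, eta=0.02):
    """Selection criterion of Eq. (8): accept the candidate if its fitness
    is clearly better, or if the fitness is comparable (within a relative
    tolerance eta) and its output-weight norm is smaller."""
    clearly_better = (best_fit - candidate_fit) > eta * best_fit
    comparable = abs(best_fit - candidate_fit) < eta * best_fit
    return clearly_better or (comparable and candidate_norm < best_norm)
```

A candidate with a comparable fitness but a smaller output-weight norm is accepted, which is exactly the bias toward small-norm solutions motivated by [15].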


Table 1: Specification of the 3 classification problems

Names                Attributes   Classes   Training set   Validation set   Testing set
Satellite image      36           7         2661           1774             2000
Image segmentation   19           7         1000           524              786
Diabetes problem     8            2         346            230              192

Table 2: Mean training and testing accuracy of the six methods on different classification problems

             Satellite image       Image segmentation    Diabetes problem
Algorithms   Training   Testing    Training   Testing    Training   Testing
Q-ELM        0.890      0.876      0.965      0.960      0.854      0.842
P-ELM        0.884      0.872      0.938      0.947      0.836      0.813
G-ELM        0.877      0.869      0.938      0.934      0.812      0.804
ELM          0.873      0.852      0.930      0.921      0.783      0.773
SVM          0.879      0.870      0.931      0.916      0.792      0.781
BP           0.856      0.849      0.919      0.892      0.785      0.776

[Figure 1: Single hidden layer feedforward network with s-variables: inputs x1, ..., xn feed hidden neurons h1, ..., hN through structure variables S1, ..., SN, producing outputs O1, ..., Om.]

In the proposed algorithm, each particle represents one possible solution to the optimization problem and is a combination of components with different meanings and different ranges.

All components of a particle are firstly initialized to continuous values between 0 and 1. Therefore, before calculating the corresponding output weights and the fitness evaluation in Step 2, they need to be converted to their real values.

For the input weights and biases, the real values are given by

z_ij = (z_l^max - z_l^min) * p_ij + z_l^min,  (9)

where z_l^max = 1 and z_l^min = -1 are the upper and lower bounds for the input weights and hidden biases.

For the s-parameters, the values are given by

z_ij = round(p_ij),  (10)

where round(.) rounds p_ij to the nearest integer (0 or 1 in this case). After the conversion of all components of a particle, the fitness of each particle can then be evaluated.
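The conversions (9) and (10) amount to a simple decoding of the particle layout in Eq. (6); a sketch follows (the helper name `decode_particle` is ours):

```python
def decode_particle(p, n_inputs, K):
    """Decode a particle with components in [0, 1] (layout of Eq. (6)):
    n_inputs*K input weights, then K biases, then K s-variables."""
    z_min, z_max = -1.0, 1.0

    def scale(v):
        # Eq. (9): map [0, 1] to [z_min, z_max] = [-1, 1].
        return (z_max - z_min) * v + z_min

    n_w = n_inputs * K
    weights = [scale(v) for v in p[:n_w]]
    biases = [scale(v) for v in p[n_w:n_w + K]]
    s_flags = [int(round(v)) for v in p[n_w + K:n_w + 2 * K]]  # Eq. (10)
    return weights, biases, s_flags

# Example: 2 inputs, 2 hidden neurons -> particle of length 2*2 + 2 + 2 = 8.
w, b, s = decode_particle([0.0, 0.5, 1.0, 0.25, 0.5, 1.0, 0.2, 0.9], 2, 2)
```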

Table 3: Mean training time of the six methods on different classification problems (s)

Algorithms   Satellite image   Image segmentation   Diabetes problem
Q-ELM        178.56            45.36                19.23
P-ELM        198.33            39.08                20.69
G-ELM        274.07            54.26                22.12
ELM          0.034             0.027                0.011
SVM          9.54              3.61                 1.43
BP           34.75             16.92                3.06

5. Evaluation on Some Classification Applications

In essence, engine diagnosis is a pattern classification problem. Therefore, in this section we first apply the developed Q-ELM to some real-world classification applications and compare it with five existing algorithms: PSO-optimized ELM (P-ELM) [18], genetic-algorithm-optimized ELM (G-ELM) [18], standard ELM, BP, and SVM.

The performances of all algorithms are tested on three benchmark classification datasets, which are listed in Table 1. The training, validation, and testing datasets are randomly generated at each trial of the simulations according to the corresponding numbers in Table 1. The performances of these algorithms are listed in Tables 2 and 3.

For the three optimized ELMs, the population size is 100 and the maximum number of iterations is 50. The selection criteria for P-ELM and Q-ELM include the norm of the output weights, as in (8), while the selection criterion for G-ELM considers only the testing accuracy on the validation dataset and does not include the norm of the output weights, as suggested in [19]. Instead, G-ELM incorporates Tikhonov's regularization in the least squares algorithm to improve the network's generalization capability.
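For reference, the Tikhonov-regularized least squares used by G-ELM can be written in one line; the regularization parameter lam below is an illustrative value, since the paper does not report the one it used:

```python
import numpy as np

def elm_output_weights_tikhonov(H, T, lam=1e-3):
    """Least squares output weights with Tikhonov (ridge) regularization:
    beta = (H^T H + lam * I)^(-1) H^T T.
    H: hidden layer output matrix, T: target matrix."""
    K = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(K), H.T @ T)

# With a tiny lam this reduces to the ordinary least squares solution.
rng = np.random.default_rng(1)
H = rng.normal(size=(20, 5))
T = rng.normal(size=(20, 2))
beta = elm_output_weights_tikhonov(H, T, lam=1e-8)
```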


[Figure 2: Schematic of the studied turbofan engine. A: inlet, B: fan, C: high pressure compressor, D: main combustor, E: high pressure turbine, F: low pressure turbine, G: external duct, H: mixer, I: afterburner, J: nozzle.]

In G-ELM, the probability of crossover is 0.5 and the mutation probability is 0.1. In Q-ELM and P-ELM, the inertial weight is set to decrease from 1.2 to 0.4 linearly with the iterations. In Q-ELM, the contraction-expansion coefficient beta is set to decrease from 1.0 to 0.5 linearly with the iterations. The ELM methods are set with different numbers of initial hidden neurons according to the different applications.

There are many variants of the BP algorithm; a faster one, the Levenberg-Marquardt algorithm, is used in our simulations, and a very efficient implementation of it is provided by the MATLAB package. As SVM is a binary classifier, the SVM algorithm has been expanded here to a "one versus one" multiclass SVM to classify the multiple fault classes. The parameters for the SVM are C = 10 and gamma = 2 [20]. The imposed noise level is N_l = 0.1.

In order to account for the stochastic nature of thesealgorithms all of the six methods are run 10 times separatelyfor each classification problem and the results shown inTables 2 and 3 are the mean performance values in 10trials All simulations have been made in MATLAB R2008aenvironment running on a PCwith 25GHzCPUwith 2 coresand 2GBRAM

It can be concluded from Table 2 that in general theoptimized ELMmethod obtained better classification resultsthan ELM SVM and BP Q-ELM outperforms all the othermethods It obtains the best mean testing and training accu-racy on all these three classification problems This suggeststhat Q-ELM is a good choice for engine fault diagnosisapplication

Also it can be observed clearly that the training times ofthree optimized ELM methods are much more than the oth-ersThis mainly is because the optimized ELMmethods needto repeatedly execute some steps of parameters optimizationAnd ELM costs the least training time among these methods

6 Engine Diagnosis Applications

61 Engine Selection and Modeling The developed Q-ELMwas also applied to fault diagnostics of a gas turbine engine

Table 4 Description of two engine operating points

Operatingpoint

Fuel flowkgs

Environment setting parametersVelocity (Mach

number) Altitude (km)

A 1142 08 58B 1155 12 69

and was compared with other methods In this study wefocus on a two-shaft turbine fan engine with a mixer andan afterburner (for confidentiality reasons the engine typeis omitted) This engine is composed of several componentssuch as low pressure compressor (LPC) or fan high pressurecompressor (HPC) low pressure turbine (LPT) and highpressure turbine (HPT) and can be illustrated as shown inFigure 2

The gas turbine engine is susceptible to a lot of physicalproblems and these problems may result in the componentfault and reduce the component flow capacity and isentropicefficiencyThese component faults can result in the deviationsof some engine performance parameters such as pressuresand temperatures across different engine components It isa practical way to detect and isolate the default componentusing engine performance data

But the performance data of real engine with componentfault is very difficult to collect thus the component fault isusually simulated by enginemathematicalmodel as suggestedin [10 11]

We have already developed a performance model for thistwo-shaft turbine fan engine in MATLAB environment Inthis study we use the performance model to simulate thebehavior of the engine with or without component faults

The engine component faults can be simulated by isentropic efficiency deterioration of different engine components. By implanting corresponding component defects with a certain magnitude of isentropic efficiency deterioration into the engine performance model, we can obtain simulated engine performance parameter data with component faults.
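As an illustration of this fault-implantation step, the sketch below degrades the isentropic efficiency of selected components before they would be fed to the performance model. It is written in Python rather than the authors' MATLAB, and the function name, the baseline efficiency values, and the interface to the engine model are illustrative assumptions, not values from the paper.

```python
def implant_fault(baseline_efficiencies, faulty_components, deterioration):
    """Return component isentropic efficiencies with a fault implanted.

    baseline_efficiencies: healthy isentropic efficiency per component
    faulty_components: names of degraded components, e.g. ["LPC", "HPT"]
    deterioration: fractional efficiency drop, e.g. 0.03 for a 3% fault
    """
    eff = dict(baseline_efficiencies)  # leave the healthy map untouched
    for comp in faulty_components:
        eff[comp] *= (1.0 - deterioration)
    return eff

# Illustrative healthy efficiencies (assumed values, not from the paper).
healthy = {"LPC": 0.86, "HPC": 0.85, "LPT": 0.90, "HPT": 0.89}
faulty = implant_fault(healthy, ["LPC"], 0.03)
# 'faulty' would then be passed to the engine performance model to
# produce the simulated performance parameters with a component fault.
```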

6 International Journal of Aerospace Engineering

Table 5: Single and multiple fault cases.

Component   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6   Class 7   Class 8
LPC         F         —         —         —         F         F         F         F
HPC         —         F         —         —         F         —         F         —
LPT         —         —         F         —         —         F         F         F
HPT         —         —         —         F         —         —         —         F

The engine operating point, which is primarily defined by the fuel flow rate, has a significant effect on the engine performance. Therefore engine fault diagnostics should be conducted at a specified operating point. In this study we consider two different engine operating points. The fuel flow and environment setting parameters are listed in Table 4. The engine defect diagnostics was conducted on these operating points separately.

6.2. Generating Component Fault Dataset. In this study different engine component fault cases were considered as the eight classes shown in Table 5. The first four classes represent four single fault cases: the low pressure compressor (LPC) fault case, the high pressure compressor (HPC) fault case, the low pressure turbine (LPT) fault case, and the high pressure turbine (HPT) fault case. Each class has only one component fault, which is represented with an "F". Class 5 and class 6 are dual fault cases: the LPC + HPC fault case and the LPC + LPT fault case. The last two classes are triple fault cases: the LPC + HPC + LPT fault case and the LPC + LPT + HPT fault case.

For each single fault case listed in Table 5, 50 instances were generated by randomly selecting the corresponding component isentropic efficiency deterioration magnitude within the range 1–5%. For dual fault cases, 100 instances were generated for each class by randomly setting the isentropic efficiency deterioration of the two faulty components within the range 1–5% simultaneously. For triple fault cases, each case generated 300 instances using the same method.
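The random sampling of deterioration magnitudes described above can be sketched as follows (Python for illustration; the 1–5% range follows the text, while the uniform distribution and the fixed seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed only for reproducibility

def sample_fault_magnitudes(components, n_instances, low=0.01, high=0.05):
    """Draw a random isentropic efficiency deterioration for each listed
    faulty component, one row per generated fault instance."""
    return rng.uniform(low, high, size=(n_instances, len(components)))

single = sample_fault_magnitudes(["LPC"], 50)                 # 50 per single fault class
dual = sample_fault_magnitudes(["LPC", "HPC"], 100)           # 100 per dual fault class
triple = sample_fault_magnitudes(["LPC", "HPC", "LPT"], 300)  # 300 per triple fault class
```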

Thus we have 200 single fault data instances, 800 multiple fault instances, and one healthy state instance for each operating point condition. These instances are then divided into training, validation, and testing datasets.

In this study, for each operating point condition, 100 single fault case instances (25 instances randomly selected from each single fault class), 400 multiple fault case instances (50 instances randomly selected from each dual fault case and 150 from each triple fault case), and the one healthy state instance were used as the training dataset. 60 single fault case instances (15 randomly selected from each single fault class) and 240 multiple fault case instances (30 randomly selected from each dual fault case and 90 from each triple fault case) were used as the testing dataset. The remaining 200 instances were used as the validation dataset.

The input parameters of the training, validation, and test datasets are the relative deviations of the simulated engine performance parameters with component faults from the "healthy" engine parameters. These parameters include the low pressure rotor rotational speed $n_1$, the high pressure rotor rotational speed $n_2$, the total pressure and total temperature after the LPC ($p^*_{22}$, $T^*_{22}$), after the HPC ($p^*_3$, $T^*_3$), after the HPT ($p^*_{44}$, $T^*_{44}$), and after the LPT ($p^*_5$, $T^*_5$). In this study all the input parameters have been normalized into the range [0, 1].

In real engine applications there inevitably exists sensor noise. Therefore all input data are contaminated with measurement noise to simulate real engine sensory signals, as in the following equation:

$$X_n = X + N_l \cdot \sigma \cdot \mathrm{rand}(1, n), \quad (11)$$

where $X$ is the clean input parameter, $N_l$ denotes the imposed noise level, and $\sigma$ is the standard deviation of the dataset.
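The [0, 1] normalization and the noise contamination of (11) can be sketched as follows (illustrative Python, not the authors' MATLAB; the column-wise statistics and the uniform rand follow the text, the rest is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax_normalize(X):
    """Normalize each input parameter column into [0, 1]."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def add_measurement_noise(X, noise_level=0.1):
    """Contaminate the inputs with noise as in (11):
    X_n = X + N_l * sigma * rand, with sigma the column's std."""
    X = np.asarray(X, dtype=float)
    sigma = X.std(axis=0)
    return X + noise_level * sigma * rng.random(X.shape)
```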

6.3. Engine Component Fault Diagnostics by Six Methods. The proposed engine diagnostic method using Q-ELM is demonstrated with single and multiple fault cases and compared with P-ELM, G-ELM, ELM, BP, and SVM.

6.3.1. Parameter Settings. The parameter settings are the same as in Section 5, except that all ELM methods are set with 100 initial hidden neurons. All six methods are run 10 times separately for each condition, and the results shown in Tables 6, 7, and 8 are the mean performance values.

6.3.2. Comparisons of the Six Methods. The performances of the 6 methods were compared on both condition A and condition B. Tables 6 and 7 list the mean classification accuracies of the 6 methods on each component fault class. Table 8 lists the mean training time of each method.

It can be seen from Table 6 (condition A) that Q-ELM obtained the best results on C1, C4, C6, C7, and C8, while P-ELM performed the best on C2, C3, and C5. From Table 7 we can see that P-ELM performs the best on one single fault case (C4) and Q-ELM obtains the highest classification accuracy on all the remaining test cases.

In general, the optimized ELMs obtained better classification results than ELM, SVM, and BP on both single fault and multiple fault cases. This conclusion is also demonstrated in Figures 3(a) and 3(b), where the mean classification accuracies obtained by any of the optimized ELM methods are higher than those obtained by the other three methods. It can also be observed that the classification performance of ELM is on par with that of SVM and is better than that of BP. Due to the nonlinear nature of the gas turbine engine, the multiple fault cases are more difficult to diagnose than the single fault cases. It can be seen from Tables 6 and 7 that the mean classification accuracy of multiple fault diagnostics is lower than that of the single fault diagnostics cases.


Table 6: Mean classification accuracy (%) of operating point A condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.33   93.55   91.75   90.45   90.05   88.56   85.26   84.95
P-ELM    90.48   93.61   92.32   89.23   90.44   87.73   83.05   82.60
G-ELM    92.35   93.04   90.17   89.36   88.24   88.03   84.69   83.50
ELM      89.09   87.56   87.52   86.90   85.27   84.42   81.16   78.33
SVM      89.03   88.67   84.96   85.15   85.68   85.02   80.54   79.63
BP       87.29   86.45   82.37   83.63   80.57   78.69   72.20   73.24

Table 7: Mean classification accuracy (%) of operating point B condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.45   92.07   91.53   91.07   91.45   90.19   86.26   85.26
P-ELM    92.57   91.58   91.51   91.48   89.63   89.70   86.18   84.44
G-ELM    93.31   91.74   90.53   91.06   91.23   88.65   85.75   82.83
ELM      89.21   88.33   86.97   87.09   85.95   83.55   80.24   77.06
SVM      90.66   89.14   87.14   86.52   85.68   82.12   81.42   76.54
BP       86.25   84.51   82.55   80.69   76.05   74.63   73.04   70.95

[Figure 3: Mean classification accuracies of the 6 methods of different conditions (curves for condition A and condition B; y-axis: mean accuracy (%); x-axis: Q-ELM, P-ELM, G-ELM, ELM, SVM, BP). (a) Mean classification accuracies on single fault cases of the 6 methods; (b) mean classification accuracies on multiple fault cases of the 6 methods.]

Table 8: Mean training time on training data (seconds).

              Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   19.54   23.08   20.77   0.0113   0.147   2.96
Condition B   20.23   23.54   21.83   0.0108   0.179   3.13

Noticing that the two mean accuracy curves in both Figures 3(a) and 3(b) are very close to each other, we can conclude that the engine operating point has no obvious effect on the classification accuracies of all methods.

The training times of the three optimized ELM methods are much longer than those of the others. Much of the training time of the optimized ELMs is spent on evaluating all the individuals iteratively.

6.3.3. Comparisons with Fewer Input Parameters. In Section 6.3.2 we trained ELM and the other methods using a dataset with 10 input parameters. But in real applications the gas turbine engine may be equipped with only a small number of sensors; thus we have fewer input parameters. In this section we reduce the input parameters from 10 to 6; they are $n_1$, $n_2$,


Table 9: Mean classification accuracy (%) of operating point A condition.

Method   C1        C2        C3        C4        C5        C6        C7        C8
Q-ELM    92.0857   91.8108   91.0178   90.0239   88.8211   86.3334   83.0490   81.7370
P-ELM    90.0024   90.8502   91.1142   88.1223   86.7068   85.2295   80.5144   80.1667
G-ELM    89.0763   91.0038   87.7888   88.2232   85.4190   83.9324   79.6553   78.3395
ELM      85.8684   84.8761   83.4979   82.3035   81.6548   78.2046   76.1726   74.7309
SVM      83.9047   85.8948   84.4922   83.2766   81.1377   74.6795   74.5938   74.1796
BP       83.2354   82.2824   80.9588   79.8117   72.3411   71.9088   70.9969   68.4934

Table 10: Mean classification accuracy (%) of operating point B condition.

Method   C1        C2        C3        C4        C5        C6        C7        C8
Q-ELM    91.6970   90.3428   89.8130   89.3616   87.7345   86.4981   84.6418   83.6606
P-ELM    91.5233   90.1717   89.6428   89.1923   86.5645   86.3305   82.4815   83.5021
G-ELM    88.9153   87.6171   87.1090   86.6763   85.0338   84.8484   81.1512   80.2104
ELM      84.2722   83.9392   84.0175   82.9732   80.9403   80.5231   77.0267   75.9607
SVM      84.1035   82.8468   83.3550   82.9361   80.2821   81.1347   76.5557   74.6450
BP       80.6848   80.4343   79.9449   79.0281   74.8724   71.0306   69.1692   68.1630

[Figure 4: Mean classification accuracies of the 6 methods of different conditions (curves for condition A and condition B; y-axis: mean accuracy (%); x-axis: Q-ELM, P-ELM, G-ELM, ELM, SVM, BP). (a) Mean classification accuracies on single fault cases of the 6 methods; (b) mean classification accuracies on multiple fault cases of the 6 methods.]

Table 11: Mean training time on training data (seconds).

              Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   17.40   21.26   18.75   0.0094   0.126   2.35
Condition B   18.79   20.55   19.06   0.0077   0.160   2.47

$p^*_{22}$, $p^*_3$, $T^*_{44}$, and $p^*_5$. We trained all the methods with the same training dataset with only 6 input parameters, and the results are listed in Tables 9–11.

Compared with Tables 6 and 7, we can see that the number of input parameters has a great impact on diagnostic accuracy: the results in Tables 9 and 10 are generally lower than the results in Tables 6 and 7.

The optimized ELM methods still show better performance than the others; this conclusion is demonstrated in Figures 4(a) and 4(b). Q-ELM obtained the highest accuracies in all cases except C3 in Table 9, and it attained the best results in all cases in Table 10.

The good results obtained by our method indicate that the selection criteria, which include both the fitness value on the


[Figure 5: Mean evolution of the RMSE of the three methods: (a) fault class C5 and (b) fault class C7 (curves for Q-ELM, P-ELM, and G-ELM; y-axis: mean accuracy; x-axis: iterations 0–50).]

validation dataset and the norm of the output weights, help the algorithms to obtain better generalization performance.

In order to evaluate the proposed method in depth, the mean evolution of the accuracy on the validation dataset over 10 trials by the three optimized ELM methods on the fault class C5 and C7 cases in condition B is plotted in Figure 5.

It can be observed from Figure 5 that Q-ELM has much better convergence performance than the other two methods and obtains the best mean accuracy after 50 iterations; P-ELM is better than G-ELM. In fact, Q-ELM can achieve the same accuracy level as G-ELM within only half of the total iterations for these two cases.

The high classification rate of our method is mainly because the quantum mechanics helps QPSO to search more effectively in the search space, thus outperforming P-ELM and G-ELM in converging to a better result.
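For reference, the QPSO position update of [16, 17] that drives this search can be sketched as follows (an illustrative Python sketch of the standard update, not code from the paper; array shapes and the attractor sampling are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def qpso_update(positions, pbest, gbest, beta):
    """One QPSO position update in the style of [16, 17]: each particle is
    resampled around a local attractor p (a random convex combination of its
    personal best and the global best), with a jump scaled by beta and by
    the distance to the mean best position mbest."""
    pop, dim = positions.shape
    mbest = pbest.mean(axis=0)                  # mean of personal bests
    phi = rng.random((pop, dim))
    p = phi * pbest + (1.0 - phi) * gbest       # local attractors
    u = rng.random((pop, dim)) + 1e-300         # avoid log(1/0)
    sign = np.where(rng.random((pop, dim)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - positions) * np.log(1.0 / u)
```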

7. Conclusions

In this paper a new hybrid learning approach for SLFN, named Q-ELM, was proposed. The proposed algorithm optimizes both the neural network parameters (input weights and hidden biases) and the hidden layer structure using QPSO, and the output weights are calculated by the Moore-Penrose generalized inverse, as in the original ELM. In the optimization of the network parameters, not only the RMSE on the validation dataset but also the norm of the output weights is included in the selection criteria.
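The Moore-Penrose step recalled above can be sketched as follows (illustrative Python; the sigmoid activation and the toy dimensions are assumptions, not the paper's MATLAB code):

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, W, b):
    """Hidden layer output matrix H (sigmoid activation assumed)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_output_weights(H, T):
    """Output weights via the Moore-Penrose generalized inverse:
    beta = pinv(H) @ T, the minimum-norm least-squares solution."""
    return np.linalg.pinv(H) @ T

X = rng.random((20, 4))          # toy inputs
T = rng.random((20, 2))          # toy targets
W = rng.normal(size=(4, 10))     # input weights (tuned by QPSO in the paper)
b = rng.normal(size=10)          # hidden biases (tuned by QPSO in the paper)
H = hidden_output(X, W, b)
beta = elm_output_weights(H, T)  # shape (10, 2)
```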

To validate the performance of the proposed Q-ELM, we applied it to some real-world classification applications and a gas turbine fan engine fault diagnostics problem and compared it with some state-of-the-art methods. Results show that our method obtains the highest classification accuracy in most test cases and shows a clear advantage over the other optimized ELM methods, SVM, and BP. This advantage becomes more prominent when the number of input parameters in the training dataset is reduced, which suggests that our method is a more suitable tool for real engine fault diagnostics applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] X. Yang, W. Shen, S. Pang, B. Li, K. Jiang, and Y. Wang, "A novel gas turbine engine health status estimation method using quantum-behaved particle swarm optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 302514, 11 pages, 2014.

[2] Z. N. S. Vanini, K. Khorasani, and N. Meskin, "Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach," Information Sciences, vol. 259, pp. 234–251, 2014.

[3] T. Brotherton and T. Johnson, "Anomaly detection for advanced military aircraft using neural networks," in Proceedings of the IEEE Aerospace Conference, vol. 6, pp. 3113–3123, IEEE, Big Sky, Mont, USA, March 2001.

[4] S. Ogaji, Y. G. Li, S. Sampath, and R. Singh, "Gas path fault diagnosis of a turbofan engine from transient data using artificial neural networks," in Proceedings of the 2003 ASME Turbine and Aeroengine Congress, ASME Paper No. GT2003-38423, Atlanta, Ga, USA, June 2003.

[5] L. C. Jaw, "Recent advancements in aircraft engine health management (EHM) technologies and recommendations for the next step," in Proceedings of the 50th ASME International Gas Turbine & Aeroengine Technical Congress, Reno, Nev, USA, June 2005.

[6] S. Osowski, K. Siwek, and T. Markiewicz, "MLP and SVM networks: a comparative study," in Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 37–40, Espoo, Finland, June 2004.

[7] M. Zedda and R. Singh, "Fault diagnosis of a turbofan engine using neural networks: a quantitative approach," in Proceedings of the 34th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA 98-3602, Cleveland, Ohio, USA, July 1998.

[8] C. Romessis, A. Stamatis, and K. Mathioudakis, "A parametric investigation of the diagnostic ability of probabilistic neural networks on turbofan engines," in Proceedings of the ASME Turbo Expo 2001: Power for Land, Sea and Air, Paper no. 2001-GT-0011, New Orleans, La, USA, June 2001.

[9] A. J. Volponi, H. DePold, R. Ganguli, and C. Daguang, "The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study," Journal of Engineering for Gas Turbines and Power, vol. 125, no. 4, pp. 917–924, 2003.

[10] S.-M. Lee, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of SUAV gas turbine engine using hybrid SVM-artificial neural network method," Journal of Mechanical Science and Technology, vol. 23, no. 1, pp. 559–568, 2009.

[11] S.-M. Lee, W.-J. Choi, T.-S. Roh, and D.-W. Choi, "A study on separate learning algorithm using support vector machine for defect diagnostics of gas turbine engine," Journal of Mechanical Science and Technology, vol. 22, no. 12, pp. 2489–2497, 2008.

[12] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[13] G.-B. Huang, Z. Bai, L. L. C. Kasun, and C. M. Vong, "Local receptive fields based extreme learning machine," IEEE Computational Intelligence Magazine, vol. 10, no. 2, pp. 18–29, 2015.

[14] G.-B. Huang, "What are extreme learning machines? Filling the gap between Frank Rosenblatt's dream and John von Neumann's puzzle," Cognitive Computation, vol. 7, no. 3, pp. 263–278, 2015.

[15] Q.-Y. Zhu, A. K. Qin, P. N. Suganthan, and G.-B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759–1763, 2005.

[16] J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, "A modified quantum-behaved particle swarm optimization," in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294–301, Beijing, China, May 2007.

[17] M. Xi, J. Sun, and W. Xu, "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position," Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.

[18] T. Matias, F. Souza, R. Araujo, and C. H. Antunes, "Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine," Neurocomputing, vol. 129, pp. 428–436, 2014.

[19] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87–93, 2013.

[20] D.-H. Seo, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of gas turbine engine using hybrid SVM-ANN with module system in off-design condition," Journal of Mechanical Science and Technology, vol. 23, no. 3, pp. 677–685, 2009.



Table 1: Specification of 3 classification problems.

Names                Attributes   Classes   Training dataset   Validation set   Testing set
Satellite image      36           7         2661               1774             2000
Image segmentation   19           7         1000               524              786
Diabetes problem     8            2         346                230              192

Table 2: Mean training and testing accuracy of six methods on different classification problems.

             Satellite image       Image segmentation    Diabetes problem
Algorithms   Training   Testing    Training   Testing    Training   Testing
Q-ELM        0.890      0.876      0.965      0.960      0.854      0.842
P-ELM        0.884      0.872      0.938      0.947      0.836      0.813
G-ELM        0.877      0.869      0.938      0.934      0.812      0.804
ELM          0.873      0.852      0.930      0.921      0.783      0.773
SVM          0.879      0.870      0.931      0.916      0.792      0.781
BP           0.856      0.849      0.919      0.892      0.785      0.776

[Figure 1: Single hidden layer feedforward network with s-variable (inputs x_1, ..., x_n; switch variables S_1, ..., S_N attached to hidden neurons h_1, ..., h_N; outputs O_1, ..., O_m).]

In the proposed algorithm each particle represents one possible solution to the optimization problem and is a combination of components with different meanings and different ranges.

All components of a particle are first initialized to continuous values between 0 and 1. Therefore, before calculating the corresponding output weights and the fitness evaluation in Step 2, they need to be converted to their real values.

For the input weights and biases, they are given by

$$z_{ij} = (z_l^{\max} - z_l^{\min}) p_{ij} + z_l^{\min}, \quad (9)$$

where $z_l^{\max} = 1$ and $z_l^{\min} = -1$ are the upper and lower bounds for the input weights and hidden biases.

For the s-parameters, they are given by

$$z_{ij} = \mathrm{round}(p_{ij}), \quad (10)$$

where round(·) is a function that rounds $p_{ij}$ to the nearest integer (0 or 1 in this case). After the conversion of all components of a particle, the fitness of each particle can then be evaluated.
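The conversion of (9) and (10) can be sketched as follows (illustrative Python; the particle layout, with weight components first and s-parameters last, is an assumption):

```python
import numpy as np

def decode_particle(p, n_weights, z_min=-1.0, z_max=1.0):
    """Convert a particle (components in [0, 1]) into real network parameters:
    eq. (9) rescales input weights/biases into [z_min, z_max];
    eq. (10) rounds the s-parameters to 0/1 (hidden neuron off/on)."""
    p = np.asarray(p, dtype=float)
    weights = (z_max - z_min) * p[:n_weights] + z_min   # eq. (9)
    s = np.round(p[n_weights:]).astype(int)             # eq. (10)
    return weights, s

w, s = decode_particle([0.0, 0.5, 1.0, 0.2, 0.8], n_weights=3)
# w -> [-1.0, 0.0, 1.0]; s -> [0, 1]
```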

Table 3: Mean training time of six methods on different classification problems (in seconds).

Algorithms   Satellite image   Image segmentation   Diabetes problem
Q-ELM        178.56            45.36                19.23
P-ELM        198.33            39.08                20.69
G-ELM        274.07            54.26                22.12
ELM          0.034             0.027                0.011
SVM          9.54              3.61                 1.43
BP           34.75             16.92                3.06

5. Evaluation on Some Classification Applications

In essence, engine diagnosis is a pattern classification problem. Therefore, in this section we first apply the developed Q-ELM to some real-world classification applications and compare it with five existing algorithms: PSO optimized ELM (P-ELM) [18], genetic algorithm optimized ELM (G-ELM) [18], standard ELM, BP, and SVM.

The performances of all algorithms are tested on three benchmark classification datasets, which are listed in Table 1. The training, validation, and testing datasets are randomly generated at each trial of the simulations according to the corresponding numbers in Table 1. The performances of these algorithms are listed in Tables 2 and 3.

For the three optimized ELMs, the population size is 100 and the maximum number of iterations is 50. The selection criteria for P-ELM and Q-ELM include the norm of the output weights, as in (8), while the selection criterion for G-ELM considers only the testing accuracy on the validation dataset and does not include the norm of the output weights, as suggested in [19]. Instead, G-ELM incorporates Tikhonov's regularization in the least squares algorithm to improve the network's generalization capability.
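The exact selection criterion is given by (8) earlier in the paper; one plausible reading of such a criterion, lower validation RMSE with near-ties broken by a smaller output-weight norm (cf. [15, 19]), can be sketched as follows (the tolerance value and the tie-breaking rule are assumptions):

```python
def better(candidate, incumbent, tol=1e-3):
    """Compare two solutions, each a (validation_rmse, output_weight_norm)
    pair: a clearly lower RMSE wins; near-ties are broken by the smaller
    norm of output weights, which favours better generalization."""
    rmse_c, norm_c = candidate
    rmse_i, norm_i = incumbent
    if rmse_i - rmse_c > tol * rmse_i:          # clearly better fit
        return True
    if abs(rmse_c - rmse_i) <= tol * rmse_i and norm_c < norm_i:
        return True                              # tie: smaller weight norm
    return False
```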


[Figure 2: Schematic of the studied turbine fan engine. A: inlet; B: fan; C: high pressure compressor; D: main combustor; E: high pressure turbine; F: low pressure turbine; G: external duct; H: mixer; I: afterburner; J: nozzle.]

In G-ELM the probability of crossover is 0.5 and the mutation probability is 1.0. In Q-ELM and P-ELM the inertia weight is set to decrease from 1.2 to 0.4 linearly with the iterations. In Q-ELM the contraction-expansion coefficient β is set to decrease from 1.0 to 0.5 linearly with the iterations. The ELM methods are set with different numbers of initial hidden neurons according to the different applications.
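The linearly decreasing coefficients mentioned above can be sketched as a simple schedule (illustrative Python; the function name is an assumption):

```python
def linear_schedule(start, end, iteration, max_iterations):
    """Linearly decrease a coefficient over the run, as used for the
    inertia weight (1.2 -> 0.4) and the QPSO contraction-expansion
    coefficient beta (1.0 -> 0.5)."""
    frac = iteration / max_iterations
    return start + (end - start) * frac

beta_start = linear_schedule(1.0, 0.5, 0, 50)   # 1.0 at the first iteration
beta_mid = linear_schedule(1.0, 0.5, 25, 50)    # 0.75 halfway through
```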

There are many variants of the BP algorithm; a faster BP algorithm called the Levenberg-Marquardt algorithm is used in our simulations, and a very efficient implementation of the Levenberg-Marquardt algorithm is provided by the MATLAB package. As SVM is a binary classifier, the SVM algorithm has been expanded to a "One versus One" multiclass SVM to classify the multiple fault classes. The parameters for the SVM are $C = 10$ and $\gamma = 2$ [20]. The imposed noise level is $N_l = 0.1$.
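The "One versus One" expansion can be sketched as follows (illustrative Python; `pairwise_predict` stands in for a trained binary SVM per class pair and is a hypothetical interface, not a library call):

```python
from itertools import combinations
import numpy as np

def ovo_predict(pairwise_predict, classes, X):
    """'One versus One' multiclass wrapper: consult one binary classifier
    per class pair and take a majority vote per sample.
    pairwise_predict(a, b, X) must return, for each row of X, either a or b."""
    votes = np.zeros((len(X), len(classes)), dtype=int)
    index = {c: i for i, c in enumerate(classes)}
    for a, b in combinations(classes, 2):
        winners = pairwise_predict(a, b, X)
        for row, w in enumerate(winners):
            votes[row, index[w]] += 1
    return [classes[i] for i in votes.argmax(axis=1)]
```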

In order to account for the stochastic nature of these algorithms, all six methods are run 10 times separately for each classification problem, and the results shown in Tables 2 and 3 are the mean performance values over 10 trials. All simulations have been made in the MATLAB R2008a environment running on a PC with a 2.5 GHz CPU with 2 cores and 2 GB RAM.


1 1198992

8 International Journal of Aerospace Engineering

Table 9 Mean classification accuracy of operating point A condition

Method Fault classesC1 C2 C3 C4 C5 C6 C7 C8

Q-ELM 920857 918108 910178 900239 888211 863334 830490 817370P-ELM 900024 908502 911142 881223 867068 852295 805144 801667G-ELM 890763 910038 877888 882232 854190 839324 796553 783395ELM 858684 848761 834979 823035 816548 782046 761726 747309SVM 839047 858948 844922 832766 811377 746795 745938 741796BP 832354 822824 809588 798117 723411 719088 709969 684934

Table 10 Mean classification accuracy of operating point B condition

Method Fault classesC1 C2 C3 C4 C5 C6 C7 C8

Q-ELM 916970 903428 898130 893616 877345 864981 846418 836606P-ELM 915233 901717 896428 891923 865645 863305 824815 835021G-ELM 889153 876171 871090 866763 850338 848484 811512 802104ELM 842722 839392 840175 829732 809403 805231 770267 759607SVM 841035 828468 833550 829361 802821 811347 765557 746450BP 806848 804343 799449 790281 748724 710306 691692 681630

80

82

84

86

88

90

92

Mea

n ac

cura

cy (

)

Condition ACondition B

P-ELM G-ELM ELM SVM BPQ-ELMMethods

(a) Mean classification accuracies on single fault cases of the 6methods

70

72

74

76

78

80

82

84

86

Mea

n ac

cura

cy (

)

Condition ACondition B

P-ELM G-ELM ELM SVM BPQ-ELMMethods

(b) Mean classification accuracies on multiple fault cases of the 6methods

Figure 4 Mean classification accuracies of the 6 methods of different conditions

Table 11 Mean training time on training data

Q-ELM G-ELM P-ELM ELM SVM BPCondition A 1740 2126 1875 00094 0126 235Condition B 1879 2055 1906 00077 0160 247

119901lowast

22 119901lowast3 119879lowast44 and 119901lowast

5 We trained all the methods with the

same training dataset with only 6 input parameters and theresults are listed in Tables 9ndash11

Compared with Tables 6 and 7 we can see that the num-ber of input parameters has great impact on diagnostics accu-racy The results in Tables 9 and 10 are generally lower thanresults in Tables 6 and 7

The optimized ELM methods still show better perfor-mance than the others this conclusion can be demonstratedin Figures 4(a) and 4(b) Q-ELM obtained the highestaccuracies in all cases except C3 in Table 6 and it attained thebest results in all cases in Table 10

The good results obtained by our method indicate thatthe selection criteria which include both the fitness value in

International Journal of Aerospace Engineering 9

Q-ELMP-ELMG-ELM

078

079

08

081

082

083

084

085

086

087

088M

ean

accu

racy

10 20 30 40 500Iterations

(a)

Q-ELMP-ELMG-ELM

075

076

077

078

079

08

081

082

083

084

085

Mea

n ac

cura

cy

10 20 30 40 500Iterations

(b)

Figure 5 Mean evolution of the RMSE of the three methods (a) fault class C5 and (b) fault class C7

validation dataset and the norm of output weights help thealgorithms to obtain better generalization performance

In order to evaluate the proposed method in depth themean evolution of the accuracy on validation dataset of 10trials by three optimized ELMmethods on fault class C5 andC7 cases in condition B is plotted in Figure 5

It can be observed from Figure 5 that Q-ELM has muchbetter convergence performance than the other two methodsand obtains the best mean accuracy after 50 iterations P-ELM is better than G-ELM In fact Q-ELM can achieve thesame accuracy level as G-ELM within only half of the totaliterations for these two cases

The main reason for high classification rate by ourmethod is mainly because the quantum mechanics helpsQPSO to searchmore effectively in search space thus outper-forming P-ELMs and G-ELM in converging to a better result

7 Conclusions

In this paper a new hybrid learning approach for SLFNnamed Q-ELM was proposed The proposed algorithm opti-mizes both the neural network parameters (input weights andhidden biases) and hidden layer structure using QPSO Andthe output weights are calculated by Moore-Penrose gener-alized inverse like in the original ELM In the optimizing ofnetwork parameters not only the RMSE on validation datasetbut also the norm of the output weights is considered to beincluded in the selection criteria

To validate the performance of the proposed Q-ELM weapplied it to some real-world classification applications anda gas turbine fan engine fault diagnostics and compare itwith some state-of-the-art methods Results show that ourmethod obtains the highest classification accuracy in mosttest cases and show great advantage than the other optimized

ELM methods SVM and BP This advantage becomes moreprominent when the number of input parameters in trainingdataset is reduced which suggests that our method is a moresuitable tool for real engine fault diagnostics application

Conflict of Interests

The authors declare no conflict of interests regarding thepublication of this paper

References

[1] X Yang W Shen S Pang B Li K Jiang and Y Wang ldquoAnovel gas turbine engine health status estimation method usingquantum-behaved particle swarm optimizationrdquoMathematicalProblems in Engineering vol 2014 Article ID 302514 11 pages2014

[2] Z N S Vanini K Khorasani and N Meskin ldquoFault detectionand isolation of a dual spool gas turbine engine using dynamicneural networks and multiple model approachrdquo InformationSciences vol 259 pp 234ndash251 2014

[3] T Brotherton and T Johnson ldquoAnomaly detection for advancedmilitary aircraft using neural networksrdquo in Proceedings of theIEEE Aerospace Conference vol 6 pp 3113ndash3123 IEEE Big SkyMont USA March 2001

[4] S Ogaji Y G Li S Sampath and R Singh ldquoGas path fault diag-nosis of a turbofan engine from transient data using artificialneural networksrdquo in Proceedings of the 2003 ASME Turbine andAeroengine Congress ASME Paper No GT2003-38423 AtlantaGa USA June 2003

[5] L C Jaw ldquoRecent advancements in aircraft engine health man-agement (EHM) technologies and recommendations for thenext steprdquo in Proceedings of the 50th ASME International GasTurbine ampAeroengine Technical Congress Reno Nev USA June2005

10 International Journal of Aerospace Engineering

[6] S Osowski K Siwek and T Markiewicz ldquoMLP and SVM net-worksmdasha comparative studyrdquo in Proceedings of the 6th NordicSignal Processing Symposium (NORSIG rsquo04) pp 37ndash40 EspooFinland June 2004

[7] M Zedda and R Singh ldquoFault diagnosis of a turbofan engineusing neural-networks a quantitative approachrdquo in Proceedingsof the 34thAIAA ASME SAE ASEE Joint Propulsion Conferenceand Exhibit AIAA 98-3602 Cleveland Ohio USA July 1998

[8] C Romessis A Stamatis and K Mathioudakis ldquoA parametricinvestigation of the diagnostic ability of probabilistic neuralnetworks on turbofan enginesrdquo in Proceedings of the ASMETurbo Expo 2001 Power for Land Sea and Air Paper no 2001-GT-0011 New Orleans La USA June 2001

[9] A J Volponi H DePold R Ganguli and C Daguang ldquoTheuse of kalman filter and neural network methodologies in gasturbine performance diagnostics a comparative studyrdquo Journalof Engineering for Gas Turbines and Power vol 125 no 4 pp917ndash924 2003

[10] S-M Lee T-S Roh and D-W Choi ldquoDefect diagnostics ofSUAV gas turbine engine using hybrid SVM-artificial neuralnetwork methodrdquo Journal of Mechanical Science and Technol-ogy vol 23 no 1 pp 559ndash568 2009

[11] S-M Lee W-J Choi T-S Roh and D-W Choi ldquoA study onseparate learning algorithm using support vector machine fordefect diagnostics of gas turbine enginerdquo Journal of MechanicalScience and Technology vol 22 no 12 pp 2489ndash2497 2008

[12] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006

[13] G-B Huang Z Bai L L C Kasun and C M Vong ldquoLocalreceptive fields based extreme learning machinerdquo IEEE Com-putational Intelligence Magazine vol 10 no 2 pp 18ndash29 2015

[14] G-B Huang ldquoWhat are extreme learning machines Fillingthe gap between Frank Rosenblattrsquos dream and John Von Neu-mannrsquos puzzlerdquo Cognitive Computation vol 7 no 3 pp 263ndash278 2015

[15] Q-Y Zhu A K Qin P N Suganthan and G-B Huang ldquoEvo-lutionary extreme learning machinerdquo Pattern Recognition vol38 no 10 pp 1759ndash1763 2005

[16] J Sun C-H Lai W-B Xu Y Ding and Z Chai ldquoA modifiedquantum-behaved particle swarm optimizationrdquo in Proceedingsof the 7th International Conference on Computational Science(ICCS rsquo07) pp 294ndash301 Beijing China May 2007

[17] M Xi J Sun and W Xu ldquoAn improved quantum-behavedparticle swarm optimization algorithm with weighted meanbest positionrdquo Applied Mathematics and Computation vol 205no 2 pp 751ndash759 2008

[18] TMatias F Souza R Araujo andCH Antunes ldquoLearning of asingle-hidden layer feedforward neural network using an opti-mized extreme learningmachinerdquoNeurocomputing vol 129 pp428ndash436 2014

[19] F Han H-F Yao and Q-H Ling ldquoAn improved evolutionaryextreme learning machine based on particle swarm optimiza-tionrdquo Neurocomputing vol 116 pp 87ndash93 2013

[20] D-H Seo T-S Roh and D-W Choi ldquoDefect diagnostics of gasturbine engine using hybrid SVM-ANNwith module system inoff-design conditionrdquo Journal of Mechanical Science and Tech-nology vol 23 no 3 pp 677ndash685 2009


International Journal of Aerospace Engineering 5

Figure 2: Schematic of the studied turbine fan engine (A: inlet; B: fan; C: high pressure compressor; D: main combustor; E: high pressure turbine; F: low pressure turbine; G: external duct; H: mixer; I: afterburner; J: nozzle; the numbers mark gas-path stations).

In G-ELM the probability of crossover is 0.5 and the mutation probability is 0.1. In Q-ELM and P-ELM the inertial weight is set to decrease linearly from 1.2 to 0.4 with the iterations. In Q-ELM the contraction-expansion coefficient β is set to decrease linearly from 1.0 to 0.5 with the iterations. The ELM methods are set with different numbers of initial hidden neurons according to the application.
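These linearly decreasing schedules can be sketched as follows (an illustrative helper of ours, not code from the paper; a 50-iteration run is assumed):

```python
def linear_schedule(start, end, iteration, max_iterations):
    """Linearly interpolate a coefficient from `start` down to `end`
    as `iteration` runs from 0 to `max_iterations`."""
    frac = iteration / max_iterations
    return start + (end - start) * frac

# Inertia weight for Q-ELM / P-ELM: 1.2 -> 0.4 over the run
w = [linear_schedule(1.2, 0.4, t, 50) for t in range(51)]
# Contraction-expansion coefficient beta for Q-ELM: 1.0 -> 0.5
beta = [linear_schedule(1.0, 0.5, t, 50) for t in range(51)]
```

The same helper serves both schedules; only the endpoints differ.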

There are many variants of the BP algorithm; a faster variant, the Levenberg-Marquardt algorithm, is used in our simulations, via the very efficient implementation provided by the MATLAB package. As SVM is a binary classifier, the SVM algorithm has been expanded to a "one-versus-one" multiclass SVM to classify the multiple fault classes. The parameters for the SVM are C = 10 and γ = 2 [20]. The imposed noise level is N_l = 0.1.

In order to account for the stochastic nature of these algorithms, all six methods are run 10 times separately for each classification problem, and the results shown in Tables 2 and 3 are the mean performance values over the 10 trials. All simulations have been made in the MATLAB R2008a environment running on a PC with a 2.5 GHz dual-core CPU and 2 GB RAM.

It can be concluded from Table 2 that, in general, the optimized ELM methods obtained better classification results than ELM, SVM, and BP. Q-ELM outperforms all the other methods: it obtains the best mean testing and training accuracy on all three classification problems. This suggests that Q-ELM is a good choice for engine fault diagnosis applications.

Also, it can be clearly observed that the training times of the three optimized ELM methods are much longer than those of the others. This is mainly because the optimized ELM methods need to repeatedly execute the parameter optimization steps. ELM costs the least training time among these methods.

6. Engine Diagnosis Applications

6.1. Engine Selection and Modeling. The developed Q-ELM was also applied to fault diagnostics of a gas turbine engine and was compared with the other methods. In this study we focus on a two-shaft turbine fan engine with a mixer and an afterburner (for confidentiality reasons the engine type is omitted). This engine is composed of several components, such as the low pressure compressor (LPC, or fan), high pressure compressor (HPC), low pressure turbine (LPT), and high pressure turbine (HPT), and is illustrated in Figure 2.

Table 4: Description of two engine operating points.

Operating point   Fuel flow (kg/s)   Velocity (Mach number)   Altitude (km)
A                 1.142              0.8                      5.8
B                 1.155              1.2                      6.9

The gas turbine engine is susceptible to a lot of physical problems, and these problems may result in component faults, reducing the component flow capacity and isentropic efficiency. These component faults can result in deviations of some engine performance parameters, such as the pressures and temperatures across different engine components. It is therefore practical to detect and isolate the faulty component using engine performance data.

But the performance data of a real engine with component faults is very difficult to collect; thus the component fault is usually simulated by an engine mathematical model, as suggested in [10, 11].

We have already developed a performance model for this two-shaft turbine fan engine in the MATLAB environment. In this study we use the performance model to simulate the behavior of the engine with or without component faults.

The engine component faults can be simulated by isentropic efficiency deterioration of different engine components. By implanting corresponding component defects with a certain magnitude of isentropic efficiency deterioration into the engine performance model, we can obtain simulated engine performance parameter data with component faults.


Table 5: Single and multiple fault cases.

Component   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6   Class 7   Class 8
LPC         F         —         —         —         F         F         F         F
HPC         —         F         —         —         F         —         F         —
LPT         —         —         F         —         —         F         F         F
HPT         —         —         —         F         —         —         —         F

The engine operating point, which is primarily defined by the fuel flow rate, has a significant effect on the engine performance. Therefore, engine fault diagnostics should be conducted at a specified operating point. In this study we consider two different engine operating points; the fuel flow and environment setting parameters are listed in Table 4. The engine defect diagnostics was conducted at these operating points separately.

6.2. Generating Component Fault Dataset. In this study, different engine component fault cases were considered as the eight classes shown in Table 5. The first four classes represent four single fault cases: the low pressure compressor (LPC) fault case, high pressure compressor (HPC) fault case, low pressure turbine (LPT) fault case, and high pressure turbine (HPT) fault case. Each class has only one component fault, represented with an "F". Class 5 and class 6 are dual fault cases: the LPC + HPC fault case and the LPC + LPT fault case. The last two classes are triple fault cases: the LPC + HPC + LPT fault case and the LPC + LPT + HPT fault case.

For each single fault case listed in Table 5, 50 instances were generated by randomly selecting the corresponding component isentropic efficiency deterioration magnitude within the range 1–5%. For the dual fault cases, 100 instances were generated for each class by randomly setting the isentropic efficiency deterioration of the two faulty components within the range 1–5% simultaneously. For the triple fault cases, each case generated 300 instances using the same method.

Thus we have 200 single fault data instances, 800 multiple fault instances, and one healthy state instance for each operating point condition. These instances are then divided into training, validation, and testing datasets.

In this study, for each operating point condition, 100 single fault instances (25 randomly selected instances for each single fault class), 400 multiple fault instances (50 randomly selected instances for each dual fault case and 150 for each triple fault case), and one healthy state instance were used as the training dataset. Sixty single fault instances (15 for each single fault class) and 240 multiple fault instances (30 for each dual fault case and 90 for each triple fault case) were used as the testing dataset. The remaining 200 instances were used as the validation dataset.
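The random sampling of deterioration magnitudes described above can be sketched as follows (a minimal illustration in Python rather than the authors' MATLAB model; the function name is ours, and each sampled dict would be fed to the engine performance model to produce one instance):

```python
import random

def sample_fault_instances(faulty_components, n_instances, rng=None):
    """Sample isentropic-efficiency deterioration magnitudes in the
    1-5% range for each faulty component, one dict per instance."""
    rng = rng or random.Random(0)
    return [
        {comp: rng.uniform(0.01, 0.05) for comp in faulty_components}
        for _ in range(n_instances)
    ]

# Instance counts from the paper: 50 per single fault class,
# 100 per dual fault class, 300 per triple fault class.
single = sample_fault_instances(["LPC"], 50)
dual = sample_fault_instances(["LPC", "HPC"], 100)
triple = sample_fault_instances(["LPC", "HPC", "LPT"], 300)
```

The per-class splits (training/validation/testing) would then be drawn from these instance lists.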

The input parameters of the training, validation, and test datasets are the relative deviations of the simulated engine performance parameters with component fault from the "healthy" engine parameters. These parameters are the low pressure rotor rotational speed n1; the high pressure rotor rotational speed n2; the total pressure and total temperature after the LPC, p*22 and T*22; after the HPC, p*3 and T*3; after the HPT, p*44 and T*44; and after the LPT, p*5 and T*5. In this study all the input parameters have been normalized into the range [0, 1].
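The [0, 1] normalization step can be sketched as a generic column-wise min-max scaling (an illustration of ours, not the authors' code):

```python
import numpy as np

def minmax_normalize(X):
    """Scale each column (input parameter) of X into [0, 1]."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)
```

In practice the minimum and maximum would be computed on the training set and reused for the validation and test sets, so that all three share the same scaling.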

In real engine applications there inevitably exist sensor noises. Therefore, all input data are contaminated with measurement noise to simulate real engine sensory signals, as in the following equation:

X_n = X + N_l · σ · rand(1, n),  (11)

where X is the clean input parameter, N_l denotes the imposed noise level, and σ is the standard deviation of the dataset.
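Equation (11) can be sketched as follows (a NumPy rendering of the MATLAB-style rand(1, n) uniform noise; the function name is ours):

```python
import numpy as np

def add_sensor_noise(X, noise_level=0.1, rng=None):
    """Contaminate a clean parameter vector X with measurement noise
    per Eq. (11): Xn = X + Nl * sigma * rand(1, n), where rand draws
    uniformly from [0, 1) and sigma is the dataset standard deviation."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float)
    sigma = X.std()
    noise = noise_level * sigma * rng.random(X.shape)
    return X + noise
```

With the paper's N_l = 0.1, each sample is perturbed by at most 10% of the dataset's standard deviation.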

6.3. Engine Component Fault Diagnostics by Six Methods. The proposed engine diagnostic method using Q-ELM is demonstrated with single and multiple fault cases and compared with P-ELM, G-ELM, ELM, BP, and SVM.

6.3.1. Parameter Settings. The parameter settings are the same as in Section 5, except that all ELM methods are set with 100 initial hidden neurons. All six methods are run 10 times separately for each condition, and the results shown in Tables 6, 7, and 8 are the mean performance values.

6.3.2. Comparisons of the Six Methods. The performances of the 6 methods were compared on both condition A and condition B. Tables 6 and 7 list the mean classification accuracies of the 6 methods on each component fault class, and Table 8 lists the mean training time of each method.

It can be seen from Table 6 (condition A) that Q-ELM obtained the best results on C1, C4, C6, C7, and C8, while P-ELM performed the best on C2, C3, and C5. From Table 7 we can see that P-ELM performs the best on one single fault case (C4), and Q-ELM obtains the highest classification accuracy on all the remaining test cases.

In general, the optimized ELMs obtained better classification results than ELM, SVM, and BP on both single fault and multiple fault cases. This conclusion is also demonstrated in Figures 3(a) and 3(b), where the mean classification accuracies obtained by any of the optimized ELM methods are higher than those obtained by the other three methods. It can also be observed that the classification performance of ELM is on par with that of SVM and better than that of BP. Due to the nonlinear nature of the gas turbine engine, the multiple fault cases are more difficult to diagnose than the single fault cases: it can be seen from Tables 6 and 7 that the mean classification accuracy of multiple fault diagnostics is lower than that of single fault diagnostics.


Table 6: Mean classification accuracy (%) of operating point A condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.33   93.55   91.75   90.45   90.05   88.56   85.26   84.95
P-ELM    90.48   93.61   92.32   89.23   90.44   87.73   83.05   82.60
G-ELM    92.35   93.04   90.17   89.36   88.24   88.03   84.69   83.50
ELM      89.09   87.56   87.52   86.90   85.27   84.42   81.16   78.33
SVM      89.03   88.67   84.96   85.15   85.68   85.02   80.54   79.63
BP       87.29   86.45   82.37   83.63   80.57   78.69   72.20   73.24

Table 7: Mean classification accuracy (%) of operating point B condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.45   92.07   91.53   91.07   91.45   90.19   86.26   85.26
P-ELM    92.57   91.58   91.51   91.48   89.63   89.70   86.18   84.44
G-ELM    93.31   91.74   90.53   91.06   91.23   88.65   85.75   82.83
ELM      89.21   88.33   86.97   87.09   85.95   83.55   80.24   77.06
SVM      90.66   89.14   87.14   86.52   85.68   82.12   81.42   76.54
BP       86.25   84.51   82.55   80.69   76.05   74.63   73.04   70.95

Figure 3: Mean classification accuracies (%) of the 6 methods under different conditions (legend: Condition A, Condition B). (a) Mean classification accuracies on single fault cases of the 6 methods. (b) Mean classification accuracies on multiple fault cases of the 6 methods.

Table 8: Mean training time (s) on training data.

              Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   19.54   23.08   20.77   0.0113   0.147   2.96
Condition B   20.23   23.54   21.83   0.0108   0.179   3.13

Noticing that the two mean accuracy curves in both Figures 3(a) and 3(b) are very close to each other, we can conclude that the engine operating point has no obvious effect on the classification accuracies of any of the methods.

The training times of the three optimized ELM methods are much longer than those of the others. Much of the training time of the optimized ELMs is spent on evaluating all the individuals iteratively.

6.3.3. Comparisons with Fewer Input Parameters. In Section 6.3.2 we trained ELM and the other methods using a dataset with 10 input parameters. But in real applications the gas turbine engine may be equipped with only a small number of sensors, so we have fewer input parameters. In this section we reduce the input parameters from 10 to 6: n1, n2, p*22, p*3, T*44, and p*5.


Table 9: Mean classification accuracy (%) of operating point A condition.

Method   C1        C2        C3        C4        C5        C6        C7        C8
Q-ELM    92.0857   91.8108   91.0178   90.0239   88.8211   86.3334   83.0490   81.7370
P-ELM    90.0024   90.8502   91.1142   88.1223   86.7068   85.2295   80.5144   80.1667
G-ELM    89.0763   91.0038   87.7888   88.2232   85.4190   83.9324   79.6553   78.3395
ELM      85.8684   84.8761   83.4979   82.3035   81.6548   78.2046   76.1726   74.7309
SVM      83.9047   85.8948   84.4922   83.2766   81.1377   74.6795   74.5938   74.1796
BP       83.2354   82.2824   80.9588   79.8117   72.3411   71.9088   70.9969   68.4934

Table 10: Mean classification accuracy (%) of operating point B condition.

Method   C1        C2        C3        C4        C5        C6        C7        C8
Q-ELM    91.6970   90.3428   89.8130   89.3616   87.7345   86.4981   84.6418   83.6606
P-ELM    91.5233   90.1717   89.6428   89.1923   86.5645   86.3305   82.4815   83.5021
G-ELM    88.9153   87.6171   87.1090   86.6763   85.0338   84.8484   81.1512   80.2104
ELM      84.2722   83.9392   84.0175   82.9732   80.9403   80.5231   77.0267   75.9607
SVM      84.1035   82.8468   83.3550   82.9361   80.2821   81.1347   76.5557   74.6450
BP       80.6848   80.4343   79.9449   79.0281   74.8724   71.0306   69.1692   68.1630

Figure 4: Mean classification accuracies (%) of the 6 methods under different conditions (legend: Condition A, Condition B). (a) Mean classification accuracies on single fault cases of the 6 methods. (b) Mean classification accuracies on multiple fault cases of the 6 methods.

Table 11: Mean training time (s) on training data.

              Q-ELM   G-ELM   P-ELM   ELM      SVM     BP
Condition A   17.40   21.26   18.75   0.0094   0.126   2.35
Condition B   18.79   20.55   19.06   0.0077   0.160   2.47

We trained all the methods with the same training dataset with only 6 input parameters, and the results are listed in Tables 9–11.

Comparing with Tables 6 and 7, we can see that the number of input parameters has a great impact on diagnostic accuracy: the results in Tables 9 and 10 are generally lower than those in Tables 6 and 7.

The optimized ELM methods still show better performance than the others; this conclusion is demonstrated in Figures 4(a) and 4(b). Q-ELM obtained the highest accuracies in all cases except C3 in Table 9, and it attained the best results in all cases in Table 10.

Figure 5: Mean evolution of the RMSE of the three methods (Q-ELM, P-ELM, G-ELM) over 50 iterations: (a) fault class C5 and (b) fault class C7.

The good results obtained by our method indicate that the selection criteria, which include both the fitness value on the validation dataset and the norm of the output weights, help the algorithms obtain better generalization performance.
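One way to read this two-part selection criterion is the following sketch (our interpretation; the tie-breaking tolerance `tol` is an assumption, not a value from the paper):

```python
def better(candidate, incumbent, tol=1e-3):
    """Return True if `candidate` = (rmse, weight_norm) should replace
    `incumbent`: a clearly lower validation RMSE wins outright; when
    the two RMSEs are within `tol`, the smaller output-weight norm
    wins, favoring better generalization."""
    rmse_c, norm_c = candidate
    rmse_i, norm_i = incumbent
    if rmse_c < rmse_i - tol:
        return True
    if abs(rmse_c - rmse_i) <= tol and norm_c < norm_i:
        return True
    return False
```

Applied inside the QPSO loop, this rule decides whether a particle's new position replaces its personal (or global) best.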

In order to evaluate the proposed method in depth, the mean evolution of the accuracy on the validation dataset over 10 trials by the three optimized ELM methods, for the fault class C5 and C7 cases in condition B, is plotted in Figure 5.

It can be observed from Figure 5 that Q-ELM has much better convergence performance than the other two methods and obtains the best mean accuracy after 50 iterations; P-ELM is better than G-ELM. In fact, Q-ELM can achieve the same accuracy level as G-ELM within only half of the total iterations for these two cases.

The high classification rate of our method is mainly because the quantum-behaved mechanism helps QPSO search the solution space more effectively, thus outperforming P-ELM and G-ELM in converging to a better result.

7. Conclusions

In this paper a new hybrid learning approach for SLFNs, named Q-ELM, was proposed. The proposed algorithm optimizes both the neural network parameters (input weights and hidden biases) and the hidden layer structure using QPSO, and the output weights are calculated by the Moore-Penrose generalized inverse, as in the original ELM. In optimizing the network parameters, not only the RMSE on the validation dataset but also the norm of the output weights is included in the selection criteria.
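The output-weight step can be illustrated with a generic ELM sketch (random input weights and biases, a sigmoid hidden layer, and the Moore-Penrose pseudoinverse; this is a standard ELM illustration, not the authors' code):

```python
import numpy as np

def elm_output_weights(X, T, W, b):
    """Compute ELM output weights beta = pinv(H) @ T, where the hidden
    layer output is H = sigmoid(X W + b)."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # N x L hidden-layer matrix
    return np.linalg.pinv(H) @ T             # Moore-Penrose generalized inverse

# Random hidden parameters (as in ELM), then solve for output weights.
rng = np.random.default_rng(0)
X = rng.random((20, 3))                      # 20 samples, 3 inputs
T = rng.random((20, 2))                      # 2 output targets
W = rng.standard_normal((3, 10))             # random input weights
b = rng.standard_normal(10)                  # random hidden biases
beta = elm_output_weights(X, T, W, b)        # 10 x 2 output weights
```

In Q-ELM the candidate (W, b) pairs are what QPSO searches over; for each candidate, beta is obtained in closed form as above.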

To validate the performance of the proposed Q-ELM, we applied it to some real-world classification applications and a gas turbine fan engine fault diagnostics problem and compared it with some state-of-the-art methods. Results show that our method obtains the highest classification accuracy in most test cases and shows a clear advantage over the other optimized ELM methods, SVM, and BP. This advantage becomes more prominent when the number of input parameters in the training dataset is reduced, which suggests that our method is a more suitable tool for real engine fault diagnostics applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] X. Yang, W. Shen, S. Pang, B. Li, K. Jiang, and Y. Wang, "A novel gas turbine engine health status estimation method using quantum-behaved particle swarm optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 302514, 11 pages, 2014.

[2] Z. N. S. Vanini, K. Khorasani, and N. Meskin, "Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach," Information Sciences, vol. 259, pp. 234–251, 2014.

[3] T. Brotherton and T. Johnson, "Anomaly detection for advanced military aircraft using neural networks," in Proceedings of the IEEE Aerospace Conference, vol. 6, pp. 3113–3123, Big Sky, Mont, USA, March 2001.

[4] S. Ogaji, Y. G. Li, S. Sampath, and R. Singh, "Gas path fault diagnosis of a turbofan engine from transient data using artificial neural networks," in Proceedings of the 2003 ASME Turbine and Aeroengine Congress, ASME Paper No. GT2003-38423, Atlanta, Ga, USA, June 2003.

[5] L. C. Jaw, "Recent advancements in aircraft engine health management (EHM) technologies and recommendations for the next step," in Proceedings of the 50th ASME International Gas Turbine & Aeroengine Technical Congress, Reno, Nev, USA, June 2005.

[6] S. Osowski, K. Siwek, and T. Markiewicz, "MLP and SVM networks—a comparative study," in Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 37–40, Espoo, Finland, June 2004.

[7] M. Zedda and R. Singh, "Fault diagnosis of a turbofan engine using neural-networks: a quantitative approach," in Proceedings of the 34th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA 98-3602, Cleveland, Ohio, USA, July 1998.

[8] C. Romessis, A. Stamatis, and K. Mathioudakis, "A parametric investigation of the diagnostic ability of probabilistic neural networks on turbofan engines," in Proceedings of the ASME Turbo Expo 2001: Power for Land, Sea, and Air, Paper no. 2001-GT-0011, New Orleans, La, USA, June 2001.

[9] A. J. Volponi, H. DePold, R. Ganguli, and C. Daguang, "The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study," Journal of Engineering for Gas Turbines and Power, vol. 125, no. 4, pp. 917–924, 2003.

[10] S.-M. Lee, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of SUAV gas turbine engine using hybrid SVM-artificial neural network method," Journal of Mechanical Science and Technology, vol. 23, no. 1, pp. 559–568, 2009.

[11] S.-M. Lee, W.-J. Choi, T.-S. Roh, and D.-W. Choi, "A study on separate learning algorithm using support vector machine for defect diagnostics of gas turbine engine," Journal of Mechanical Science and Technology, vol. 22, no. 12, pp. 2489–2497, 2008.

[12] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[13] G.-B. Huang, Z. Bai, L. L. C. Kasun, and C. M. Vong, "Local receptive fields based extreme learning machine," IEEE Computational Intelligence Magazine, vol. 10, no. 2, pp. 18–29, 2015.

[14] G.-B. Huang, "What are extreme learning machines? Filling the gap between Frank Rosenblatt's dream and John von Neumann's puzzle," Cognitive Computation, vol. 7, no. 3, pp. 263–278, 2015.

[15] Q.-Y. Zhu, A. K. Qin, P. N. Suganthan, and G.-B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759–1763, 2005.

[16] J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, "A modified quantum-behaved particle swarm optimization," in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294–301, Beijing, China, May 2007.

[17] M. Xi, J. Sun, and W. Xu, "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position," Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.

[18] T. Matias, F. Souza, R. Araujo, and C. H. Antunes, "Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine," Neurocomputing, vol. 129, pp. 428–436, 2014.

[19] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87–93, 2013.

[20] D.-H. Seo, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of gas turbine engine using hybrid SVM-ANN with module system in off-design condition," Journal of Mechanical Science and Technology, vol. 23, no. 3, pp. 677–685, 2009.

[19] F Han H-F Yao and Q-H Ling ldquoAn improved evolutionaryextreme learning machine based on particle swarm optimiza-tionrdquo Neurocomputing vol 116 pp 87ndash93 2013

[20] D-H Seo T-S Roh and D-W Choi ldquoDefect diagnostics of gasturbine engine using hybrid SVM-ANNwith module system inoff-design conditionrdquo Journal of Mechanical Science and Tech-nology vol 23 no 3 pp 677ndash685 2009

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

6 International Journal of Aerospace Engineering

Table 5: Single and multiple fault cases.

        Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
LPC     F        —        —        —        F        F        F        F
HPC     —        F        —        —        F        —        F        —
LPT     —        —        F        —        —        F        F        F
HPT     —        —        —        F        —        —        —        F

The engine operating point, which is primarily defined by the fuel flow rate, has a significant effect on the engine performance. Therefore, engine fault diagnostics should be conducted at a specified operating point. In this study, we consider two different engine operating points; the fuel flow and environment setting parameters are listed in Table 4. The engine defect diagnostics was conducted at these operating points separately.

6.2. Generating Component Fault Dataset. In this study, different engine component fault cases were considered as the eight classes shown in Table 5. The first four classes represent four single fault cases: the low pressure compressor (LPC) fault case, the high pressure compressor (HPC) fault case, the low pressure turbine (LPT) fault case, and the high pressure turbine (HPT) fault case. Each class has only one component fault, which is represented with an "F". Classes 5 and 6 are dual fault cases: the LPC + HPC fault case and the LPC + LPT fault case. The last two classes are triple fault cases: the LPC + HPC + LPT fault case and the LPC + LPT + HPT fault case.

For each single fault case listed in Table 5, 50 instances were generated by randomly selecting the corresponding component's isentropic efficiency deterioration magnitude within the range 1–5. For the dual fault cases, 100 instances were generated for each class by randomly setting the isentropic efficiency deterioration of the two faulty components within the range 1–5 simultaneously. For the triple fault cases, 300 instances were generated for each class using the same method.
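The sampling described above can be sketched as follows. The function name, the dictionary layout, and the uniform draw are illustrative assumptions; the paper states only that deterioration magnitudes are selected randomly within the range 1–5 for the faulty components.

```python
import random

def make_fault_instances(faulty, n_instances, seed=None,
                         components=("LPC", "HPC", "LPT", "HPT")):
    """Draw fault instances: each faulty component gets a random
    isentropic efficiency deterioration magnitude in [1, 5];
    healthy components stay at 0 (no deterioration)."""
    rng = random.Random(seed)
    return [{c: (rng.uniform(1.0, 5.0) if c in faulty else 0.0)
             for c in components}
            for _ in range(n_instances)]

# Class 5 (LPC + HPC dual fault): 100 instances
dual = make_fault_instances({"LPC", "HPC"}, 100, seed=0)
```

The same helper covers single, dual, and triple fault classes by changing the `faulty` set and the instance count.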

Thus, we have 200 single fault instances, 800 multiple fault instances, and one healthy state instance for each operating point condition. These instances are then divided into a training dataset, a validation dataset, and a testing dataset.

In this study, for each operating point condition, 100 single fault instances (25 randomly selected instances for each single fault class), 400 multiple fault instances (50 randomly selected instances for each dual fault case and 150 for each triple fault case), and the healthy state instance were used as the training dataset. 60 single fault instances (15 randomly selected instances for each single fault class) and 240 multiple fault instances (30 randomly selected instances for each dual fault case and 90 for each triple fault case) were used as the testing dataset. The remaining 200 instances were used as the validation dataset.
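The per-class split above can be sketched as follows (the function name and the use of a fixed seed are illustrative; the paper says only that instances are selected randomly per class, with the remainder forming the validation set):

```python
import random

def split_class(instances, n_train, n_test, seed=0):
    """Per-class split: randomly pick training and testing instances;
    whatever is left over becomes the validation set."""
    pool = list(instances)
    random.Random(seed).shuffle(pool)
    return (pool[:n_train],
            pool[n_train:n_train + n_test],
            pool[n_train + n_test:])

# A single fault class: 50 instances -> 25 train / 15 test / 10 validation
train, test, val = split_class(range(50), 25, 15)
```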

The input parameters of the training, validation, and test datasets are the relative deviations of the simulated engine performance parameters under component fault from the "healthy" engine parameters. These parameters include the low pressure rotor rotational speed n1; the high pressure rotor rotational speed n2; the total pressure and total temperature after the LPC, p*22 and T*22; the total pressure and total temperature after the HPC, p*3 and T*3; the total pressure and total temperature after the HPT, p*44 and T*44; and the total pressure and total temperature after the LPT, p*5 and T*5. In this study, all the input parameters have been normalized into the range [0, 1].
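Normalization into [0, 1] is typically done per input parameter with min-max scaling; a minimal sketch (the paper does not state the exact scaling formula, so min-max is an assumption):

```python
def min_max_normalize(column):
    """Scale a list of values for one input parameter into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:                      # constant column: map everything to 0
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```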

In real engine applications, sensor noise inevitably exists. Therefore, all input data are contaminated with measurement noise to simulate real engine sensory signals according to the following equation:

X_n = X + N_l · σ · rand(1, n),  (11)

where X is the clean input parameter, N_l denotes the imposed noise level, and σ is the standard deviation of the dataset.
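Equation (11) can be sketched as follows. The paper does not state the distribution of rand(1, n); uniform on [0, 1), as in MATLAB's `rand`, is assumed here, and the function name is illustrative.

```python
import random
import statistics

def add_measurement_noise(x, noise_level, seed=None):
    """Eq. (11): X_n = X + N_l * sigma * rand(1, n), where sigma is
    the standard deviation of the data and rand is assumed uniform
    on [0, 1)."""
    rng = random.Random(seed)
    sigma = statistics.pstdev(x)
    return [xi + noise_level * sigma * rng.random() for xi in x]

noisy = add_measurement_noise([1.0, 2.0, 3.0], noise_level=0.05, seed=1)
```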

6.3. Engine Component Fault Diagnostics by 6 Methods. The proposed engine diagnostic method using Q-ELM is demonstrated on single and multiple fault cases and compared with P-ELM, G-ELM, ELM, BP, and SVM.

6.3.1. Parameter Settings. The parameter settings are the same as in Section 5, except that all ELM methods are set with 100 initial hidden neurons. All six methods are run 10 times separately for each condition, and the results shown in Tables 6, 7, and 8 are the mean performance values.

6.3.2. Comparisons of the Six Methods. The performances of the 6 methods were compared on both condition A and condition B. Tables 6 and 7 list the mean classification accuracies of the 6 methods on each component fault class, and Table 8 lists the mean training time of each method.

It can be seen from Table 6 (condition A) that Q-ELM obtained the best results on C1, C4, C6, C7, and C8, while P-ELM performed the best on C2, C3, and C5. From Table 7, we can see that P-ELM performs the best on one single fault case (C4), and Q-ELM obtains the highest classification accuracy on all the remaining test cases.

In general, the optimized ELMs obtained better classification results than ELM, SVM, and BP on both single fault and multiple fault cases. This conclusion is also demonstrated in Figures 3(a) and 3(b), where the mean classification accuracies obtained by the optimized ELM methods are higher than those obtained by the other three methods. It can also be observed that the classification performance of ELM is on par with that of SVM and better than that of BP. Due to the nonlinear nature of the gas turbine engine, the multiple fault cases are more difficult to diagnose than the single fault cases: as Tables 6 and 7 show, the mean classification accuracy of multiple fault diagnostics is lower than that of single fault diagnostics.


Table 6: Mean classification accuracy (%) of the operating point A condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.33   93.55   91.75   90.45   90.05   88.56   85.26   84.95
P-ELM    90.48   93.61   92.32   89.23   90.44   87.73   83.05   82.60
G-ELM    92.35   93.04   90.17   89.36   88.24   88.03   84.69   83.50
ELM      89.09   87.56   87.52   86.90   85.27   84.42   81.16   78.33
SVM      89.03   88.67   84.96   85.15   85.68   85.02   80.54   79.63
BP       87.29   86.45   82.37   83.63   80.57   78.69   72.20   73.24

Table 7: Mean classification accuracy (%) of the operating point B condition.

Method   C1      C2      C3      C4      C5      C6      C7      C8
Q-ELM    93.45   92.07   91.53   91.07   91.45   90.19   86.26   85.26
P-ELM    92.57   91.58   91.51   91.48   89.63   89.70   86.18   84.44
G-ELM    93.31   91.74   90.53   91.06   91.23   88.65   85.75   82.83
ELM      89.21   88.33   86.97   87.09   85.95   83.55   80.24   77.06
SVM      90.66   89.14   87.14   86.52   85.68   82.12   81.42   76.54
BP       86.25   84.51   82.55   80.69   76.05   74.63   73.04   70.95

Figure 3: Mean classification accuracies of the 6 methods under different conditions: (a) single fault cases; (b) multiple fault cases.

Table 8: Mean training time on training data.

              Q-ELM    G-ELM    P-ELM    ELM       SVM      BP
Condition A   19.54    23.08    20.77    0.0113    0.147    2.96
Condition B   20.23    23.54    21.83    0.0108    0.179    3.13

Notice that the two mean accuracy curves in both Figures 3(a) and 3(b) are very close to each other; we can conclude that the engine operating point has no obvious effect on the classification accuracy of any method.

The training times of the three optimized ELM methods are much longer than those of the others. Much of the training time of the optimized ELMs is spent on iteratively evaluating all the individuals.

6.3.3. Comparisons with Fewer Input Parameters. In Section 6.3.2, we trained ELM and the other methods using a dataset with 10 input parameters. But in real applications, the gas turbine engine may be equipped with only a small number of sensors, leaving fewer input parameters. In this section, we reduce the input parameters from 10 to 6.


Table 9: Mean classification accuracy (%) of the operating point A condition.

Method   C1        C2        C3        C4        C5        C6        C7        C8
Q-ELM    92.0857   91.8108   91.0178   90.0239   88.8211   86.3334   83.0490   81.7370
P-ELM    90.0024   90.8502   91.1142   88.1223   86.7068   85.2295   80.5144   80.1667
G-ELM    89.0763   91.0038   87.7888   88.2232   85.4190   83.9324   79.6553   78.3395
ELM      85.8684   84.8761   83.4979   82.3035   81.6548   78.2046   76.1726   74.7309
SVM      83.9047   85.8948   84.4922   83.2766   81.1377   74.6795   74.5938   74.1796
BP       83.2354   82.2824   80.9588   79.8117   72.3411   71.9088   70.9969   68.4934

Table 10: Mean classification accuracy (%) of the operating point B condition.

Method   C1        C2        C3        C4        C5        C6        C7        C8
Q-ELM    91.6970   90.3428   89.8130   89.3616   87.7345   86.4981   84.6418   83.6606
P-ELM    91.5233   90.1717   89.6428   89.1923   86.5645   86.3305   82.4815   83.5021
G-ELM    88.9153   87.6171   87.1090   86.6763   85.0338   84.8484   81.1512   80.2104
ELM      84.2722   83.9392   84.0175   82.9732   80.9403   80.5231   77.0267   75.9607
SVM      84.1035   82.8468   83.3550   82.9361   80.2821   81.1347   76.5557   74.6450
BP       80.6848   80.4343   79.9449   79.0281   74.8724   71.0306   69.1692   68.1630

Figure 4: Mean classification accuracies of the 6 methods under different conditions: (a) single fault cases; (b) multiple fault cases.

Table 11: Mean training time on training data.

              Q-ELM    G-ELM    P-ELM    ELM       SVM      BP
Condition A   17.40    21.26    18.75    0.0094    0.126    2.35
Condition B   18.79    20.55    19.06    0.0077    0.160    2.47

They are n1, n2, p*22, p*3, T*44, and p*5. We trained all the methods with the same training dataset using only these 6 input parameters, and the results are listed in Tables 9–11.

Compared with Tables 6 and 7, we can see that the number of input parameters has a great impact on diagnostic accuracy: the results in Tables 9 and 10 are generally lower than those in Tables 6 and 7.

The optimized ELM methods still show better performance than the others, as demonstrated in Figures 4(a) and 4(b). Q-ELM obtained the highest accuracies in all cases except C3 in Table 9, and it attained the best results in all cases in Table 10.

The good results obtained by our method indicate that the selection criteria, which include both the fitness value on the validation dataset and the norm of the output weights, help the algorithms obtain better generalization performance.

Figure 5: Mean evolution of the validation accuracy of the three methods: (a) fault class C5 and (b) fault class C7.

In order to evaluate the proposed method in more depth, the mean evolution of the accuracy on the validation dataset over 10 trials of the three optimized ELM methods on the fault class C5 and C7 cases in condition B is plotted in Figure 5.

It can be observed from Figure 5 that Q-ELM has much better convergence performance than the other two methods and obtains the best mean accuracy after 50 iterations; P-ELM is better than G-ELM. In fact, Q-ELM reaches the same accuracy level as G-ELM within only half of the total iterations for these two cases.

The high classification rate of our method is mainly because the quantum-behaved update mechanism helps QPSO search the solution space more effectively, thus outperforming P-ELM and G-ELM by converging to a better result.
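The QPSO update behind this search can be sketched as follows. This is a minimal, illustrative implementation of the standard update rule (local attractor plus a step scaled by the distance to the mean best position, with contraction-expansion coefficient alpha), not the authors' exact code.

```python
import math
import random

def qpso_step(positions, pbest, gbest, alpha, rng):
    """One QPSO position update: each particle collapses around a
    local attractor p = phi*pbest + (1-phi)*gbest, with step size
    scaled by its distance to the mean best position (mbest)."""
    n, dim = len(positions), len(positions[0])
    mbest = [sum(pb[d] for pb in pbest) / n for d in range(dim)]
    new_positions = []
    for i in range(n):
        x = []
        for d in range(dim):
            phi = rng.random()
            p = phi * pbest[i][d] + (1.0 - phi) * gbest[d]
            u = 1.0 - rng.random()          # u in (0, 1], keeps log finite
            step = alpha * abs(mbest[d] - positions[i][d]) * math.log(1.0 / u)
            x.append(p + step if rng.random() < 0.5 else p - step)
        new_positions.append(x)
    return new_positions

# One update step for two 2-D particles (illustrative numbers)
rng = random.Random(0)
new_pos = qpso_step([[0.0, 0.0], [1.0, 1.0]],
                    [[0.0, 0.0], [1.0, 1.0]],
                    [0.5, 0.5], alpha=0.75, rng=rng)
```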

7. Conclusions

In this paper, a new hybrid learning approach for SLFNs, named Q-ELM, was proposed. The proposed algorithm optimizes both the neural network parameters (input weights and hidden biases) and the hidden layer structure using QPSO, and the output weights are calculated by the Moore-Penrose generalized inverse, as in the original ELM. In optimizing the network parameters, not only the RMSE on the validation dataset but also the norm of the output weights is included in the selection criteria.
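The output-weight step described above can be sketched as follows. The activation function (tanh), the layer sizes, and the penalty weighting `k` are illustrative assumptions; the paper states only that output weights come from the Moore-Penrose generalized inverse and that the selection criteria combine validation RMSE with the output-weight norm.

```python
import numpy as np

def elm_output_weights(X, T, W, b):
    """Given candidate input weights W (n_features x n_hidden) and
    hidden biases b, compute the ELM output weights analytically:
    beta = pinv(H) @ T, where H is the hidden layer output matrix."""
    H = np.tanh(X @ W + b)
    return np.linalg.pinv(H) @ T

def selection_score(beta, rmse_val, k=1e-3):
    """Selection criterion sketch: validation RMSE plus a penalty
    on the norm of the output weights (weighting k is illustrative)."""
    return rmse_val + k * np.linalg.norm(beta)

# Illustrative shapes: 20 samples, 4 inputs, 10 hidden neurons, 3 outputs
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4)); T = rng.normal(size=(20, 3))
W = rng.normal(size=(4, 10)); b = rng.normal(size=10)
beta = elm_output_weights(X, T, W, b)
```

QPSO only has to search over (W, b) and the hidden layer size; beta is recomputed in closed form for every candidate.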

To validate the performance of the proposed Q-ELM, we applied it to several real-world classification problems and a gas turbine fan engine fault diagnostics task and compared it with several state-of-the-art methods. Results show that our method obtains the highest classification accuracy in most test cases and shows a clear advantage over the other optimized ELM methods, SVM, and BP. This advantage becomes more prominent when the number of input parameters in the training dataset is reduced, which suggests that our method is a more suitable tool for real engine fault diagnostics applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] X. Yang, W. Shen, S. Pang, B. Li, K. Jiang, and Y. Wang, "A novel gas turbine engine health status estimation method using quantum-behaved particle swarm optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 302514, 11 pages, 2014.

[2] Z. N. S. Vanini, K. Khorasani, and N. Meskin, "Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach," Information Sciences, vol. 259, pp. 234–251, 2014.

[3] T. Brotherton and T. Johnson, "Anomaly detection for advanced military aircraft using neural networks," in Proceedings of the IEEE Aerospace Conference, vol. 6, pp. 3113–3123, IEEE, Big Sky, Mont, USA, March 2001.

[4] S. Ogaji, Y. G. Li, S. Sampath, and R. Singh, "Gas path fault diagnosis of a turbofan engine from transient data using artificial neural networks," in Proceedings of the 2003 ASME Turbine and Aeroengine Congress, ASME Paper No. GT2003-38423, Atlanta, Ga, USA, June 2003.

[5] L. C. Jaw, "Recent advancements in aircraft engine health management (EHM) technologies and recommendations for the next step," in Proceedings of the 50th ASME International Gas Turbine & Aeroengine Technical Congress, Reno, Nev, USA, June 2005.

[6] S. Osowski, K. Siwek, and T. Markiewicz, "MLP and SVM networks—a comparative study," in Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 37–40, Espoo, Finland, June 2004.

[7] M. Zedda and R. Singh, "Fault diagnosis of a turbofan engine using neural-networks: a quantitative approach," in Proceedings of the 34th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA 98-3602, Cleveland, Ohio, USA, July 1998.

[8] C. Romessis, A. Stamatis, and K. Mathioudakis, "A parametric investigation of the diagnostic ability of probabilistic neural networks on turbofan engines," in Proceedings of the ASME Turbo Expo 2001: Power for Land, Sea and Air, Paper no. 2001-GT-0011, New Orleans, La, USA, June 2001.

[9] A. J. Volponi, H. DePold, R. Ganguli, and C. Daguang, "The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study," Journal of Engineering for Gas Turbines and Power, vol. 125, no. 4, pp. 917–924, 2003.

[10] S.-M. Lee, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of SUAV gas turbine engine using hybrid SVM-artificial neural network method," Journal of Mechanical Science and Technology, vol. 23, no. 1, pp. 559–568, 2009.

[11] S.-M. Lee, W.-J. Choi, T.-S. Roh, and D.-W. Choi, "A study on separate learning algorithm using support vector machine for defect diagnostics of gas turbine engine," Journal of Mechanical Science and Technology, vol. 22, no. 12, pp. 2489–2497, 2008.

[12] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[13] G.-B. Huang, Z. Bai, L. L. C. Kasun, and C. M. Vong, "Local receptive fields based extreme learning machine," IEEE Computational Intelligence Magazine, vol. 10, no. 2, pp. 18–29, 2015.

[14] G.-B. Huang, "What are extreme learning machines? Filling the gap between Frank Rosenblatt's dream and John von Neumann's puzzle," Cognitive Computation, vol. 7, no. 3, pp. 263–278, 2015.

[15] Q.-Y. Zhu, A. K. Qin, P. N. Suganthan, and G.-B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759–1763, 2005.

[16] J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, "A modified quantum-behaved particle swarm optimization," in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294–301, Beijing, China, May 2007.

[17] M. Xi, J. Sun, and W. Xu, "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position," Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.

[18] T. Matias, F. Souza, R. Araujo, and C. H. Antunes, "Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine," Neurocomputing, vol. 129, pp. 428–436, 2014.

[19] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87–93, 2013.

[20] D.-H. Seo, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of gas turbine engine using hybrid SVM-ANN with module system in off-design condition," Journal of Mechanical Science and Technology, vol. 23, no. 3, pp. 677–685, 2009.


International Journal of Aerospace Engineering 7

Table 6 Mean classification accuracy of operating point A condition

Method Fault classesC1 C2 C3 C4 C5 C6 C7 C8

Q-ELM 9333 9355 9175 9045 9005 8856 8526 8495P-ELM 9048 9361 9232 8923 9044 8773 8305 8260G-ELM 9235 9304 9017 8936 8824 8803 8469 8350ELM 8909 8756 8752 8690 8527 8442 8116 7833SVM 8903 8867 8496 8515 8568 8502 8054 7963BP 8729 8645 8237 8363 8057 7869 7220 7324

Table 7 Mean classification accuracy of operating point B condition

Method Fault classesC1 C2 C3 C4 C5 C6 C7 C8

Q-ELM 9345 9207 9153 9107 9145 9019 8626 8526P-ELM 9257 9158 9151 9148 8963 8970 8618 8444G-ELM 9331 9174 9053 9106 9123 8865 8575 8283ELM 8921 8833 8697 8709 8595 8355 8024 7706SVM 9066 8914 8714 8652 8568 8212 8142 7654BP 8625 8451 8255 8069 7605 7463 7304 7095

Condition ACondition B

83

84

85

86

87

88

89

90

91

92

93

Mea

n ac

cura

cy (

)

P-ELM G-ELM ELM SVM BPQ-ELMMethods

(a) Mean classification accuracies on single fault cases of the 6methods

Condition ACondition B

72

74

76

78

80

82

84

86

88

90

Mea

n ac

cura

cy (

)

P-ELM G-ELM ELM SVM BPQ-ELMMethods

(b) Mean classification accuracies on multiple fault cases of the 6methods

Figure 3 Mean classification accuracies of the 6 methods of different conditions

Table 8 Mean training time on training data

Q-ELM G-ELM P-ELM ELM SVM BPCondition A 1954 2308 2077 00113 0147 296Condition B 2023 2354 2183 00108 0179 313

Notice that the two mean accuracy curves on bothFigures 3(a) and 3(b) are very close to each other we canconclude that engine operating point has no obvious effecton classification accuracies of all methods

The training times of three optimized ELM methods aremuch more than the others Much of training time of theoptimized ELM is spent on evaluating all the individualsiteratively

633 Comparisons with Fewer Input Parameters InSection 632 we train ELM and other methods using datasetwith 10 input parameters But in real applications the gasturbine engine may be equipped with only a few numbers ofsensorsThus we have fewer input parameters In this sectionwe reduce the input parameters from 10 to 6 they are 119899

1 1198992

8 International Journal of Aerospace Engineering

Table 9 Mean classification accuracy of operating point A condition

Method Fault classesC1 C2 C3 C4 C5 C6 C7 C8

Q-ELM 920857 918108 910178 900239 888211 863334 830490 817370P-ELM 900024 908502 911142 881223 867068 852295 805144 801667G-ELM 890763 910038 877888 882232 854190 839324 796553 783395ELM 858684 848761 834979 823035 816548 782046 761726 747309SVM 839047 858948 844922 832766 811377 746795 745938 741796BP 832354 822824 809588 798117 723411 719088 709969 684934

Table 10 Mean classification accuracy of operating point B condition

Method Fault classesC1 C2 C3 C4 C5 C6 C7 C8

Q-ELM 916970 903428 898130 893616 877345 864981 846418 836606P-ELM 915233 901717 896428 891923 865645 863305 824815 835021G-ELM 889153 876171 871090 866763 850338 848484 811512 802104ELM 842722 839392 840175 829732 809403 805231 770267 759607SVM 841035 828468 833550 829361 802821 811347 765557 746450BP 806848 804343 799449 790281 748724 710306 691692 681630

80

82

84

86

88

90

92

Mea

n ac

cura

cy (

)

Condition ACondition B

P-ELM G-ELM ELM SVM BPQ-ELMMethods

(a) Mean classification accuracies on single fault cases of the 6methods

70

72

74

76

78

80

82

84

86

Mea

n ac

cura

cy (

)

Condition ACondition B

P-ELM G-ELM ELM SVM BPQ-ELMMethods

(b) Mean classification accuracies on multiple fault cases of the 6methods

Figure 4 Mean classification accuracies of the 6 methods of different conditions

Table 11 Mean training time on training data

Q-ELM G-ELM P-ELM ELM SVM BPCondition A 1740 2126 1875 00094 0126 235Condition B 1879 2055 1906 00077 0160 247

119901lowast

22 119901lowast3 119879lowast44 and 119901lowast

5 We trained all the methods with the

same training dataset with only 6 input parameters and theresults are listed in Tables 9ndash11

Compared with Tables 6 and 7 we can see that the num-ber of input parameters has great impact on diagnostics accu-racy The results in Tables 9 and 10 are generally lower thanresults in Tables 6 and 7

The optimized ELM methods still show better perfor-mance than the others this conclusion can be demonstratedin Figures 4(a) and 4(b) Q-ELM obtained the highestaccuracies in all cases except C3 in Table 6 and it attained thebest results in all cases in Table 10

The good results obtained by our method indicate thatthe selection criteria which include both the fitness value in

International Journal of Aerospace Engineering 9

Q-ELMP-ELMG-ELM

078

079

08

081

082

083

084

085

086

087

088M

ean

accu

racy

10 20 30 40 500Iterations

(a)

Q-ELMP-ELMG-ELM

075

076

077

078

079

08

081

082

083

084

085

Mea

n ac

cura

cy

10 20 30 40 500Iterations

(b)

Figure 5 Mean evolution of the RMSE of the three methods (a) fault class C5 and (b) fault class C7

validation dataset and the norm of output weights help thealgorithms to obtain better generalization performance

In order to evaluate the proposed method in depth themean evolution of the accuracy on validation dataset of 10trials by three optimized ELMmethods on fault class C5 andC7 cases in condition B is plotted in Figure 5

It can be observed from Figure 5 that Q-ELM has muchbetter convergence performance than the other two methodsand obtains the best mean accuracy after 50 iterations P-ELM is better than G-ELM In fact Q-ELM can achieve thesame accuracy level as G-ELM within only half of the totaliterations for these two cases

The main reason for high classification rate by ourmethod is mainly because the quantum mechanics helpsQPSO to searchmore effectively in search space thus outper-forming P-ELMs and G-ELM in converging to a better result

7 Conclusions

In this paper a new hybrid learning approach for SLFNnamed Q-ELM was proposed The proposed algorithm opti-mizes both the neural network parameters (input weights andhidden biases) and hidden layer structure using QPSO Andthe output weights are calculated by Moore-Penrose gener-alized inverse like in the original ELM In the optimizing ofnetwork parameters not only the RMSE on validation datasetbut also the norm of the output weights is considered to beincluded in the selection criteria

To validate the performance of the proposed Q-ELM weapplied it to some real-world classification applications anda gas turbine fan engine fault diagnostics and compare itwith some state-of-the-art methods Results show that ourmethod obtains the highest classification accuracy in mosttest cases and show great advantage than the other optimized

ELM methods SVM and BP This advantage becomes moreprominent when the number of input parameters in trainingdataset is reduced which suggests that our method is a moresuitable tool for real engine fault diagnostics application

Conflict of Interests

The authors declare no conflict of interests regarding thepublication of this paper



8 International Journal of Aerospace Engineering

Table 9: Mean classification accuracy (%) for the operating point A condition.

Method   C1       C2       C3       C4       C5       C6       C7       C8
Q-ELM    92.0857  91.8108  91.0178  90.0239  88.8211  86.3334  83.0490  81.7370
P-ELM    90.0024  90.8502  91.1142  88.1223  86.7068  85.2295  80.5144  80.1667
G-ELM    89.0763  91.0038  87.7888  88.2232  85.4190  83.9324  79.6553  78.3395
ELM      85.8684  84.8761  83.4979  82.3035  81.6548  78.2046  76.1726  74.7309
SVM      83.9047  85.8948  84.4922  83.2766  81.1377  74.6795  74.5938  74.1796
BP       83.2354  82.2824  80.9588  79.8117  72.3411  71.9088  70.9969  68.4934

Table 10: Mean classification accuracy (%) for the operating point B condition.

Method   C1       C2       C3       C4       C5       C6       C7       C8
Q-ELM    91.6970  90.3428  89.8130  89.3616  87.7345  86.4981  84.6418  83.6606
P-ELM    91.5233  90.1717  89.6428  89.1923  86.5645  86.3305  82.4815  83.5021
G-ELM    88.9153  87.6171  87.1090  86.6763  85.0338  84.8484  81.1512  80.2104
ELM      84.2722  83.9392  84.0175  82.9732  80.9403  80.5231  77.0267  75.9607
SVM      84.1035  82.8468  83.3550  82.9361  80.2821  81.1347  76.5557  74.6450
BP       80.6848  80.4343  79.9449  79.0281  74.8724  71.0306  69.1692  68.1630

[Figure: two bar charts of mean accuracy (%) versus method (Q-ELM, P-ELM, G-ELM, ELM, SVM, and BP), with bars for conditions A and B.]

Figure 4: Mean classification accuracies of the six methods under different conditions: (a) mean classification accuracies on single fault cases; (b) mean classification accuracies on multiple fault cases.

Table 11: Mean training time on training data.

              Q-ELM   G-ELM   P-ELM   ELM     SVM    BP
Condition A   17.40   21.26   18.75   0.0094  0.126  2.35
Condition B   18.79   20.55   19.06   0.0077  0.160  2.47

p*22, p*3, T*44, and p*5. We trained all the methods with the same training dataset with only 6 input parameters, and the results are listed in Tables 9-11.

Compared with Tables 6 and 7, we can see that the number of input parameters has a great impact on diagnostic accuracy: the results in Tables 9 and 10 are generally lower than those in Tables 6 and 7.

The optimized ELM methods still show better performance than the others, as demonstrated in Figures 4(a) and 4(b). Q-ELM obtained the highest accuracy in all cases except C3 in Table 9, and it attained the best results in all cases in Table 10.

The good results obtained by our method indicate that the selection criteria, which include both the fitness value on the validation dataset and the norm of the output weights, help the algorithms obtain better generalization performance.

[Figure: two line plots of mean accuracy versus iteration (0 to 50) for Q-ELM, P-ELM, and G-ELM.]

Figure 5: Mean evolution of the accuracy of the three methods: (a) fault class C5 and (b) fault class C7.
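As a concrete illustration, a selection rule that combines validation RMSE with the output-weight norm can be sketched as follows. This is a minimal sketch, not the paper's exact rule: the tolerance `tol` and the tie-breaking scheme are assumptions in the spirit of evolutionary ELM practice, where a smaller output-weight norm is preferred when fitness values are comparable.

```python
def better(cand, best, tol=1e-3):
    """Decide whether a candidate network should replace the current best.

    cand, best: tuples (validation_rmse, output_weight_norm).
    Prefers a clearly lower validation RMSE; when the RMSEs are within the
    assumed tolerance `tol`, prefers the smaller output-weight norm, which
    tends to give better generalization.
    """
    rmse_c, norm_c = cand
    rmse_b, norm_b = best
    if rmse_c < rmse_b - tol:
        return True  # clearly better fitness on validation data
    if abs(rmse_c - rmse_b) <= tol and norm_c < norm_b:
        return True  # comparable fitness, smaller weight norm wins
    return False

assert better((0.10, 5.0), (0.20, 1.0))      # clearly lower RMSE wins
assert better((0.100, 1.0), (0.1005, 5.0))   # RMSE tie -> smaller norm wins
assert not better((0.30, 0.1), (0.10, 9.0))  # higher RMSE loses despite small norm
```

The point of the second branch is exactly the criterion discussed above: among networks with similar validation error, the one with the smaller output-weight norm is kept.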

In order to evaluate the proposed method in more depth, the mean evolution of the accuracy on the validation dataset over 10 trials of the three optimized ELM methods, for the fault class C5 and C7 cases in condition B, is plotted in Figure 5.

It can be observed from Figure 5 that Q-ELM has much better convergence performance than the other two methods and obtains the best mean accuracy after 50 iterations; P-ELM is better than G-ELM. In fact, Q-ELM can achieve the same accuracy level as G-ELM within only half of the total iterations for these two cases.

The high classification rate of our method is mainly due to its quantum-behaved mechanics, which help QPSO search the solution space more effectively, so that it outperforms P-ELM and G-ELM by converging to a better result.
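A minimal sketch of the quantum-behaved position update that drives this search may help make the mechanism concrete. This is an illustrative single-particle, single-dimension update under the standard QPSO formulation; the function name and arguments are our own, and `beta` denotes the contraction-expansion coefficient:

```python
import math
import random

def qpso_step(x, pbest, gbest, mbest, beta):
    """One QPSO position update for one particle in one dimension.

    x: current position; pbest: particle's personal best; gbest: swarm best;
    mbest: mean of all personal bests; beta: contraction-expansion factor.
    """
    phi = random.random()
    p = phi * pbest + (1.0 - phi) * gbest      # stochastic local attractor
    u = 1.0 - random.random()                  # in (0, 1], avoids log(1/0)
    step = beta * abs(mbest - x) * math.log(1.0 / u)
    # The particle appears on either side of the attractor with equal probability.
    return p + step if random.random() < 0.5 else p - step

# With beta = 0 the update collapses onto the attractor, which always lies
# between pbest and gbest.
y = qpso_step(x=0.0, pbest=2.0, gbest=4.0, mbest=3.0, beta=0.0)
assert 2.0 <= y <= 4.0
```

The `log(1/u)` term is what gives QPSO its long-tailed jumps: most moves stay near the attractor, but occasional large steps let the swarm escape local optima, which is the behavior credited above for the better convergence of Q-ELM.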

7. Conclusions

In this paper, a new hybrid learning approach for SLFNs, named Q-ELM, was proposed. The proposed algorithm optimizes both the neural network parameters (input weights and hidden biases) and the hidden layer structure using QPSO, and the output weights are calculated by the Moore-Penrose generalized inverse, as in the original ELM. When optimizing the network parameters, not only the RMSE on the validation dataset but also the norm of the output weights is included in the selection criteria.
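The analytic output-weight computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the sigmoid activation and the array shapes are assumptions, and in Q-ELM the input weights `W` and biases `b` would be the values selected by QPSO rather than random ones.

```python
import numpy as np

def elm_output_weights(X, T, W, b):
    """Compute ELM output weights via the Moore-Penrose pseudoinverse.

    X: (n_samples, n_inputs) training inputs
    T: (n_samples, n_outputs) training targets
    W: (n_inputs, n_hidden) input weights (selected by QPSO in Q-ELM)
    b: (n_hidden,) hidden biases
    """
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T            # closed form, no iterative training
    return beta

# Illustrative usage with random data and hypothetical sizes.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))
T = rng.standard_normal((20, 2))
W = rng.standard_normal((4, 10))
b = rng.standard_normal(10)
beta = elm_output_weights(X, T, W, b)
assert beta.shape == (10, 2)
```

Because only `beta` is solved for, each candidate network proposed by QPSO can be evaluated cheaply, which is what makes wrapping ELM inside a population-based optimizer practical.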

To validate the performance of the proposed Q-ELM, we applied it to several real-world classification applications and a gas turbine fan engine fault diagnostics problem and compared it with several state-of-the-art methods. Results show that our method obtains the highest classification accuracy in most test cases and shows a clear advantage over the other optimized ELM methods, SVM, and BP. This advantage becomes more prominent when the number of input parameters in the training dataset is reduced, which suggests that our method is a more suitable tool for real engine fault diagnostics applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] X. Yang, W. Shen, S. Pang, B. Li, K. Jiang, and Y. Wang, "A novel gas turbine engine health status estimation method using quantum-behaved particle swarm optimization," Mathematical Problems in Engineering, vol. 2014, Article ID 302514, 11 pages, 2014.

[2] Z. N. S. Vanini, K. Khorasani, and N. Meskin, "Fault detection and isolation of a dual spool gas turbine engine using dynamic neural networks and multiple model approach," Information Sciences, vol. 259, pp. 234-251, 2014.

[3] T. Brotherton and T. Johnson, "Anomaly detection for advanced military aircraft using neural networks," in Proceedings of the IEEE Aerospace Conference, vol. 6, pp. 3113-3123, Big Sky, Mont, USA, March 2001.

[4] S. Ogaji, Y. G. Li, S. Sampath, and R. Singh, "Gas path fault diagnosis of a turbofan engine from transient data using artificial neural networks," in Proceedings of the 2003 ASME Turbine and Aeroengine Congress, ASME Paper No. GT2003-38423, Atlanta, Ga, USA, June 2003.

[5] L. C. Jaw, "Recent advancements in aircraft engine health management (EHM) technologies and recommendations for the next step," in Proceedings of the 50th ASME International Gas Turbine & Aeroengine Technical Congress, Reno, Nev, USA, June 2005.

[6] S. Osowski, K. Siwek, and T. Markiewicz, "MLP and SVM networks: a comparative study," in Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG '04), pp. 37-40, Espoo, Finland, June 2004.

[7] M. Zedda and R. Singh, "Fault diagnosis of a turbofan engine using neural networks: a quantitative approach," in Proceedings of the 34th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, AIAA 98-3602, Cleveland, Ohio, USA, July 1998.

[8] C. Romessis, A. Stamatis, and K. Mathioudakis, "A parametric investigation of the diagnostic ability of probabilistic neural networks on turbofan engines," in Proceedings of the ASME Turbo Expo 2001: Power for Land, Sea, and Air, Paper no. 2001-GT-0011, New Orleans, La, USA, June 2001.

[9] A. J. Volponi, H. DePold, R. Ganguli, and C. Daguang, "The use of Kalman filter and neural network methodologies in gas turbine performance diagnostics: a comparative study," Journal of Engineering for Gas Turbines and Power, vol. 125, no. 4, pp. 917-924, 2003.

[10] S.-M. Lee, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of SUAV gas turbine engine using hybrid SVM-artificial neural network method," Journal of Mechanical Science and Technology, vol. 23, no. 1, pp. 559-568, 2009.

[11] S.-M. Lee, W.-J. Choi, T.-S. Roh, and D.-W. Choi, "A study on separate learning algorithm using support vector machine for defect diagnostics of gas turbine engine," Journal of Mechanical Science and Technology, vol. 22, no. 12, pp. 2489-2497, 2008.

[12] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.

[13] G.-B. Huang, Z. Bai, L. L. C. Kasun, and C. M. Vong, "Local receptive fields based extreme learning machine," IEEE Computational Intelligence Magazine, vol. 10, no. 2, pp. 18-29, 2015.

[14] G.-B. Huang, "What are extreme learning machines? Filling the gap between Frank Rosenblatt's dream and John von Neumann's puzzle," Cognitive Computation, vol. 7, no. 3, pp. 263-278, 2015.

[15] Q.-Y. Zhu, A. K. Qin, P. N. Suganthan, and G.-B. Huang, "Evolutionary extreme learning machine," Pattern Recognition, vol. 38, no. 10, pp. 1759-1763, 2005.

[16] J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, "A modified quantum-behaved particle swarm optimization," in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294-301, Beijing, China, May 2007.

[17] M. Xi, J. Sun, and W. Xu, "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position," Applied Mathematics and Computation, vol. 205, no. 2, pp. 751-759, 2008.

[18] T. Matias, F. Souza, R. Araújo, and C. H. Antunes, "Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine," Neurocomputing, vol. 129, pp. 428-436, 2014.

[19] F. Han, H.-F. Yao, and Q.-H. Ling, "An improved evolutionary extreme learning machine based on particle swarm optimization," Neurocomputing, vol. 116, pp. 87-93, 2013.

[20] D.-H. Seo, T.-S. Roh, and D.-W. Choi, "Defect diagnostics of gas turbine engine using hybrid SVM-ANN with module system in off-design condition," Journal of Mechanical Science and Technology, vol. 23, no. 3, pp. 677-685, 2009.
