
Fault Diagnostics and Prognostics Methods in Nuclear Power Plants

Piero Baraldi

Politecnico di Milano

[Figure: Detect → Diagnose → Predict for a nuclear power plant component. Measured signals x1(t), x2(t) distinguish normal from anomalous operation; diagnosis assigns the malfunctioning type to one of the classes c1, c2, c3; prognosis predicts the Remaining Useful Life (RUL).]

Fault Diagnostics and Prognostics

Data-driven fault diagnostics


In This Lecture

1. Fault diagnostics: what is it?

2. Procedural steps for developing a fault diagnostic system

3. Artificial Neural Networks


Fault diagnostics: What is it?

[Figure: an industrial system, driven by forcing functions f1 and f2, produces measured signals during a transient. Fault detection is the early recognition of an anomaly; fault diagnosis is the correct assignment of the fault to its class.]


Fault diagnostics: transient classification

[Figure: the measured signals X1(t), X2(t), X3(t) acquired during a transient are fed to the diagnostic system (a classifier), which assigns the transient to one of the fault classes 1, 2, …, c.]


In This Lecture

1. Fault diagnostics: what is it?

2. Procedural steps for developing a fault diagnostic system

3. Objectives of a fault diagnostic system

4. Unsupervised classification methods

5. Supervised classification methods


Empirical Approach to Fault Classification

1. Identify fault/degradation classes:

▪ System Analysis (FMECA, Event Tree Analysis, …)

▪ Good engineering sense and practice

▪ Analysis of the historical data

[Figure: examples in the plane of signal 1 (x1) versus signal 2 (x2), grouped into the classes C1 and C2.]


Empirical Approach to Fault Classification

1. Identify fault/degradation classes

2. Collect data couples: <measurements, class>

▪ Historical data

▪ Fault transients in industrial systems may be rare

▪ Information on the class of the fault originating the transient is not always available

[Figure: data couples in the plane of representative signals x1 and x2, labelled with the classes C1 and C2.]


Example: turbine transients in a Nuclear Power Plant

• Healthy
• Type 1 malfunctioning
• Type 2 malfunctioning
• Type 3 malfunctioning
• Type 4 malfunctioning

Available data

• 5 signals (operational conditions) + 7 signals (turbine behaviour)

• 149 shut-down transients from a nuclear power plant

• Types of anomalies are unknown


Empirical Approach to Fault Classification

1. Identify fault classes

2. Collect data couples: <measurements, fault class>

▪ Historical plant data

▪ Fault transients in industrial systems may be rare

▪ Information on the class of the fault originating the transient is not always available

▪ Physics-based simulator



Empirical Approach to Fault Classification

1. Identify fault classes

2. Collect data couples: <measurements, fault class>

3. Develop the empirical classification algorithm using the collected data couples <measurements, fault class> as training set

[Figure: the classifier, trained on the labelled couples, maps the measured signals X1(t), X2(t), X3(t) to one of the fault classes 1, 2, …, c.]

Data-driven fault prognostics approaches

• Machine learning and data mining algorithms
o Support Vector Machines
o K-Nearest Neighbour Classifier
o Fuzzy similarity-based approaches
o Artificial Neural Networks
o Neuro-fuzzy Systems
o Relevance Vector Machines
o …
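A minimal sketch of one algorithm from this list, a K-Nearest Neighbour classifier trained on labelled <measurements, class> couples; the two-signal data set, cluster centres and value of k are all synthetic illustrative choices:

```python
import numpy as np

# Hypothetical fault classifier: assign a new measurement vector to the
# majority class among its k nearest labelled training patterns.
rng = np.random.default_rng(0)

# Two signals per pattern; class 0 clusters near (0, 0), class 1 near (3, 3).
X_train = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
                     rng.normal(3.0, 0.5, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)

def knn_classify(x, X_train, y_train, k=3):
    """Return the majority class among the k training patterns closest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

predicted = knn_classify(np.array([2.8, 3.1]), X_train, y_train)
```

The same <measurements, class> training couples would feed any of the other listed algorithms; only the model family changes.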


o Prognostics

o What is it?

o Sources of information

o Prognostic approaches


Fault prognostics: What is it?

Fault prognostics is the prediction of the Remaining Useful Life (RUL) of a degrading component.

[Figure: measured signals x1(t), x2(t); from the current time onward, the task is to predict the RUL.]




Prognostics: Sources of Information

[Figure: the sources of information for prognostics: life durations of a set of similar components; threshold of failure; current degradation trajectory; external/operational conditions; degradation trajectories of similar components; a physics-based model of the degradation.]

Sources of Information: An Example

Component: turbine blade

Degradation mechanism: creep


Prognostics: an example

Component: turbine blade
Degradation mechanism: creep
Degradation indicator: blade elongation

x(t) = (length(t) − initial length) / initial length

Sources of information for prognostics

• Life durations of a set of similar components which have already failed: T1, T2, …, Tn

[Figure: histogram of the failure times (days) of the similar components.]


• Threshold of failure:

[Figure: after fault initiation, the degradation indicator x(t) grows until it reaches the failure threshold x_th at the failure time t_f.]


«A blade is discarded when the elongation, x, reaches 1.5%»
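A sketch of how such a threshold is used: the failure time is the first inspection time at which the degradation indicator reaches the 1.5% elongation threshold. The linear trajectory and the inspection times below are synthetic:

```python
# Estimate the failure time t_f as the first time the degradation indicator
# x(t) (relative elongation) reaches the threshold x_th = 1.5%.
x_th = 0.015                       # discard threshold: 1.5% elongation
times = list(range(0, 101))        # inspection times (e.g., days) - synthetic
x = [0.00021 * t for t in times]   # synthetic linear degradation trajectory

t_f = next(t for t, xi in zip(times, x) if xi >= x_th)
```

With the synthetic growth rate above, the threshold is first reached at t = 72, so the blade would be discarded at that inspection.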



• A sequence of observations collected from the degradation initiation to the present time (current degradation trajectory): z1, z2, …, zk

[Figure: the observations z(t) up to the present time t_k (the elongation measurements = past evolution of the degradation indicator), together with the threshold of failure.]



• Degradation trajectories of similar components

[Figure: degradation trajectories z(t) of similar components, from fault initiation to failure.]


• Information on external/operational conditions (past, present and future)

Past, present and future time evolution of: u1, u2, …, uk, uk+1, …


u1 = T = temperature
u2 = θr = rotational speed



• A physics-based model of the degradation process

xk = f(xk−1, …, uk−1)


• Measurement equation


Norton law for creep growth:

dx/dt = A φ^n exp(−Q/(RT))    (the exponential is the Arrhenius law)

x = blade elongation
T = temperature
φ = K θr² = applied stress
θr = rotational speed
A, Q, n = equipment-inherent parameters
R = gas constant

External/operational conditions: T, θr
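A sketch of how the Norton law can be integrated numerically (forward Euler) to project the blade elongation; every parameter value (A, Q, n, K, θr, T) below is an illustrative assumption, not turbine data:

```python
import math

# Creep growth under the Norton law dx/dt = A * phi**n * exp(-Q/(R*T)),
# integrated with forward Euler under constant operating conditions.
R = 8.314                   # gas constant [J/(mol K)]
A, Q, n = 1e-6, 5e4, 3.0    # equipment-inherent parameters (assumed values)
K, theta_r = 1e-4, 300.0    # applied stress phi = K * theta_r**2 (assumed)
T = 900.0                   # blade temperature [K] (assumed)

phi = K * theta_r ** 2
rate = A * phi ** n * math.exp(-Q / (R * T))  # constant conditions -> constant rate

x, dt = 0.0, 1.0            # initial elongation and time step
trajectory = [x]
for _ in range(100):        # 100 forward-Euler time steps
    x += rate * dt
    trajectory.append(x)
```

Under time-varying conditions, T and θr would be updated inside the loop and the rate recomputed at each step.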




Prognostic approaches

[Figure: the three families of prognostic approaches and the sources of information they exploit. Experience-based: life durations of a set of similar components. Data-driven: threshold of failure, current degradation trajectory, external/operational conditions, degradation trajectories of similar components. Model-based: a physics-based model of the degradation.]

[Figure: prognostic approaches (experience-based, data-driven, model-based) with their sources of information, now including the measurement equation among the model-based sources; hybrid approaches combine several of them.]

Data-driven fault prognostics

o When?

❖ The understanding of the first principles of component degradation is not comprehensive

❖ The system is sufficiently complex that developing an accurate physical model is prohibitively expensive

o Advantages:

❖ Quick and cheap to develop

❖ They can provide system-wide coverage

o Disadvantages:

❖ They require a substantial amount of data for training:

Degradation data are difficult to collect for new or highly reliable systems (running systems to failure can be a lengthy and costly process)

The quality of the data can be low due to difficulties in the signal measurements

Artificial Neural Networks

The problem: how to build an empirical model using input/output data?

x1(1), x2(1), x3(1) | t1(1), t2(1)
x1(2), x2(2), x3(2) | t1(2), t2(2)

What are Artificial Neural Networks (ANN)?

Multivariate, nonlinear interpolators capable of reconstructing the underlying complex nonlinear I/O relations by combining multiple simple functions.

An artificial neural network is composed of several simple computational units (also called nodes or neurons).

The computational unit

[Figure: a node receives the incoming signals y1, y2, …, yn, sums them, and applies an activation function f to deliver the output u. For a sigmoidal f, the derivative is f′ = f(1 − f). The same unit can be drawn in several equivalent ways.]
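The unit can be sketched in a few lines; the incoming signals are assumed to be already weighted:

```python
import math

# The computational unit: sum the incoming signals y_i and apply a sigmoid f.
# For the sigmoid, the derivative satisfies f'(net) = f(net) * (1 - f(net)).
def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def unit_output(inputs):
    """Output u of a node receiving the already-weighted signals y_1..y_n."""
    return sigmoid(sum(inputs))

u = unit_output([0.5, -0.2, 0.1])   # net input = 0.4
```

The identity f′ = f(1 − f) is what makes the sigmoid convenient during training: the derivative is obtained from the node output itself, with no extra evaluation.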

The weighted connection

Node j is connected to node m by a weighted connection: node j delivers its output uj, and node m receives ymj = wmj · uj, where wmj is the synaptic weight between nodes j and m.

An artificial neural network is composed of several simple computational units (also called nodes or neurons), directionally connected by weighted connections and organized in a proper architecture.

An example of ANN architecture: the multilayered feedforward ANN

[Figure: a multilayered feedforward ANN, with an input layer, a hidden layer and an output layer, fully connected layer to layer.]

Multilayered Feedforward NN

INPUT LAYER: each k-th node (k = 1, 2, …, n_i) receives the value of the k-th component x_k of the input vector and delivers the same value.

HIDDEN LAYER: each j-th node (j = 1, 2, …, n_h) receives

y_j^h = Σ_{k=1}^{n_i} w_jk x_k + w_j0

and delivers

u_j^h = f^h( Σ_{k=1}^{n_i} w_jk x_k + w_j0 ), with f^h typically sigmoidal.

OUTPUT LAYER: each l-th node (l = 1, 2, …, n_o) receives

y_l^o = Σ_{j=1}^{n_h} w_lj u_j^h + w_l0

and delivers

u_l^o = f^o( Σ_{j=1}^{n_h} w_lj u_j^h + w_l0 ), with f^o typically linear or sigmoidal.
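The forward calculation can be sketched in a few lines; the layer sizes and weights below are random illustrative values, not a trained network:

```python
import numpy as np

# Forward pass of a multilayered feedforward ANN: hidden nodes compute
# u_j^h = f_h(w_j0 + sum_k w_jk x_k) with a sigmoidal f_h; output nodes
# compute u_l^o with a linear f_o.
rng = np.random.default_rng(1)
n_i, n_h, n_o = 3, 4, 2              # input, hidden, output layer sizes

W_h = rng.normal(size=(n_h, n_i))    # w_jk, hidden-layer weights
b_h = rng.normal(size=n_h)           # w_j0, hidden biases
W_o = rng.normal(size=(n_o, n_h))    # w_lj, output-layer weights
b_o = rng.normal(size=n_o)           # w_l0, output biases

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def forward(x):
    u_h = sigmoid(W_h @ x + b_h)     # hidden-layer outputs u_j^h
    return W_o @ u_h + b_o           # linear output layer u_l^o

u_o = forward(np.array([0.2, -1.0, 0.5]))
```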

Biological Motivation

The human brain is a densely interconnected network of approximately 10^11 neurons, each connected, on average, to 10^4 others. Neuron activity is excited or inhibited through connections to other neurons.

The neuron: the dendrites provide input signals to the cell. The axon sends output signals to cell 2 via the axon terminals, which merge with the dendrites of cell 2.

What is the mathematical basis behind Artificial Neural Networks?

Neural networks are universal approximators of multivariate nonlinear functions.

KOLMOGOROV (1957):

For any real function f(x1, x2, …, xn) continuous on [0,1]^n, n ≥ 2, there exist n(2n+1) functions ψ_ml(·) continuous on [0,1] such that

f(x1, x2, …, xn) = Σ_{l=1}^{2n+1} Φ_l( Σ_{m=1}^{n} ψ_ml(x_m) )

where the 2n+1 functions Φ_l are real and continuous. Thus, a total of n(2n+1) functions ψ_ml(·) and 2n+1 functions Φ_l of one variable represent a function of n variables.

CYBENKO (1989):

Let σ(·) be a sigmoidal continuous function. The linear combinations

G(x) = Σ_{j=1}^{N} α_j σ( Σ_{i=1}^{n} w_ji x_i + w_j0 )

are dense in the space of continuous functions on [0,1]^n: any such function can be approximated arbitrarily well by a one-hidden-layer network with sigmoidal nodes.

How to train an ANN? Setting the ANN parameters

Once the ANN architecture has been fixed (number of layers, number of nodes per layer), the only parameters to be set are the synaptic weights (wlj, wjk).

Setting the ANN Parameters: Training Phase

Available input/output patterns:

x1(1), x2(1) | t(1)
x1(2), x2(2) | t(2)
…
x1(p), x2(p) | t(p)
…
x1(np), x2(np) | t(np)

The training phase sets the synaptic weights W.

The Training Objective

Training objective: minimize the average squared output deviation error (also called Energy Function):

E = 1/(2 n_p n_o) Σ_{p=1}^{n_p} Σ_{l=1}^{n_o} ( u_pl^o − t_pl )^2

where t_pl is the true l-th output of the p-th training pattern and u_pl^o is the ANN l-th output for the p-th training pattern.
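For a concrete check of the energy function, with synthetic outputs and targets:

```python
import numpy as np

# Energy function over n_p training patterns with n_o outputs each:
# E = 1/(2 n_p n_o) * sum_p sum_l (u_pl - t_pl)^2. Values are synthetic.
u = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # ANN outputs u_pl (n_p = 2, n_o = 2)
t = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # true outputs t_pl

n_p, n_o = u.shape
E = ((u - t) ** 2).sum() / (2 * n_p * n_o)
```

Here each of the four deviations is 0.1 or 0.2, so E = (0.01 + 0.01 + 0.04 + 0.04) / 8 = 0.0125.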

The Training objective: graphical representation

• E is a function of the ANN outputs

• E is a function of the synaptic weights [wjk]

The error back-propagation algorithm updates each weight along the negative gradient of E:

w ← w − α ∂E/∂w

where α is the learning coefficient and ∂E/∂w the gradient.

Error Backpropagation (hidden-output)

For a single training pattern,

E = 1/(2 n_o) Σ_{l=1}^{n_o} ( u_l^o − t_l )^2

The gradient with respect to a hidden-output weight w_lj is obtained by the chain rule:

∂E/∂w_lj = (2(u_l^o − t_l)/(2 n_o)) ∂u_l^o/∂w_lj
         = ((u_l^o − t_l)/n_o) (∂u_l^o/∂y_l^o)(∂y_l^o/∂w_lj)
         = ((u_l^o − t_l)/n_o) f′(y_l^o) ∂( Σ_{j′=1}^{n_h} y_lj′^o )/∂w_lj
         = ((u_l^o − t_l)/n_o) f′(y_l^o) u_j^h

since the output node l receives y_l^o = Σ_{j′=1}^{n_h} y_lj′^o with y_lj^o = w_lj · u_j^h, so that ∂y_lj^o/∂w_lj = u_j^h.

[Figure: output-layer neuron l receiving the weighted signals y_l1^o, …, y_lj^o, …, y_lnh^o from the hidden nodes.]

To stabilize learning, a momentum term is added to the weight update:

Δw_lj(n) = −α ∂E/∂w_lj + μ Δw_lj(n−1)

with α the learning coefficient and μ the momentum.

Error Backpropagation (input-hidden)

Updating the weight wjk (input-hidden connections): similarly to the updating of the output-hidden weights, the output errors are propagated back through the hidden node j:

Δw_jk(n) = −α ∂E/∂w_jk + μ Δw_jk(n−1),  with  ∂E/∂w_jk = δ_j u_k^i

where u_k^i is the value delivered by input node k, δ_j = f′(y_j^h) Σ_{l=1}^{n_o} ((u_l^o − t_l)/n_o) f′(y_l^o) w_lj, α is the learning coefficient and μ the momentum.
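The full update loop can be sketched for a small network; the XOR-like data set, the learning coefficient and the number of epochs are illustrative choices (no momentum term, for brevity):

```python
import numpy as np

# Error back-propagation for one sigmoidal hidden layer and a linear output,
# trained by online gradient descent on E = 1/(2 n_o) sum_l (u_l - t_l)^2.
rng = np.random.default_rng(2)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # synthetic inputs
T = np.array([[0.], [1.], [1.], [0.]])                   # synthetic targets (XOR)

W_h = rng.normal(scale=0.5, size=(4, 2)); b_h = np.zeros(4)   # input-hidden
W_o = rng.normal(scale=0.5, size=(1, 4)); b_o = np.zeros(1)   # hidden-output
alpha = 0.2                                # learning coefficient (assumed)

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

for _ in range(4000):                      # training epochs
    for x, t in zip(X, T):
        u_h = sigmoid(W_h @ x + b_h)       # forward: hidden outputs u_j^h
        u_o = W_o @ u_h + b_o              # forward: linear output u_l^o
        delta_o = u_o - t                  # output error (f_o linear)
        delta_h = (W_o.T @ delta_o) * u_h * (1 - u_h)   # back-propagate, f' = f(1-f)
        W_o -= alpha * np.outer(delta_o, u_h); b_o -= alpha * delta_o
        W_h -= alpha * np.outer(delta_h, x);  b_h -= alpha * delta_h

mse = float(np.mean([(W_o @ sigmoid(W_h @ x + b_h) + b_o - t) ** 2
                     for x, t in zip(X, T)]))
```

After training, the network should fit the four patterns far better than a constant predictor (whose mean squared error on this data set is 0.25).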

Utilization of the Neural Network

After training:

• The synaptic weights are fixed

• New input → retrieval of the information stored in the weights → output

Capabilities:

• Nonlinearity of the sigmoids → the NN can learn nonlinear mappings

• Each node is independent and relies only on local information (its synapses) → parallel processing and fault tolerance

CONCLUSIONS [ANN]

Advantages:

➢ No physical/mathematical modelling effort.

➢ Automatic parameter adjustment through a training phase based on the available input/output data, so as to obtain the best interpolation of the functional relation between input and output.

Disadvantages:

“Black box”: difficult to interpret the underlying physical model.

ANN for fault diagnostics


Objective

• Build and train a neural network to classify different malfunctions in the plant component


Input/Output patterns

• Input: signal measurements

• Output: number (label) of the failure class


Training set

x1(2) x1(1)   x2(2) x2(1)   …   x10(2) x10(1)   | class
…
x1(t+1) x1(t)   x2(t+1) x2(t)   …   x10(t+1) x10(t)   | class
…
x1(36) x1(35)   x2(36) x2(35)   …   x10(36) x10(35)   | class

ANN for fault prognostics

Possible approaches:

• Learning the component RUL directly from data

• Modeling the cumulative damage (health index) and then extrapolating it out to a damage (health) threshold

Direct RUL Prediction: The model

[Figure: an ANN receives the measured signals S1, …, Sn and outputs the component RUL.]

Information necessary to develop the model

• Signal measurements x1(t), …, xn(t) for a set of N similar components, from degradation onset to failure

[Figure: the run-to-failure signal trajectories available for each similar component.]

ANN training data

Input: the signal measurements along a degradation trajectory up to the current time
Output: the corresponding RUL

Trajectory 1 provides patterns with target RULs t_f^(1) − m − 1, …, 1, 0; …; trajectory N provides patterns with target RULs t_f^(N) − m − 1, …, 1, 0.
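Building such training pairs can be sketched as follows; the trajectories, the window length m and the single monitored signal are synthetic illustrative choices:

```python
# Build <input window, RUL> training pairs from run-to-failure trajectories.
m = 3   # number of past measurements per input pattern (assumed)

# Two synthetic run-to-failure trajectories of one signal; the last sample of
# each list is taken to occur at the failure time t_f.
trajectories = [
    [0.1, 0.2, 0.4, 0.7, 1.1, 1.6],   # t_f = 6
    [0.0, 0.3, 0.5, 1.0, 1.4],        # t_f = 5
]

pairs = []
for traj in trajectories:
    t_f = len(traj)
    for t in range(m, t_f + 1):          # t = current time (end of the window)
        window = traj[t - m:t]           # last m measurements
        pairs.append((window, t_f - t))  # target: remaining useful life

inputs, targets = zip(*pairs)
```

Each pair then becomes one ANN training pattern: the window is the input vector and the RUL is the target output.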

How to set the ANN architecture

[Figure: error on the validation set as a function of the number of hidden nodes (1, 2, 3, …); the recommended architecture is the one with the minimum validation error.]
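The selection can be sketched by training networks of increasing hidden-layer size and comparing their errors on a held-out validation set; the noisy-sine data set and the training settings below are illustrative:

```python
import numpy as np

# Architecture selection: train one-hidden-layer networks with 1..6 hidden
# nodes and keep the size with the lowest validation-set error.
rng = np.random.default_rng(3)

x_all = rng.uniform(-3, 3, 80)
y_all = np.sin(x_all) + rng.normal(scale=0.1, size=80)   # synthetic data
x_tr, y_tr = x_all[:60], y_all[:60]          # training set
x_va, y_va = x_all[60:], y_all[60:]          # validation set

def train_net(n_h, epochs=2000, alpha=0.05):
    """Batch gradient descent on a 1-input, n_h-hidden (tanh), linear-output net."""
    W_h = rng.normal(scale=0.5, size=n_h); b_h = np.zeros(n_h)
    W_o = rng.normal(scale=0.5, size=n_h); b_o = 0.0
    for _ in range(epochs):
        u_h = np.tanh(np.outer(x_tr, W_h) + b_h)     # hidden outputs, (60, n_h)
        u_o = u_h @ W_o + b_o                        # linear outputs
        d_o = (u_o - y_tr) / len(x_tr)               # output error
        W_o -= alpha * (u_h.T @ d_o); b_o -= alpha * d_o.sum()
        d_h = np.outer(d_o, W_o) * (1 - u_h ** 2)    # back-propagated error
        W_h -= alpha * (x_tr @ d_h); b_h -= alpha * d_h.sum(axis=0)
    def predict(x):
        return np.tanh(np.outer(x, W_h) + b_h) @ W_o + b_o
    return predict

val_errors = {}
for n_h in range(1, 7):
    predict = train_net(n_h)
    val_errors[n_h] = float(np.mean((predict(x_va) - y_va) ** 2))

recommended = min(val_errors, key=val_errors.get)   # lowest validation error
```

The validation error, not the training error, drives the choice: too few hidden nodes underfit, while too many overfit the training patterns.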