8/18/2019 IMPERFECT & Partial Valve Stroke testing impact on SIF
1/84
Norwegian University of Science and Technology
Faculty of Social Sciences and Technology Management
Department of Industrial Economics and Technology Management
Master Thesis
PARTIAL AND IMPERFECT TESTING OF
SAFETY INSTRUMENTED FUNCTIONS
Hanne Rolén
June 2007
MASTER THESIS
Spring semester 2007
Student: Hanne Rolén, Department of Industrial Economics and Technology Management

DECLARATION

I hereby declare on my honour and conscience that I have carried out the above thesis myself and without any unlawful assistance.

Lysaker, 8 June 2007

____________________________________________________________Signature

In accordance with § 20 of the Regulations for studies at NTNU, the thesis with drawings etc. becomes the property of NTNU. The work, or results from it, may therefore not be used for other purposes without agreement with the interested parties.
PREFACE

This master thesis was written over 20 weeks during the spring of 2007 as the final work performed by Hanne Rolén at the Norwegian University of Science and Technology (NTNU). The thesis is written within the Department of Industrial Economics and Technology Management, study field Health, Environment and Safety. It is also closely related to the Department of Industrial Production and Quality, as the thesis is within the field of technical safety. The thesis was written in close cooperation with Aker Kværner Subsea.

The intended audience is readers with knowledge of reliability theory, and it is recommended that the reader is familiar with the concepts described in the book “System Reliability Theory” by Rausand and Høyland (second edition, 2004).

I would like to thank my colleagues at Aker Kværner Subsea for their support throughout the semester, and especially Thor Ketil Hallan as supervisor. I would further like to thank Ring-O, Lars Bak (Lilleaker Consulting), and Luciano Sanguineti and Enrico Sanguineti at ATV for giving me the necessary practical understanding of valves. Finally, thanks to Mary Ann Lundteigen (NTNU) for good discussions and to Marvin Rausand (supervisor at NTNU) for important feedback and input throughout the thesis.

Lysaker, 8 June 2007

Hanne Rolén
Partial and imperfect testing of SIF Introduction
11
SUMMARY

In order to avoid the substantial hardware costs of building platforms, moving petroleum production facilities subsea is becoming a popular solution. Fields can be remotely operated, and stand-alone fields that would not be profitable to develop separately can now be tied together to one pipeline/riser, saving expenses. Safety instrumented systems are implemented to reduce or eliminate unacceptable risk associated with such production, and the safety integrity level is a common requirement describing the safety availability of the equipment. When analysing the ability of the safety functions to perform when needed, it is important to evaluate the assumptions that form the basis for the calculations. The author has in particular assessed the assumption that a component is “as good as new” after each proof test, meaning that its unavailability is reduced to zero.
The reasons for imperfect tests may be related to the five M-factors: method, machine, milieu, man-power and material. Through different case studies, the potential effects of imperfect tests have been analyzed. SINTEF has proposed a method for including systematic failures in the calculation of the probability of failure on demand (PFD) by adding a constant value called PTIF (test independent failures) in the PDS method. A method for quantifying the PFD impact of an imperfect test due to non-testable random hardware failures has been proposed by the author. Case results indicate that the PFD impact is far more significant for imperfect testing of hardware failures than for the PDS approach to systematic failures.
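The PDS-style addition of a constant PTIF term on top of the hardware PFD can be sketched numerically. The structure of the calculation follows the PDS idea described above; all numeric values below are illustrative assumptions, not figures from the thesis.

```python
# Sketch of the PDS approach to systematic failures: the average PFD from
# random hardware failures is approximated by lambda_DU * tau / 2, and a
# constant test independent failure probability P_TIF is added on top.
# All numbers are illustrative assumptions.

lambda_du = 2.0e-6   # dangerous undetected failure rate per hour (assumed)
tau = 8760.0         # proof test interval in hours (one year, assumed)
p_tif = 1.0e-4       # test independent failure probability (assumed)

pfd_hardware = lambda_du * tau / 2   # random hardware contribution
pfd_total = pfd_hardware + p_tif     # hardware PFD plus P_TIF addition

print(f"PFD (hardware only): {pfd_hardware:.2e}")
print(f"PFD with P_TIF addition: {pfd_total:.2e}")
```

Note that the PTIF term is independent of the test interval, which is why its relative weight grows as proof testing becomes more frequent.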
Implementing partial stroke testing makes it possible to reveal failure modes that previously could only be revealed through tests requiring process shutdown. A successful implementation may improve the safety integrity level rating of the system. The use of partial stroke testing in subsea petroleum production has so far not been common, and several of the arguments for and against implementing partial stroke testing are assessed.
It has been argued that partial stroke testing leads to an increase in the spurious trip rate, as it is likely that a valve that starts to move will continue to the closed position. The likely reasons for such an event were placed in a Bayesian belief network, which demonstrated the need for implementing the right equipment. New devices such as smart positioners and digital valve controllers have been introduced for the purpose of partial stroke testing, reducing the human interference in partial stroke testing and thus reducing the causes of spurious trips.
Partial stroke testing may be implemented in order to justify extended proof test intervals. As common cause failures are those failures that happen within the same proof test interval, an extension of the interval could imply that more failures are classified as common cause failures (Rausand, 2007). In such situations, it should be discussed whether the β-factor should be increased to reflect the PFD impact this may have.
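The sensitivity to the β-factor can be illustrated for a 1oo2 voted pair using the standard β-factor approximation, in which a fraction β of the dangerous undetected failures is treated as common cause. The formula is the common textbook approximation, not the thesis' own calculation model, and the numeric values are assumed for illustration.

```python
# Standard beta-factor approximation for a 1oo2 voted configuration:
#   PFD ≈ ((1 - beta) * lambda_DU * tau)**2 / 3 + beta * lambda_DU * tau / 2
# The common cause term usually dominates, so extending tau while also
# raising beta compounds quickly. Values are illustrative assumptions.

def pfd_1oo2(lambda_du, tau, beta):
    """Approximate average PFD for a 1oo2 pair under the beta-factor model."""
    independent = ((1 - beta) * lambda_du * tau) ** 2 / 3
    common_cause = beta * lambda_du * tau / 2
    return independent + common_cause

lambda_du = 1.0e-6  # per hour (assumed)

short_tau = pfd_1oo2(lambda_du, 8760.0, 0.02)    # 1 year interval, beta = 2%
long_tau = pfd_1oo2(lambda_du, 17520.0, 0.05)    # 2 year interval, higher beta

print(f"1-year interval, beta 2%:  {short_tau:.2e}")
print(f"2-year interval, beta 5%:  {long_tau:.2e}")
```

The point of the comparison is that doubling the interval while raising β more than quadruples the PFD in this sketch, because the common cause term scales linearly with both β and τ.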
Another argument for implementing partial stroke testing has been the opportunity to reduce the hardware fault tolerance (implying cost savings), since the safe failure fraction is increased by detecting more dangerous undetected failures and converting them to dangerous detected failures. As partial stroke testing does not fulfil the criteria for a diagnostic test, it is argued that partial stroke testing should not be used to affect the safe failure fraction, and hence cannot be an argument for a reduction in the hardware fault tolerance (McCrea-Steele, 2006).
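The mechanism behind this argument can be sketched with the IEC 61508 definition of the safe failure fraction (SFF): the fraction of failures that are either safe or dangerous detected. The failure rates below are illustrative assumptions; the 62% figure is the thesis' tentative PST coverage estimate.

```python
# Safe failure fraction (SFF) per IEC 61508: the share of all failures that
# are safe or dangerous *detected*. Reclassifying dangerous undetected (DU)
# failures as dangerous detected (DD) raises the SFF.
# Failure rates (per hour) are illustrative assumptions.

def sff(lambda_s, lambda_dd, lambda_du):
    """Safe failure fraction = (safe + dangerous detected) / total."""
    total = lambda_s + lambda_dd + lambda_du
    return (lambda_s + lambda_dd) / total

lam_s, lam_dd, lam_du = 2.0e-6, 1.0e-6, 1.5e-6

before = sff(lam_s, lam_dd, lam_du)
# If PST were (incorrectly, per McCrea-Steele) credited as a diagnostic,
# converting 62% of the DU failures into DD failures:
moved = 0.62 * lam_du
after = sff(lam_s, lam_dd + moved, lam_du - moved)

print(f"SFF before: {before:.2%}, after crediting PST: {after:.2%}")
```

The sketch shows why the temptation exists: crediting PST as a diagnostic inflates the SFF, which in the IEC 61508 architectural constraint tables would permit a lower hardware fault tolerance.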
Based on a failure mode assessment of gate valves and OREDA data, the author has proposed a tentative partial stroke testing coverage factor of 62%. The result is in accordance with former research. The partial stroke testing coverage for the dangerous failure modes fail to close, leakage in closed position, delayed operation and external leakage in closed position could not be justified quantitatively, as the production companies do not provide such detailed information. The coverage may differ depending on the valve type, design and production environment.
In particular for components with higher failure rates, from λDU = 1.0 · 10⁻⁶ per hour and above, investing in partial stroke testing can be recommended. Achieving the exact partial stroke testing coverage is less important than the test frequency: the positive PFD impact is greater if the tests are carried out often than if the coverage is improved by an additional 10%. On the other hand, a reduction of the non-testable part by 10% yields a greater improvement of the PFD than obtaining both higher partial stroke testing coverage and shorter test intervals. Hence the focus should be on diminishing the reasons why a test could be unsuccessful. The author performed a case study of the Morvin HIPPS (high integrity pressure protection system) that confirms this outcome. Ignoring the estimation of non-testable failures yields inaccurate PFD results and an inaccurate rating of the safety integrity level. Considering that the use of safety instrumented systems is becoming the common approach for reducing risk in the petroleum production industry, it is important to improve the quality of these calculations.
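The trade-off between PST coverage and test frequency can be illustrated with a commonly used approximation: the fraction of dangerous undetected failures covered by the partial stroke test is tested at the short PST interval, while the remainder is only revealed at the full proof test. The 62% coverage is the thesis' tentative estimate; the formula and the other values are a simplified illustration, not the thesis' own calculation model.

```python
# Simplified approximation of the average PFD with partial stroke testing:
# a fraction `coverage` of the dangerous undetected failures is revealed at
# the short PST interval, the rest only at the full proof test interval.
# Intervals and failure rate are illustrative assumptions.

def pfd_with_pst(lambda_du, coverage, tau_pst, tau_proof):
    """Average PFD for a single component under PST (rare-event approximation)."""
    return (coverage * lambda_du * tau_pst / 2
            + (1 - coverage) * lambda_du * tau_proof / 2)

lambda_du = 1.0e-6        # per hour (assumed)
tau_proof = 4 * 8760.0    # proof test every 4 years (assumed)
tau_pst = 3 * 30 * 24.0   # PST every 3 months (assumed)

base = pfd_with_pst(lambda_du, 0.0, tau_pst, tau_proof)       # no PST
with_pst = pfd_with_pst(lambda_du, 0.62, tau_pst, tau_proof)  # 62% coverage

print(f"PFD without PST: {base:.2e}")
print(f"PFD with 62% PST coverage: {with_pst:.2e}")
```

Varying `coverage` and `tau_pst` in this sketch shows the effect discussed above: shortening the PST interval reduces only the (small) covered term, while the uncovered, non-testable part keeps dominating the total PFD.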
INDEX

PREFACE 9
SUMMARY 11
INDEX 13
LIST OF TABLES 15
LIST OF FIGURES 15
TERMS AND ABBREVIATIONS 16
1 INTRODUCTION 17
1.1 BACKGROUND 17
1.2 OBJECTIVES 17
1.3 DELIMITATIONS 17
1.4 SCIENTIFIC APPROACH 18
1.5 STRUCTURE OF THE REPORT 19
2 THEORETICAL FRAMEWORK 20
2.1 STANDARDS AND GUIDELINES 21
2.1.1 IEC 61508 & IEC 61511 21
2.1.2 OLF 070 GUIDELINE 23
2.2 RELIABILITY DATA SOURCES 23
2.2.1 OREDA 23
2.2.2 PDS 23
2.2.3 EXIDA 24
2.3 SAFETY INSTRUMENTED FUNCTIONS 25
2.3.1 MAIN PRINCIPLES 25
2.3.2 AVAILABILITY OF SIF 28
2.3.3 SIS REQUIREMENTS 31
2.4 SIS APPLIED IN SUBSEA XMT 34
2.5 TESTING OF THE SIS’ ABILITY TO PERFORM THE SIF 37
3 IMPERFECT TESTING 39
3.1 CAUSES FOR AN IMPERFECT TEST 39
3.2 EFFECTS OF AN IMPERFECT TEST 41
3.2.1 CASE A, CONSTANT PFD ADDITION 43
3.2.2 CASE B, INCREASING PFD ADDITION 46
3.2.3 CASE C, DECREASING PFD 52
3.2.4 COMMENTS TO THE IMPERFECT TEST CASES 54
4 PARTIAL STROKE TESTING 55
4.1 MAIN PRINCIPLES AND CONCEPTS 55
4.2 ADVANTAGES AND DISADVANTAGES 57
4.3 PST COVERAGE FACTOR 58
4.4 CORRELATION PST AND SPURIOUS TRIPS 62
4.5 INFLUENCING FACTORS FOR PST CONTRIBUTION FOR A SIS 63
4.6 PST IMPACT ON THE SIL 65
5 DISCUSSION 69
5.1 QUALITY OF THE RELIABILITY ASSESSMENT 69
5.2 UNCERTAINTY REGARDING THE RESULTS 71
5.3 RECOMMENDATIONS FOR FURTHER WORK 71
6 CASE STUDY 72
6.1 INTRODUCTION CASE STUDY; MORVIN 72
6.2 REQUIREMENTS FROM CUSTOMER 72
6.3 HIPPS 74
6.3.1 HIPPS TESTING 75
6.4 SIL RATING 76
7 CONCLUDING REMARKS 79
REFERENCES 81
ANNEX A, XMT 84
LIST OF TABLES

TABLE 1, SIL FOR LOW AND HIGH DEMAND MODE OF OPERATION (IEC 61508-1, 2002) 26
TABLE 2, SIL FOR TYPE A SUBSYSTEM (IEC 61508-2) 32
TABLE 3, SIL FOR TYPE B SUBSYSTEM (IEC 61508-2) 32
TABLE 4, DATA FOR THE SYSTEM TEST EXAMPLE 41
TABLE 5, UNAVAILABILITY AT TIME T OF A SINGLE COMPONENT UNDER IMPERFECT TEST CONDITIONS 48
TABLE 6, PFD AVERAGE DIFFERENCES BETWEEN PERFECT AND IMPERFECT TESTS 49
TABLE 7, MATRIX FOR SIL RATING SENSITIVITY DUE TO IMPERFECT TESTING 50
TABLE 8, DANGEROUS FAILURE MODES AND TEST STRATEGY FOR A SAFETY GATE VALVE (ADAPTED FROM SUMMERS & ZACHARY 2000A, MCCREA-STEELE 2006, KOP 2002, BAK 2007 AND ATV 2007) 59
TABLE 9, RELIABILITY DATA AS BASIS FOR PST COVERAGE ESTIMATION (ADAPTED FROM LUNDTEIGEN & RAUSAND, 2007) 61
TABLE 10, PFD RELATED TO DIVERSE PST COVERAGES, TEST INTERVALS AND (IM)PERFECT TESTING 68
TABLE 11, MORVIN HIPPS REQUIREMENTS (STATOIL, 2007A) 73
TABLE 12, MORVIN HIPPS CASE DATA 76
LIST OF FIGURES

FIGURE 1, IEC 61508 SAFETY LIFECYCLE (IEC 61508) 22
FIGURE 2, SKETCH OF A SIMPLE SIS (RAUSAND & HØYLAND, 2004) 25
FIGURE 3, ALLOCATION OF SIL (IEC 61508-1) 27
FIGURE 4, RISK REDUCTION (IEC 61508) 27
FIGURE 5, FAILURE MODE CLASSIFICATION (IEC 61508) 28
FIGURE 6, FRACTIONS OF DIFFERENT TYPES OF FAILURES FOR A SYSTEM WITH TWO COMPONENTS 30
FIGURE 7, SIS DESIGN REQUIREMENTS 31
FIGURE 8, WELLHEAD AND XMT (OREDA, 2002) 34
FIGURE 9, HORIZONTAL XMT (AKS, 2007) 35
FIGURE 10, GATE VALVE WITH ACTUATOR (RING-O, 2007) 36
FIGURE 11, UP AND DOWN TIME RELATED TO TESTS 38
FIGURE 12, CAUSES FOR IMPERFECT TESTING OF SUBSEA SAFETY VALVES 40
FIGURE 13, RELIABILITY BLOCK DIAGRAM OF A SIMPLE SIS 41
FIGURE 14, CONTRIBUTION TO UNAVAILABILITY (PDS METHOD, 2006) 43
FIGURE 15, SKETCH OF THE PFD IMPACT WITH PTIF ADDITION (CASE A) 44
FIGURE 16, UNAVAILABILITY UNDER IMPERFECT TEST CONDITION CASE A 44
FIGURE 17, SKETCH OF THE PFD IMPACT WITH IMPERFECT TEST ADDITION (CASE B) 46
FIGURE 18, SERIES STRUCTURE WHEN IMPERFECT TEST OF A COMPONENT 46
FIGURE 19, UNAVAILABILITY UNDER IMPERFECT TEST CONDITION CASE B 47
FIGURE 20, UNAVAILABILITY FOR DIFFERENT FAILURE RATES UNDER IMPERFECT TESTING 49
FIGURE 21, THE M-FACTORS’ CONTRIBUTION TO THE IMPERFECT TEST ADDITION 50
FIGURE 22, UNAVAILABILITY WITH DECREASING PTIF ADDITION (CASE C1) 52
FIGURE 23, UNAVAILABILITY WITH DECREASING IMPERFECT TEST ADDITION (CASE C2) 53
FIGURE 24, PFD RESULTS FROM CASE STUDIES ON IMPERFECT TESTING 54
FIGURE 25, PST IMPACT ON THE PFD (LUNDTEIGEN & RAUSAND, 2007) 55
FIGURE 26, SIMPLE SIS WITH PST IMPLEMENTATION (ADAPTED FROM MCCREA-STEELE, 2006) 56
FIGURE 27, OVERVIEW OF RELEVANT FAILURE RATES (LUNDTEIGEN & RAUSAND, 2007) 58
FIGURE 28, BAYESIAN BELIEF NETWORK FOR ST DURING PST 62
FIGURE 29, UNAVAILABILITY WITH PST 65
FIGURE 30, UNAVAILABILITY WITH PST AND PTIF ADDITION 66
FIGURE 31, UNAVAILABILITY WITH PST AND IMPERFECT TESTING 67
FIGURE 32, HIPPS SCHEMATIC (KOP, 2004) 74
FIGURE 33, HIPPS RELIABILITY BLOCK DIAGRAM FOR MORVIN FIELD DEVELOPMENT 76
FIGURE 34, PFD RESULTS FOR THE DIFFERENT CALCULATION APPROACHES 78
TERMS AND ABBREVIATIONS

ALARP As Low As is Reasonably Practicable
CCF Common Cause Failures
CSU Critical Safety Unavailability
DC Diagnostic Coverage
DD Dangerous Detected
DU Dangerous Undetected
DOP Delayed Operation
E/E/PE(S) Electrical/Electronic/Programmable Electronic (System)
ELP External Leakage in closed Position
ESD Emergency Shut Down
FT Function Test
FTC Fail To Close
FME(C)A Failure Mode Effect and (Criticality) Analyses
HAZOP Hazard and Operational Analyses
HFT Hardware Fault Tolerance
HIPPS High Integrity Pressure Protection System
HP/HT High Pressure High Temperature
HSE Health, Safety and Environment
IEC International Electrotechnical Commission
ISO International Organization for Standardization
KOP Kværner Oilfield Products
LCP Leakage in Closed Position
OLF The Norwegian Oil Industry Association
OREDA Offshore Reliability Data
PDS Reliability of computer-based safety systems (Norwegian)
PFD Probability of Failure on Demand
PMV Production Master Valve
PWV Production Wing Valve
PST Partial Stroke Testing
PT Proof Test
RBD Reliability Block Diagram
ROV Remotely Operated Vehicle
SCSSV Surface Controlled Subsurface Safety Valve
SD Safe Detected
SFF Safe Failure Fraction
SIF Safety Instrumented Function
SIL Safety Integrity Level
SIS Safety Instrumented System
SRS Safety Requirement Specification
ST Spurious Trip
SU Safe Undetected
XMT X-mas Tree
1 Introduction

1.1 Background

As safety instrumented systems are becoming increasingly important within petroleum production, there is a need for a good understanding of the assumptions and simplifications that form the basis for the assessment. Reliability calculations of subsea production systems are usually based on the IEC 61508 (2002) approach, utilizing OREDA (2002) data. A basic assumption is that after test and repair, the system is “as good as new”, meaning that the system unavailability is reduced to zero. As the demand for continuous production pushes the test intervals ever further apart, it is increasingly important to study these aspects more profoundly. Are the safety levels as good as claimed? Is it necessary to change the calculation method in order to reflect reality?

The topic of this thesis has been developed in cooperation with Marvin Rausand at NTNU and Thor Ketil Hallan at Aker Kværner Subsea, as it is of interest to both parties.

The intended audience is readers with knowledge of reliability theory, and it is recommended that the reader is familiar with the concepts described in the book “System Reliability Theory” by Rausand and Høyland (second edition, 2004). Basic knowledge of subsea petroleum production is also an advantage.
1.2 Objectives

The main objectives have been to:

• Study the IEC 61508 (2002) and IEC 61511 (2004) standards and the OLF 070 guideline (2004)
• Describe the causes of imperfect tests
• Estimate the impact of imperfect tests on the probability of failure on demand
• Describe partial stroke testing (PST)
• Estimate the impact of PST on the probability of failure on demand
• Perform a case study from one of Aker Kværner Subsea’s actual projects
1.3 Delimitations

The time scope for carrying out the master thesis is set to 20 weeks; hence there has been a need to choose more specific topics to assess. Since the author has been present at Aker Kværner Subsea during the thesis period, the focus is on reliability in subsea petroleum production. Because of the author’s special interest in safety topics, the reliability assessment is limited to the safety reliability (availability), excluding the production reliability.

The purpose of the case study is to relate the results to an actual field. The weight of the thesis lies in the assessment of PST and imperfect testing.
1.4 Scientific approach

The information gathering process has mainly consisted of literature research and feedback from experts. In order to learn about subsea systems in general, and Aker Kværner Subsea products in particular, the author has been present at Aker Kværner Subsea facilities during the whole semester, getting input from the engineers day by day. This has been significant for the progress of the thesis. The author’s knowledge of subsea systems was limited prior to the thesis start-up, and consequently a substantial effort was put into getting familiarized with the topic. The meetings with valve suppliers and external professionals with experience from the field gave valuable insight into subsea petroleum production.

As the topic of the thesis is closely connected to the OLF 070 guideline (2004) and the IEC 61508 (2002) and IEC 61511 (2004) standards, it was natural to study these. Furthermore, there has been an extensive search for literature, as the concept of partial stroke testing is fairly new and imperfect testing is hardly discussed in the literature. Here, Engineering Village and ScienceDirect were of great importance, in addition to recommendations from my supervisor. The search engine Google has also been utilized to find additional information. The IEC 61508/61511 standards hardly mention imperfect testing of SIS, while the PDS method (2006) has a different approach than the standards, making it interesting to develop new concepts and approaches to testing. The progress and quality of the work has been assured through discussions and feedback from my supervisor and other key personnel at the university, as well as colleagues at Aker Kværner Subsea.
1.5 Structure of the report

The master thesis is structured as described below:

Chapter 1, Introduction: Presentation of the background of the master thesis topic, its objectives, delimitations and scientific approach.

Chapter 2, Theoretical framework: Introduction to IEC 61508/61511, the OLF 070 guideline (2004), the PDS method (2006), OREDA (2002) and exida (2003). Description of basic concepts: SIF, SIS, safety unavailability contributors, failure classification, failure modes and testing. Briefly about subsea X-mas trees and safety valves.

Chapter 3, Imperfect testing: Imperfect testing is defined, and its possible causes are described. Three cases are developed to illustrate possible effects of imperfect testing. A new method for quantifying imperfect tests is proposed.

Chapter 4, Partial stroke testing: Description of partial stroke testing and its advantages and disadvantages. Assessment of the rationale behind the partial stroke testing coverage factor. The correlation between partial stroke testing and spurious trips, and other factors influencing the partial stroke testing contribution, are described. Partial stroke testing is applied to the case results from chapter 3.

Chapter 5, Discussion: Discussion of the results of imperfect and partial stroke testing, the credibility of the data utilized and the need for further work.

Chapter 6, Case study: The theories from the former chapters are applied to a real life system; the Morvin field development.

Chapter 7, Concluding remarks: Concluding remarks regarding imperfect testing and partial stroke testing.
2 Theoretical framework

Subsea equipment is becoming the typical solution for the offshore petroleum industry as production moves to deeper and more demanding areas. A substantial increase in subsea oil and gas production is expected in the years to come (Subseazone, 2007). At the same time, there has been growing concern regarding health, safety and environment (HSE) among the general public and governments over the last years, which has led to stricter legislation within the field. Together with the high costs of subsea intervention, this gives the oil companies incentives to achieve a high level of safety and reliability in their systems.

Safety instrumented systems (SIS) have become ever more common as a measure for reducing risk. A SIS is designed to prevent, or mitigate, hazardous events in the system it is implemented to protect. Examples of hazards related to subsea production are topside blowouts (possible personnel fatalities and material damage) and leakage to water (environmental danger). One SIS can perform one or several safety instrumented functions (SIF).

With this increased dependency on SIS to mitigate risk, it is crucial to be aware of the assumptions and simplifications that form the basis for the reliability calculations. In the following, short introductions to the important standards and guidelines within the field are given, as well as a more thorough description of SIF.
2.1 Standards and guidelines

The IEC 61508 (2002) standard and the more process specific IEC 61511 (2004) standard are safety standards that state requirements for the use of SIF. OLF has developed a guideline (2004) for the implementation of the two standards.

2.1.1 IEC 61508 & IEC 61511

IEC 61508 shall be applied whenever “there is a possibility that E/E/PE technologies might be used, (…) so that the functional safety requirements for any E/E/PE safety-related systems are determined in a methodical, risk-based manner” (IEC, 2002). If the hardware device has already been proven in use, IEC 61511 can be followed, as this standard focuses on the integration of such components. Note that E/E/PE (Electrical/Electronic/Programmable Electronic) safety-related systems are referred to as SIS (safety instrumented systems) throughout this thesis.

IEC 61508-6 states that “the overall goal is to ensure that plant and equipment can be safely automated. A key objective of this standard is to prevent failures of control systems triggering other events, which in turn could lead to danger, and (to prevent) undetected failures in protection systems, making the systems unavailable when needed for a safety action”.
The IEC 61508 standard is divided into the following seven parts:

Part 1, General requirements: Specifies the requirements that are applicable to all parts. Introduces the safety life cycle perspective as the technical framework for the standard.

Part 2, Requirements for electrical/electronic/programmable electronic safety-related systems: Provides additional and more specific requirements for the hardware than the first part. Specifies the requirements for activities in the design and manufacturing phase.

Part 3, Software requirements: Provides additional and more specific requirements for the design and development of the software than the first part.

Part 4, Definitions and abbreviations: Lists all the definitions and abbreviations used throughout the standard.

Part 5, Examples of methods for the determination of safety integrity levels: Describes the underlying concepts of risk and gives methods for determining safety integrity levels.

Part 6, Guidelines on the application of IEC 61508-2 and IEC 61508-3: Gives a guideline for the application of part 2 and part 3 with examples and methods.

Part 7, Overview of techniques and measures: Gives an overview of techniques and measures for control of hardware failures and avoidance and control of systematic failures.
The IEC 61508 standard has a lifecycle approach to the SIS, as presented in Figure 1. Following the steps in the lifecycle ensures that the SIF is achieved through a systematic approach to all the necessary activities.

Figure 1, IEC 61508 Safety lifecycle (IEC 61508)

The system design complies with the IEC 61508 and IEC 61511 standards when the company fulfils the requirements related to:

• Management of functional safety
• Safety lifecycle requirements
• Verification
• Process hazard and risk analysis
• Allocation of safety functions to protection layers
• SIS safety requirements specification
• SIS design and engineering
• Factory acceptance testing (FAT)
• SIS installation and commissioning
• Requirements for application software, including selection criteria for utility software
• SIS safety validation
• SIS operation and maintenance
• SIS modification
• SIS decommissioning
• Information and documentation requirements

The calculation of the reliability of SIF is only a small part of IEC 61508 compliance. Some of the assumptions and simplifications made in the IEC standard related to reliability are assessed in this thesis. Lundteigen & Rausand (2006) stated: “The standards are not prescriptive, which gives room for different interpretations, and hence opens up for new methods, approaches and technology”.

The implications the IEC 61508 standard has for SIS are described in more detail throughout chapter 2.
2.1.2 OLF 070 guideline

The purpose of the OLF 070 guideline (2004) is to provide a simplified guideline for the application of IEC 61508 and IEC 61511 in the Norwegian petroleum industry. Note that while the standards are risk-based, meaning that the users have to determine the risks related to the system and on this basis state the required SIL of the SIF, the OLF 070 guideline provides minimum SIL requirements for the most common SIFs, giving the guideline a different approach to assessing the SIS than intended in the IEC 61508 standard.
2.2 Reliability data sources

2.2.1 OREDA

OREDA (Offshore REliability DAta) (2002) collects and exchanges reliability data among the participating companies and acts as the forum for coordination and management of reliability data collection within the oil and gas industry (OREDA, 2007). It was initiated in 1981 and has issued four public editions of a Reliability Data Handbook (1984, -92, -97, -02).

In OREDA a failure is classified as either critical, degraded or incipient:

• Critical failure: “a failure that causes immediate and complete loss of a system’s capability of providing its outputs”.
• Degraded failure: “a failure which is not critical, but which prevents the system from providing its outputs within specifications. Such a failure would usually, but not necessarily, be gradual or partial, and may develop into a critical failure in time”.
• Incipient failure: “a failure which does not immediately cause loss of a system’s capability of providing its output, but which, if not attended to, could result in a critical or degraded failure in the near future”.

A failure is assigned to one of these severity classes independently of its failure mode and failure cause, meaning that a “leakage in closed position” failure mode can be found listed both as a critical and as a degraded failure.
2.2.2 PDS
PDS is the Norwegian abbreviation for "reliability of computer-based safety systems". SINTEF is the author of both a PDS Method Handbook and a PDS Data Handbook. The PDS approach is described in the former, while the latter contains data dossiers for different components. These are based on OREDA, but the project group has made some adjustments and expert judgements upon the figures. The PDS method is in line with the main principles in the IEC 61508 standard, except for a somewhat different approach regarding failure classification, modelling of common cause failures and the treatment of systematic failures (PDS Method, 2006). Of special relevance for this thesis are the quantification of systematic failures, called PTIF (test independent failures, TIF), and the concept of Critical Safety Unavailability (CSU), which in addition to the IEC 61508 approach to PFD calculation also includes downtime due to test and repair.
2.2.3 Exida
Exida (excellence in dependable automation) provides reliability data for use in the process and machinery industries. The numbers are based on FMEDA data or exida comprehensive analysis with data from OREDA, PDS etc. The main reliability concepts and failure classifications correspond to a great extent to those described in IEC 61508.
2.3 Safety Instrumented Functions
Within every industry there is a need to perform hazard identification studies such as HAZID, HAZOP and FMEA in order to decide the need for extra safety measures. Different measures may be implemented, among them the introduction of SIFs. This concept is thoroughly described in the following.
2.3.1 Main principles
SIFs are functions whose purpose is to achieve or maintain a safe state for the system. The Norwegian "Facilities regulations" § 7 state that a SIF has several purposes: it should detect abnormal circumstances, it should prevent such abnormal situations from escalating into a dangerous state, and it should mitigate damage in case of accidents. Lundteigen & Rausand (2006) describe SIF even more simply: detect – decide – act. There are two concepts regarding SIFs: "safety function requirements" and "safety integrity requirements". While the first states which SIFs the SIS should perform, the latter regards the likelihood that the SIS is able to perform the specific SIF satisfactorily within a stated period of time (IEC, 2002).
Any system, implemented in any technology, which carries out a SIF is a SIS (IEC, 2002). A SIS covers all parts of the system that are required to carry out a SIF, and may for example consist of the subsystems sensors, control logic and communication systems, and final actuators, as well as the critical actions of a human operator (IEC, 2002). In this thesis the actuating items are analogous with safety valves. Figure 2 is an illustration of a simple SIS.
Figure 2, Sketch of a simple SIS (Rausand & Høyland, 2004)
Examples of SIS are among others the emergency shut-down system in a hazardous chemical
process plant, automobile indicator lights, anti-lock braking and engine-managementsystems, and remote monitoring, operation or programming of a network-enabled process
plant (adapted from Rausand & Høyland, 2004).
The safety integrity level (SIL) specifies the safety integrity requirements of the SIF to be allocated to the SIS. It states the probability that the SIS fails to perform the requested SIF upon demand, often referred to as the PFD (probability of failure on demand). The PFD may be interpreted in two ways: the probability that the system will be in a dangerous failure mode upon demand, and the fraction of time the system will be in a dangerous failure mode and not work as a SIF. In order to attain the requested SIL it is also required to avoid and
control systematic failures and to select the hardware configuration within the architectural
constraints. These requirements are further described in section 2.3.3.
The IEC 61508 standard divides the SIL into 4 levels, where the highest SIL rating states the lowest probability that the SIS will fail to perform the required SIF. Depending on whether it is a low or high/continuous demand mode of operation, the ranges of the levels differ as shown in Table 1. Low demand mode embraces systems "where the frequency of demands for operation made on a safety related system is no greater than one per year and no greater than twice the proof-test frequency" (IEC 61508, 2002); otherwise the system is classified as a high demand system. An example of a low demand application in subsea production is the safety valve, where the valve remains static until a demand occurs. An application in high demand mode can for example be the brake system in a car.
Table 1, SIL for low and high demand mode of operation (IEC 61508-1, 2002)
Safety integrity level | Low demand mode of operation (average probability of failure to perform its design function on demand) | High demand or continuous mode of operation (probability of a dangerous failure per hour)
4 | ≥ 10^-5 to < 10^-4 | ≥ 10^-9 to < 10^-8
3 | ≥ 10^-4 to < 10^-3 | ≥ 10^-8 to < 10^-7
2 | ≥ 10^-3 to < 10^-2 | ≥ 10^-7 to < 10^-6
1 | ≥ 10^-2 to < 10^-1 | ≥ 10^-6 to < 10^-5
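The low demand column of this table can be encoded as a simple band lookup (a hypothetical helper; the band limits are those of the IEC 61508 low demand table):

```python
def low_demand_sil(pfd_avg):
    """Return the SIL (1-4) whose low demand PFD band contains pfd_avg,
    or None if the value falls outside all bands."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (lower, upper) in bands.items():
        if lower <= pfd_avg < upper:
            return sil
    return None

print(low_demand_sil(5.74e-5))  # falls in the 10^-5 to 10^-4 band -> SIL 4
```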
Figure 3, Allocation of SIL (IEC 61508-1)
The IEC 61508 standard, the Norwegian "Activities regulations" § 1-2 and the Haddon energy model (1973) all share the same philosophy regarding barriers. In order to reduce risk, the priority should be to apply measures in the design, as this is the best way to eliminate the hazards. If this does not reduce the risk to the tolerable region (see the ALARP principle as described in IEC 61508-5 and the Norwegian "Framework regulations" § 9), barriers are introduced in order to prevent or mitigate impact on people, the environment and/or material assets. A SIS can be such a barrier, as shown in Figure 4. Risk reduction beyond what is strictly necessary to come within the acceptable area should be pursued as long as this is economically reasonable.
Figure 4, Risk reduction (IEC 61508)
Note that introducing a SIS is only one measure among others. It is equally important to
introduce other barriers.
2.3.2 Availability of SIF
Dangerous and safe failures
If the SIS fails to perform the intended SIF the system is brought to a fault state. The failure that causes the fault can be either dangerous or safe. A dangerous failure is defined as a "failure which has the potential to put the safety-related system in a hazardous or fail-to-function state" (IEC 61508). Some of these failures can be revealed at an early stage through testing or by coincidence by personnel, while others remain undetected. A distinction is made between dangerous detected and dangerous undetected failures, and between safe detected and safe undetected failures, as illustrated in Figure 5.
Figure 5, Failure mode classification (IEC 61508)
Examples of dangerous detected failures are those revealed by diagnostic testing. A dangerous undetected failure, on the other hand, is a failure not revealed before a proof test or a demand. These failures are important to discover as soon as possible. A representative safe failure is a spurious trip, for example that the safety valve closes without a real demand. Note that classifying a failure according to these classes is not always straightforward and can easily be interpreted differently among users.
Random hardware failures and systematic failures
Another way of classifying the failures is to differentiate between random hardware (physical) failures due to aging and stress, and systematic (non-physical) failures due to design and interaction (adapted from IEC 61508):
• Random hardware failures: failures occurring at random times, resulting from a variety of degradation mechanisms in the hardware. Usually, only degradation mechanisms arising from conditions within the design envelope (natural conditions) are considered as random hardware failures. System failure due to such failures can be quantified with reasonable accuracy.
• Systematic failures: failures that are related in a deterministic way to a certain cause, which can only be eliminated by modification of the design, manufacturing process, operational procedures, documentation or other relevant factors. Design faults and maintenance procedure deficiencies are examples of causes that may lead to systematic failures. System failure due to systematic failures cannot easily be predicted.
It is of great interest to assess the unavailability of the safety system. Only the dangerous undetected failures form the basis for the PFD calculation. With τ as the proof test interval and F(t) the distribution function of the time T to a DU failure, the safety unavailability is expressed by the PFD as given below:

q(t) = Pr(a DU failure has occurred at, or before, time t) = Pr(T ≤ t) = F(t)

PFD = (1/τ) ∫0^τ q(t) dt = (1/τ) ∫0^τ F(t) dt
Calculation of safety unavailability is further explained in section 2.3.3, while detecting
failures through testing is treated in section 2.5 and further discussed in chapter 3 and 4.
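For a single component with a constant dangerous undetected failure rate λ_DU, F(t) = 1 − exp(−λ_DU·t), and the integral above reduces to the familiar approximation PFD ≈ λ_DU·τ/2. A small numerical sketch with illustrative values:

```python
import math

lam = 1.0e-7   # dangerous undetected failure rate, per hour (illustrative)
tau = 8760.0   # proof test interval, hours (one year)

# Exact average unavailability over one test interval:
# PFD = (1/tau) * integral_0^tau (1 - exp(-lam*t)) dt
#     = 1 - (1 - exp(-lam*tau)) / (lam*tau)
pfd_exact = 1.0 - (1.0 - math.exp(-lam * tau)) / (lam * tau)

# First-order approximation, valid when lam*tau << 1
pfd_approx = lam * tau / 2.0

print(pfd_exact, pfd_approx)
```

For λ_DU·τ on the order of 10^-3 the two agree to within a fraction of a percent, which is why the simplified formulas used later in this chapter are acceptable.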
Failure modes
The failure mode describes the various abnormal states of an equipment unit, and the
possible transition from correct to incorrect state (OREDA, 2002). The main failure modes of
the safety valve are (adapted from Lundteigen & Rausand 2007 and Rausand & Høyland,2004):
• FTC = Fail to close on demand
• LCP = Leakage (through the valve) in closed position
• ELP = External leakage in closed position
• DOP = Delayed operation
• ST = Spurious trip
• FTO = Fail to open on command

When valves are designed to stop the flow (as emergency shutdown valves), the FTC and LCP failure modes can be classified as dangerous failures, since the purpose of the valves is not fulfilled (Rausand & Høyland, 2004). The valves are designed to close within a specific amount of time; if this demand is not fulfilled, it is classified as a DOP failure mode and is a dangerous failure. It is considered an ELP failure when there is leakage to the exterior while the valve is in closed position, and this is classified as a dangerous failure.

As already mentioned, these failures can be classified both as critical and as degraded failures in OREDA, leaving it up to the user how to interpret the data. In a safety perspective, the ST and FTO failure modes do not imply danger, since for safety valves these failure modes correspond to their safe position (closed).
Common cause failures and spurious trips
In order to make a system more failure resistant, a common approach is to introduce redundancy. There are two aspects that may reduce this benefit: spurious trips and common cause failures. Redundancy for safety valves is often obtained by placing two valves in series, meaning that if one valve fails to close, the other can shut down the system instead. As it takes only one valve to close down the system, redundancy can lead to a higher spurious trip rate. For sensors this can be solved, or at least minimized, by introducing voting. Voting means that, for example, 2 out of 3 sensors must agree before an order is given to close the valves, thus removing the possibility that one single sensor can command the valve to close. A similar solution is not possible for valves.
Common cause failures (CCF) may bring down both safety valves at the same time, reducing the potential positive effect of introducing redundancy to the system. By the IEC 61508 definition a common cause failure is a "failure, which is the result of one or more events, causing coincident failures of two or more separate channels in a multiple channel system, leading to system failure". The β-factor is one way of describing common cause failures quantitatively, where the β-factor gives the fraction of common cause failures among all failures of a component:

Pr(Common cause failure | Failure) = β

This is illustrated in Figure 6. Rausand & Høyland (2004) give more details about the β-factor model and other alternative models.
Figure 6, Fractions of different types of failures for a system with two components
Goble (2003) states that three principles should be followed in order to avoid CCF:
1. Reduce the chance of a common stress – physical separation and electrical separation of redundant units.
2. Respond differently to a common stress – redundant units should use diverse technology/mechanisms.
3. Increase the strength against all failures.
A method for estimating the common cause beta factor is provided in IEC 61508-6, or the maximum values given in table D.4 in IEC 61508-6 can be used directly.
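The practical consequence of the β-factor is that it caps the risk reduction obtainable from redundancy. A sketch with illustrative figures, using the common simplified 1oo2 formula (independent double-failure term plus CCF term):

```python
lam_du = 1.0e-7   # dangerous undetected failure rate per valve, per hour (illustrative)
tau = 8760.0      # proof test interval, hours (one year)

def pfd_1oo2(beta):
    # Simplified average PFD of a 1oo2 pair: independent double-failure
    # term plus the common cause (beta) term.
    return ((1 - beta) * lam_du * tau) ** 2 / 3 + beta * lam_du * tau / 2

pfd_single = lam_du * tau / 2  # single component (1oo1)
gain_no_ccf = pfd_single / pfd_1oo2(0.0)
gain_with_ccf = pfd_single / pfd_1oo2(0.02)
print(gain_no_ccf, gain_with_ccf)
```

With β = 2 % the CCF term dominates, so doubling up the valves improves the PFD by a factor of roughly 50 in this example, rather than the three orders of magnitude suggested by the independent term alone.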
2.3.3 SIS requirements
There are several requirements in IEC 61508 related to the design of a SIS. As illustrated in Figure 7 these requirements can be classified as relating to hardware safety integrity, system behaviour on detection of a fault, and systematic safety integrity. Safety integrity is the probability that the SIS performs the required SIF under all the stated conditions within a
stated period of time (IEC 61508).
Figure 7, SIS design requirements
Hardware safety integrity

In order to avoid random hardware failures there are certain architectural constraints that limit the designer's freedom in how the hardware may be configured (Lundteigen and Rausand, 2006). Two concepts are related to the architectural constraints: safe failure fraction (SFF) and hardware fault tolerance (HFT). The SFF can be interpreted in two ways: one is as the fraction of failures considered safe versus the total failure rate, the other is as the fraction of failures not leading to a dangerous failure of the SIF (op. cit.).
SFF = (Σλ_S + Σλ_DD) / Σλ_Tot = (Σλ_S + Σλ_DD) / (Σλ_S + Σλ_DD + Σλ_DU)

λ_S = rate of safe failures
λ_DD = rate of dangerous detected failures
λ_DU = rate of dangerous undetected failures
λ_Tot = total rate of dangerous and safe failures
Subsequently, the SFF can be increased by detecting more dangerous undetected failures and classifying them as dangerous detected.

The HFT is a measure of how many of the components can be lost without losing the property of being a safety function. 1oo2 and 2oo3 architectures have an HFT of 1, while a 1oo3 system has an HFT of 2.
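The SFF definition above translates directly into code (illustrative rates only, not taken from any data source):

```python
def safe_failure_fraction(lam_s, lam_dd, lam_du):
    """SFF = (safe + dangerous detected) / total failure rate."""
    return (lam_s + lam_dd) / (lam_s + lam_dd + lam_du)

# Moving failures from the DU to the DD category (better diagnostics)
# raises the SFF while the total failure rate stays the same.
print(safe_failure_fraction(2.0e-7, 1.0e-7, 1.0e-7))   # about 0.75
print(safe_failure_fraction(2.0e-7, 1.5e-7, 0.5e-7))   # about 0.875
```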
The highest SIL that can be claimed for a safety function is limited by the HFT and the SFF of the subsystems that carry out the safety function (IEC 61508-2). A distinction is made between type A and type B subsystems. Simplified, a subsystem is regarded to be of type B when it consists of one or more components with uncertainty regarding failure data/modes or uncertainty about their behaviour in a fault mode; otherwise the subsystem is of type
A. A safety valve is normally defined as a type A subsystem (Lundteigen and Rausand, 2007).
Table 2 gives the attainable SIL rating under these constraints for type A subsystems.
Table 2, SIL for type A subsystem (IEC 61508-2)
Safe failure fraction | HFT 0 | HFT 1 | HFT 2
< 60 % | SIL 1 | SIL 2 | SIL 3
60 % to < 90 % | SIL 2 | SIL 3 | SIL 4
90 % to < 99 % | SIL 3 | SIL 4 | SIL 4
≥ 99 % | SIL 3 | SIL 4 | SIL 4
The attainable SIL rating for type B subsystems is given in Table 3.
Table 3, SIL for type B subsystem (IEC 61508-2)
Safe failure fraction | HFT 0 | HFT 1 | HFT 2
< 60 % | Not allowed | SIL 1 | SIL 2
60 % to < 90 % | SIL 1 | SIL 2 | SIL 3
90 % to < 99 % | SIL 2 | SIL 3 | SIL 4
≥ 99 % | SIL 3 | SIL 4 | SIL 4
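Tables 2 and 3 can be encoded as a small lookup (a hypothetical helper reproducing the two tables; HFT is 0, 1 or 2 and SFF a fraction in [0, 1]):

```python
def attainable_sil(subsystem_type, sff, hft):
    """Maximum SIL claimable under the architectural constraints of
    IEC 61508-2 (Tables 2 and 3). Returns 0 where the configuration
    is not allowed."""
    type_a = [[1, 2, 3],   # SFF < 60 %
              [2, 3, 4],   # 60 % to < 90 %
              [3, 4, 4],   # 90 % to < 99 %
              [3, 4, 4]]   # >= 99 %
    type_b = [[0, 1, 2],   # 0 means "not allowed"
              [1, 2, 3],
              [2, 3, 4],
              [3, 4, 4]]
    if sff < 0.60:
        row = 0
    elif sff < 0.90:
        row = 1
    elif sff < 0.99:
        row = 2
    else:
        row = 3
    table = type_a if subsystem_type == "A" else type_b
    return table[row][hft]

print(attainable_sil("A", 0.75, 1))  # -> 3
print(attainable_sil("B", 0.50, 0))  # -> 0 (not allowed)
```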
Because of the difficulties of achieving and maintaining a SIL 4 throughout the safetylifecycle, applications which require the use of a single SIF with SIL 4 should be avoided where reasonably practicable (IEC 61511-1).
Related to hardware safety integrity there are also requirements for the PFD. The calculations should take into account the system architecture, dangerous failures undetected/detected by diagnostic tests, susceptibility to common cause failures, diagnostic coverage, test intervals, repair times, etc. The calculations should be done for each sub-element, which gives the following formula for the SIS in Figure 2:
PFD_SYS = PFD_D + PFD_L + PFD_AI

where the subscripts refer to the detectors, logic solver and actuating items, and each PFD is given by PFD = (1/τ) ∫0^τ F(t) dt, as presented in the last section.
System behaviour on detection of fault
The requirements to the system behaviour on detection of fault are to specify an action to
achieve or maintain a safe state, or to assure a safe operation while repairs are carried out.
Systematic safety integrity
The systematic safety integrity is related to evidence proven in use, avoidance of failures andcontrol of such failures. Evidence of proven in use is basically adequate documentation thatthe likelihood of any failure of the subsystem in the SIS is low enough in order to achieve the
required SIL for the SIF.
The requirements for avoidance of failures embrace the measures for preventing the introduction of faults during design and development of the SIS hardware. These requirements apply only to systems which have not yet been proven in use.
The requirements for the control of systematic failures emphasize that the design processshall make the SIS tolerant against residual design faults in the hardware and software,environmental stresses and mistakes made by the operator of the equipment under control.
The maintainability and testability shall be considered already at the design and developmentphase. Annex A and B of IEC 61508-2 give techniques of how to avoid and control systematicfailures.
2.4 SIS applied in subsea XMT
An example of a SIS in subsea installations is the safety valves in the X-mas Tree (XMT). The XMT itself is regarded as a secondary well barrier, while the surface controlled subsurface safety valve (SCSSV) is regarded as one of the primary well barriers.
The XMT is placed onto a wellhead on the seabed. Basically the XMT consists of a range of valves and measurement instruments. Its functions are to be a connection point between the well and the flowlines, to provide the possibility to shut down the well in case of emergency, to guide the flow and to provide facilities to control the well. The XMT can lead the flow directly, or indirectly through a manifold, onshore/topside. Figure 8 shows a schematic of an XMT and wellhead. Note the location of the Production Master Valve (PMV) and the Production Wing Valve (PWV), as these are considered to be the most important valves to close in an emergency situation. They are typically safety gate valves. Testing of these valves is discussed in chapters 3 and 4.
Figure 8, Wellhead and XMT (OREDA, 2002)
Two common types of XMT are the conventional (dual bore) and the horizontal. One of the main advantages of choosing a conventional XMT is that the tree can be retrieved without removing the tubing hanger and the tubing (Sangesland, 2007). In a horizontal XMT the diameter of the tubing can be larger, and the tubing can be replaced without retrieving the tree. The stack-up height of the XMT including the BOP may otherwise be difficult to handle on a conventional drilling vessel. On the other hand, retrieval of the tree implies retrieval of the tubing. Illustrations of both the horizontal and the conventional tree are given in Annex A. Figure 9 shows an example of a horizontal XMT.
Figure 9, Horizontal XMT (AKS, 2007)
The gate valve is normally preferred over a ball valve as a safety valve. The ball valve requires a rotation force in addition to vertical movement. Gate valves normally have a lower internal leakage and a shorter stem travel. The gate valve is assisted by pressure in the bore cavity
pushing the stem out of the valve cavity when closing. The actuator spring is also designed to close the valve even without pressure in the system. Figure 10 shows a gate valve in closed (left) and open (right) position. Note that this is only one of many solutions.
Figure 10, Gate valve with actuator (Ring-O, 2007)
Note that for fail-safe gate valves the hole in the gate is in the upper part; thus when the valve is de-energized it will shift upwards and close. The valve will close whenever loss of electric and/or hydraulic power is detected.
2.5 Testing of the SIS' ability to perform the SIF
IEC 61508 stresses the importance of considering testability already in the design phase of a system. In order to maintain the required SIL it is necessary to perform tests to assure that the equipment is working as desired. If safety system devices are not tested, the dangerous failures reveal themselves when a process demand occurs, often resulting in the unsafe event that the safety system was designed to prevent (Summers, 2000b). Such tests are performed both before and during installation, but the testing during the production life is of great importance. Several methods and philosophies exist on this matter today.
A proof test is a "periodic test performed to detect failures in a safety-related system so that, if necessary, the system can be restored to an 'as new' condition or as close as practical to this condition" (IEC 61508, 2002). As a proof test requires production shut-down, other measures have been introduced that offer online testing: diagnostic tests and partial stroke tests. The logic solver in the SIS is often programmable, and may carry out diagnostic self-testing during operation. This is done by the logic solver sending frequent signals to the detectors and to the actuating items, and comparing the responses with predefined values (Rausand & Høyland, 2004). Since there is no explicit definition of diagnostic testing in IEC 61508, the interpretation of Velten-Philipp and Houtermans (2004) is used: "a test is a diagnostic test if it fulfils the following three criteria:
1. It is carried out automatically (…) and frequently (…).
2. The test is used to find failures that can prevent the safety function from being available.
3. The system automatically acts upon the results of the test."
Partial stroke testing (PST) is, when implemented, normally applied to valves. The test is conducted by simply stroking the valve to check that it is not stuck, thus revealing hidden dangerous failures. It is not done as frequently as diagnostic testing and, depending on the chosen system, it may not be performed automatically. Hence PST does not fulfil the criteria for being classified as a diagnostic test.
A function test is not defined by the IEC 61508/61511 standards, but for a valve it often implies a full stroke test. This can be interpreted as the function test simply confirming that the valve can close, not whether it seals. Since the standard defines proof tests as a measure for restoring the system to an as-new condition, reducing the unavailability to zero, it must be assumed that such tests embrace a wider range of test methods in order to be capable of discovering all the failure modes. It is noted that the literature seldom distinguishes clearly between proof testing and function testing.
Proof testing is only done at certain intervals since it demands full shutdown of the production. Yet, this test is important because it reveals some failure modes that cannot be detected through the diagnostic self-testing or PST. The diagnostic coverage (DC), which is the fraction of dangerous failures that are discovered by diagnostic testing relative to the total number of dangerous failures (adapted from IEC 61508), may differ depending on the system
in question and the chosen test approach. The proof test itself may, however, also be incomplete and may be considered an imperfect test. This is further elaborated in the next chapter. Figure 11 shows the relationship between the time concepts and tests. Rausand & Høyland (2004) give a thorough description of these concepts.
A - The system is taken down in order to perform a full proof test.
B - The system is already down when the full proof test is performed and reveals the dangerous undetected failure.
C - Diagnostic or partial testing reveals the failure before the scheduled proof test, thus reducing the time of the undetected dangerous failure.
Figure 11, Up and down time related to tests
A high level of diagnostic coverage has been developed for the sensors and logic, and together with redundancy this has reduced their contribution to the PFD (Metso Automation, 2002), leaving the actuating items/final elements as the greatest contributor.
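The effect of diagnostic coverage on the undetected contribution can be sketched as follows (illustrative rates; DC as defined above):

```python
def split_dangerous_rate(lam_d, dc):
    """Split a total dangerous failure rate into detected and undetected
    parts using the diagnostic coverage DC."""
    lam_dd = dc * lam_d        # revealed quickly by diagnostics
    lam_du = (1 - dc) * lam_d  # revealed only by proof test or real demand
    return lam_dd, lam_du

lam_d = 2.0e-7  # total dangerous failure rate, per hour (illustrative)
tau = 8760.0    # proof test interval, hours
for dc in (0.0, 0.6, 0.9):
    _, lam_du = split_dangerous_rate(lam_d, dc)
    # Only the undetected part accrues unavailability between proof tests
    print(dc, lam_du * tau / 2)
```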
Because of the disturbances testing imposes on the production, and the risks associated with the testing itself and with restarting after the test is finalized, it is preferred to test as seldom as possible. Hence there is a need to optimize the test intervals to maintain both the safety and the production interests, that is, to assure as high safety availability as possible without introducing additional production downtime. The causes and consequences of imperfect testing and partial stroke testing are further discussed in chapters 3 and 4.
3 Imperfect testing
In the IEC 61508 standard it is assumed that after a proof test the component is "as good as new". For the proof test to be fully effective this means that it is necessary to detect 100 % of all dangerous failures, reducing the unavailability to zero. This may not be feasible. An imperfect test situation may be defined as a situation where the test does not discover all dangerous failures and consequently component unavailability remains. It can be claimed that there are two possible classifications of an imperfect test situation:
1. The test does not cover all possible failures – inadequate test method.
2. The test does not detect all the failures – unsuccessful test.
Hence the function test, the PST and the diagnostic test can all be classified as imperfect tests, since they do not cover all failure modes, while a proof test may be imperfect due to unsuccessful testing.
Since the focus in this thesis is upon testing, it is assumed that as long as all failures are discovered they can be repaired to an "as good as new" condition. Analogous to the definition of imperfect testing in this thesis, imperfect repair can be defined as the situation where the fault is not repaired perfectly or where it is chosen not to repair the failure, as well as the lack of an adequate method for repairing the component. This can be the case when, for example, a leakage is considered minimal, or the repair of a somewhat delayed operational time (DOP) is postponed until it is more significant. Rausand & Høyland (2004) give an introduction to imperfect repair processes.
This uncertainty related to the test quality is not included in the reliability calculations, and is neither discussed much in the IEC 61508/61511 standards nor in the literature in general. IEC 61508-6 briefly mentions the effects of a non-perfect proof test in Annex A (informative only). This topic is elaborated in the following, covering both the possible causes of imperfect testing and its impact on the PFD.
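One simple way to quantify an inadequate test method is a proof test coverage factor: a fraction of the DU failures is assumed to be revealed at each proof test, while the remainder stays hidden until a complete overhaul at the end of the service life. A rough first-order sketch under these assumptions (all names and figures are illustrative, not the thesis' own model):

```python
def pfd_with_coverage(lam_du, tau, coverage, lifetime):
    """Average PFD when only `coverage` (0-1) of the DU failure rate is
    revealed by each proof test; the remaining fraction accumulates
    until the end of `lifetime`. First-order approximation."""
    covered = coverage * lam_du * tau / 2               # renewed every tau
    uncovered = (1 - coverage) * lam_du * lifetime / 2  # renewed only at overhaul
    return covered + uncovered

lam_du, tau, life = 1.0e-7, 8760.0, 20 * 8760.0
print(pfd_with_coverage(lam_du, tau, 1.0, life))  # perfect proof test
print(pfd_with_coverage(lam_du, tau, 0.9, life))  # 90 % proof test coverage
```

Even 90 % coverage roughly triples the average PFD in this example, since the uncovered failures dwell for the whole 20-year life rather than for one test interval.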
3.1 Causes for an imperfect test
The PDS method (2006) claims that there are three main contributors to the loss of safety: unavailability due to dangerous undetected failures (consisting of random hardware failures), unavailability due to systematic failures, and unavailability due to known or planned downtime.

Planned downtime is of no significance in this context, but it is of great interest to assess the reasons why random hardware failures and systematic failures are not discovered through testing. One reason why failures are not discovered could be that the instruments needed to confirm the test do not exist. Another reason could be that the company wants to avoid putting stress onto the system; thus, instead of slamming the SCSSV shut as in a real situation, the test could be modelled as a controlled closing by closing the PWV in order to
create a static well. The fishbone diagram in Figure 12 gives additional causes for imperfect
testing of safety valves subsea.
Figure 12, Causes for imperfect testing of subsea safety valves
As described in Figure 12 the reasons for imperfect testing can be related to the attributes methods, materials, machines, milieu and manpower. The attribute 'materials' covers the test equipment, 'methods' the procedures and formalities around the testing, 'machines' the subsea system itself, 'milieu' the context of the system, and 'manpower' the managers and workers conducting the tests. As illustrated, human interference is an important cause of imperfect testing.
There are no data collected for the proportion of tests that can be claimed to be imperfect. A
possible method for estimating the contribution of each of the M–factors described above isproposed in 3.2.2.
3.2 Effects of an imperfect test
An imperfect test influences the estimation of the PFD since the overall unavailability of the system will be higher than when perfect testing is assumed. Three different interpretations of this situation are described in the following. Depending on the assumptions taken, the PFD can be expected to increase by a constant quantity over the interval, to increase continually, or to decrease.

• Case A – a constant PFD addition (due to systematic failures not revealed)
• Case B – an increasing PFD addition (due to random hardware failures not revealed)
• Case C – a decreasing PFD addition

Case C depends on which approach is chosen, case A or case B, and is split into C1 and C2.
Using the simple SIS shown in Figure 2, with failure rates from OLF 070 guideline (2004) asstated in Table 4, the impact of the imperfect tests on the PFD is estimated for the three
approaches. In order to do this, the reliability block diagram is a good tool for describing thesystem. Note that the logic has a 1oo2 configuration. The sensors have 2oo3 redundancy, andthe final elements have 1oo2 redundancy.
Figure 13, Reliability Block Diagram of a simple SIS
Table 4, Data for the system test example

Component | Failure rate λ_DU, per hour (OLF 070) | Common cause failure β-factor (PDS Data, 2006) | Test interval τ, hours | PFD under perfect testing assumptions
Sensors (2oo3) | 3.0·10^-7 | 3 % | 8760 | 3.94·10^-5
Logic (1oo2) | 1.0·10^-7 | 2 % | 8760 | 9.01·10^-6
Valve (1oo2) | 1.0·10^-7 | 2 % | 8760 | 9.01·10^-6
System | | | | 5.74·10^-5
This result corresponds to a SIL 4 classification for the system. No diagnostic testing is assumed for the sensors or the logic in this example, and the time required to test and repair the items is considered negligible. This may make the result more optimistic than genuine field results.
Note that HFT and SFF requirements are not considered in the case examples; only the PFD requirement for the SIL rating is assessed.
The PFD impact of an imperfect test depends on the type of failure that remains undetected. Only the dangerous undetected failures should be included in the calculations. Using the simplified equations for the PFDs, the PFD for the system is:

PFD_SYS = PFD_2oo3 + PFD_1oo2,logic + PFD_1oo2,valves

PFD_SYS ≈ [((1−β)·λ_DU·τ)² + β·λ_DU·τ/2] + [((1−β)·λ_DU·τ)²/3 + β·λ_DU·τ/2] + [((1−β)·λ_DU·τ)²/3 + β·λ_DU·τ/2]

where β = common cause failure factor, τ = test interval and λ_DU = dangerous undetected failure rate. These simplified equations can be used when λ_DU·τ is small.
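The simplified group formulas above can be checked numerically. The following sketch (an illustration, not the thesis' original calculation tool) evaluates the 1oo2 and 2oo3 formulas with the Table 4 data; the 1oo2 groups reproduce the tabulated 9.01·10⁻⁶, and for the 2oo3 sensor group the common cause term β·λ_DU·τ/2 = 3.94·10⁻⁵ dominates the result.

```python
# Simplified average-PFD formulas for redundant groups with beta-factor CCF.
# Valid when lambda_DU * tau is small. Data from Table 4.

def pfd_1oo2(lam_du: float, beta: float, tau: float) -> float:
    """1oo2 group: independent double failure plus common cause contribution."""
    return ((1 - beta) * lam_du * tau) ** 2 / 3 + beta * lam_du * tau / 2

def pfd_2oo3(lam_du: float, beta: float, tau: float) -> float:
    """2oo3 group: independent double failure plus common cause contribution."""
    return ((1 - beta) * lam_du * tau) ** 2 + beta * lam_du * tau / 2

tau = 8760.0  # one-year proof test interval (hours)

logic = pfd_1oo2(1.0e-7, 0.02, tau)    # ~9.01e-6, as in Table 4
valves = pfd_1oo2(1.0e-7, 0.02, tau)   # ~9.01e-6, as in Table 4
sensors = pfd_2oo3(3.0e-7, 0.03, tau)  # the CCF term alone gives 3.94e-5

print(f"logic: {logic:.2e}  valves: {valves:.2e}  sensors: {sensors:.2e}")
```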
3.2.1 Case A, constant PFD addition
The PDS interpretation of imperfect testing is covered through the concept of Critical Safety Unavailability (CSU). The CSU consists of both the PFD and the contribution from (systematic) Test Independent Failures (TIF). The PDS method is in line with IEC 61508, but differs regarding the quantification of systematic failures. This contribution is not quantified in the IEC 61508 standard, as it may vary from application to application, but the PDS Handbook (2006) argues that there are several reasons why it should be included. One reason is to reflect the actual risk related to the operation; another is that failure rates are based on historic data, so the rates already include systematic failures to a certain extent. A third argument is that systematic failures may be the dominant contributor and should not be excluded; finally, the measures taken against systematic failures should be reflected in the quantitative failure rate estimate. This approach is supported by the OLF 070 guideline (2004). As illustrated in Figure 14, downtime due to testing and repair (DT_UT and DT_UR) also contributes to the safety unavailability, but these do not contribute to the CSU.
Figure 14, Contribution to unavailability (PDS method, 2006)
The definition of P_TIF is "the probability that the module/system will fail to carry out its intended function due to a (latent) systematic failure not detectable by functional testing (therefore the name 'test independent failure')". The P_TIF is assumed to be constant throughout the lifetime, and for extended testing (proof testing) of the valves a value of P_TIF = 1.0·10⁻⁵ is suggested. PDS does not give any details for this choice of value. The difficulty of detecting systematic failures is an example of an imperfect test due to inadequate test methods. This gives the following equation:
CSU = PFD + P_TIF

This situation, compared with the IEC 61508 approach, is illustrated in Figure 15.
Figure 15, Sketch of the PFD impact with PTIF addition (case A)
With failure rate λ_DU = 1.0·10⁻⁷ hours⁻¹ and P_TIF = 1.0·10⁻⁵, the unavailability of one component over a lifecycle of 20 years is illustrated in Figure 16.
[Figure 16: Unavailability versus time (hours) over the 20-year lifecycle for case A, showing the unavailability sawtooth, the average PFD, the average PFD under perfect testing, and the SIL 2 limit.]

Figure 16, Unavailability under imperfect test condition case A
For this exact example the PFD for the imperfect test situation is PFD = 4.48·10⁻⁴, while a perfect test yields PFD = 4.38·10⁻⁴; the difference of 1.0·10⁻⁵ is the P_TIF addition. This addition does not lead to a change of the SIL rating for the component, as the P_TIF is small.
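The case A arithmetic can be sketched as follows, using the single-component values above (a minimal illustration):

```python
# Case A: a constant P_TIF addition on top of the average PFD.
lam_du = 1.0e-7   # dangerous undetected failure rate (per hour)
tau = 8760.0      # proof test interval (hours)
p_tif = 1.0e-5    # test independent failure probability

pfd_avg = lam_du * tau / 2   # perfect-test average PFD: 4.38e-4
csu = pfd_avg + p_tif        # critical safety unavailability: 4.48e-4
print(f"PFD = {pfd_avg:.2e}, CSU = {csu:.2e}")
```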
Result system test example
With the data given in Table 4, the PFD of the test system is calculated below. Note that with a 1oo2 configuration for the safety valves, the TIF contribution becomes β·P_TIF. C_1oo2 is the PDS configuration factor for the valve configuration.
CSU_SYS = PFD_SYS_PDS = PFD_SYS + C_1oo2·β·P_TIF

PFD_SYS_PDS ≈ [((1−β)·λ_DU·τ)² + β·λ_DU·τ/2] + [((1−β)·λ_DU·τ)²/3 + β·λ_DU·τ/2] + [((1−β)·λ_DU·τ)²/3 + β·λ_DU·τ/2] + C_1oo2·β·P_TIF

PFD_SYS_PDS = 5.74·10⁻⁵ + 1.0·0.02·1.0·10⁻⁴ = 5.94·10⁻⁵
This result differs only marginally from the perfect testing situation, since the P_TIF addition is relatively small, and it does not lead to a change in the SIL rating.
3.2.2 Case B, Increasing PFD addition
The second interpretation of imperfect testing is that there is an increasing PFD addition over the life span of the component due to unsuccessful proof testing, meaning that the unavailability is not reduced to zero after a test is conducted. This shifts the PFD upwards, since the overall unavailability of the system will be higher than when perfect testing is assumed, as illustrated in Figure 17. Note that the unavailability starts at zero, as the initial condition of the component is assumed to be perfect.
Figure 17, Sketch of the PFD impact with imperfect test addition (case B)
As a basis for calculating the PFD impact of imperfect testing, a component is divided into a series structure where one part is "non-testable" and the remaining part "testable". In order for the component to function, both parts have to function.
Figure 18, Series structure of a component under imperfect testing
The dangerous undetected failure rate is split into two parts depending on the imperfect test fraction, here named α:

λ_DU = λ_DU-NT + λ_DU-T
λ_DU-NT = α·λ_DU
λ_DU-T = (1−α)·λ_DU

When the test interval for the testable part is τ = 8760 hours, and τ_NT = 175,200 hours corresponds to the component life span of 20 years, the PFD for the component is described by:
PFD = (1−α)·λ_DU·τ/2 + α·λ_DU·τ_NT/2
As the relation between the PFD and the test interval is linear, it is reasonable that a shorter test interval leads to a smaller PFD. Because of the practical implications of proof testing, shortening the test intervals is not a desirable solution for achieving the required SIL. For this reason the test interval is held fixed throughout the calculations.
In Figure 19 a sketch is drawn for the failure rate λ_DU = 1.0·10⁻⁷ hours⁻¹ over 20 years. It is assumed that the non-testable part is α = 20%. This means that 80% of the failure rate is set to zero after every proof test, while the remaining 20% accumulates over the whole interval. As shown in Figure 19, the non-testable part increasingly dominates the testable part of the system.
[Figure 19: Unavailability versus time (hours) over 20 years for case B, showing the unavailability sawtooth, the unavailability under perfect testing, the average PFD, the average PFD under perfect testing, and the SIL 2 limit.]

Figure 19, Unavailability under imperfect test condition case B
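The sawtooth of Figure 19 can be sketched with a simple linear approximation (an illustration under the assumptions above: the testable part is reset at every proof test, while the non-testable fraction α accumulates over the lifetime):

```python
# Point unavailability for case B: the testable part of lambda_DU is reset
# at every proof test, while the non-testable fraction alpha keeps
# accumulating over the whole lifetime (linear approximation).
def unavailability(t: float, alpha: float = 0.2, lam_du: float = 1.0e-7,
                   tau: float = 8760.0) -> float:
    t_since_test = t % tau  # time elapsed since the last proof test
    return alpha * lam_du * t + (1 - alpha) * lam_du * t_since_test

# The sawtooth peaks just before each test, and the peaks grow over 20 years:
peak_year_1 = unavailability(8760.0 - 1)
peak_year_20 = unavailability(20 * 8760.0 - 1)
print(f"{peak_year_1:.2e} -> {peak_year_20:.2e}")
```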
The equation for this situation is as follows:

PFD ≈ α·λ_DU·τ_NT/2 + (1−α)·λ_DU·τ/2

PFD ≈ 0.2·1.0·10⁻⁷·20·8760/2 + (1−0.2)·1.0·10⁻⁷·8760/2 = 2.10·10⁻³
Note that the PFD average is actually an average of averages, in order to illustrate the possible change in the SIL rating. For this exact example the PFD for the imperfect test situation is PFD = 2.10·10⁻³, while a perfect test yields PFD = 4.38·10⁻⁴, which gives a difference of 1.66·10⁻³.
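These averages follow directly from the PFD formula above; a short sketch with the example values (α = 20% non-testable):

```python
# Case B average PFD with a 20% non-testable fraction.
alpha, lam_du = 0.2, 1.0e-7
tau, tau_nt = 8760.0, 20 * 8760.0  # test interval and 20-year lifetime (hours)

pfd_imperfect = alpha * lam_du * tau_nt / 2 + (1 - alpha) * lam_du * tau / 2
pfd_perfect = lam_du * tau / 2
print(f"imperfect: {pfd_imperfect:.2e}  perfect: {pfd_perfect:.2e}  "
      f"diff: {pfd_imperfect - pfd_perfect:.2e}")
```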
The PFD impact of different combinations of the failure rate and the percentage non-testable is given in Table 5. The unavailability is calculated for failure rates ranging from λ_DU = 1.0·10⁻⁸ hours⁻¹ to λ_DU = 1.0·10⁻⁶ hours⁻¹, as this interval reflects the failure rate of a PMV in a subsea XMT. The non-testable part ranges from 10% to 90% in the calculations. For convenience it is assumed that the same failures remain undetected during
the whole lifetime. To illustrate the PFD development, the accumulated PFD is shown for years 1, 10 and 20.
Table 5, Unavailability at time t of a single component under imperfect test conditions

| % imperfect test | Year | λ_DU = 1.0·10⁻⁸ hours⁻¹ | λ_DU = 1.0·10⁻⁷ hours⁻¹ | λ_DU = 1.0·10⁻⁶ hours⁻¹ |
|------------------|------|--------------------------|--------------------------|--------------------------|
| 0 %              | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 8.76·10⁻³                |
| 0 %              | 10   | 8.70·10⁻⁵                | 8.70·10⁻⁴                | 8.66·10⁻³                |
| 0 %              | 20   | 8.70·10⁻⁵                | 8.70·10⁻⁴                | 8.66·10⁻³                |
| 10 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 9.70·10⁻⁴                |
| 10 %             | 10   | 1.66·10⁻⁴                | 1.66·10⁻³                | 1.66·10⁻²                |
| 10 %             | 20   | 2.54·10⁻⁴                | 2.54·10⁻³                | 2.53·10⁻²                |
| 20 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 1.84·10⁻³                |
| 20 %             | 10   | 2.46·10⁻⁴                | 2.46·10⁻³                | 2.44·10⁻²                |
| 20 %             | 20   | 4.22·10⁻⁴                | 4.21·10⁻³                | 4.15·10⁻²                |
| 30 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 2.70·10⁻³                |
| 30 %             | 10   | 3.25·10⁻⁴                | 3.25·10⁻³                | 3.21·10⁻²                |
| 30 %             | 20   | 5.89·10⁻⁴                | 5.88·10⁻³                | 5.75·10⁻²                |
| 40 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 3.57·10⁻³                |
| 40 %             | 10   | 4.05·10⁻⁴                | 4.04·10⁻³                | 3.98·10⁻²                |
| 40 %             | 20   | 7.56·10⁻⁴                | 7.54·10⁻³                | 7.32·10⁻²                |
| 50 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 4.44·10⁻³                |
| 50 %             | 10   | 4.84·10⁻⁴                | 4.83·10⁻³                | 4.74·10⁻²                |
| 50 %             | 20   | 9.24·10⁻⁴                | 9.20·10⁻³                | 8.86·10⁻²                |
| 60 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 8.78·10⁻³                |
| 60 %             | 10   | 5.63·10⁻⁴                | 5.62·10⁻³                | 5.50·10⁻²                |
| 60 %             | 20   | 1.09·10⁻³                | 1.09·10⁻²                | 1.04·10⁻¹                |
| 70 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 8.78·10⁻³                |
| 70 %             | 10   | 6.43·10⁻⁴                | 6.41·10⁻³                | 6.24·10⁻²                |
| 70 %             | 20   | 1.26·10⁻³                | 1.25·10⁻²                | 1.19·10⁻¹                |
| 80 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 8.77·10⁻³                |
| 80 %             | 10   | 7.22·10⁻⁴                | 7.20·10⁻³                | 6.98·10⁻²                |
| 80 %             | 20   | 1.43·10⁻³                | 1.42·10⁻²                | 1.33·10⁻¹                |
| 90 %             | 1    | 8.80·10⁻⁵                | 8.80·10⁻⁴                | 8.77·10⁻³                |
| 90 %             | 10   | 8.01·10⁻⁴                | 7.99·10⁻³                | 7.71·10⁻²                |
| 90 %             | 20   | 1.59·10⁻³                | 1.58·10⁻²                | 1.47·10⁻¹                |

(The table entries are the unavailability at time t with imperfect testing, given per failure rate of the component.)
In Table 6 the average differences between the perfect and the imperfect test situations are given. The imperfect test situation yields higher average PFDs than perfect testing.
8/18/2019 IMPERFECT & Partial Valve Stroke testing impact on SIF
49/84
Partial and imperfect testing of SIF Imperfect testing
49
Table 6, PFD average differences between perfect and imperfect tests

| % Non-testable              | λ_DU = 1.0·10⁻⁸ hours⁻¹ | λ_DU = 1.0·10⁻⁷ hours⁻¹ | λ_DU = 1.0·10⁻⁶ hours⁻¹ |
|-----------------------------|--------------------------|--------------------------|--------------------------|
| 10                          | 8.32·10⁻⁵                | 8.32·10⁻⁴                | 8.32·10⁻³                |
| 20                          | 1.66·10⁻⁴                | 1.66·10⁻³                | 1.66·10⁻²                |
| 30                          | 2.50·10⁻⁴                | 2.50·10⁻³                | 2.50·10⁻²                |
| 40                          | 3.33·10⁻⁴                | 3.33·10⁻³                | 3.33·10⁻²                |
| 50                          | 4.16·10⁻⁴                | 4.16·10⁻³                | 4.16·10⁻²                |
| 60                          | 4.99·10⁻⁴                | 4.99·10⁻³                | 4.99·10⁻²                |
| 70                          | 5.83·10⁻⁴                | 5.83·10⁻³                | 5.83·10⁻²                |
| 80                          | 6.66·10⁻⁴                | 6.66·10⁻³                | 6.66·10⁻²                |
| 90                          | 7.49·10⁻⁴                | 7.49·10⁻³                | 7.49·10⁻²                |
| PFD average difference (PFD_diff) | 4.16·10⁻⁴          | 4.16·10⁻³                | 4.16·10⁻²                |
As illustrated in Figure 20, the impact grows as the failure rate gets higher. For a component with a failure rate of λ_DU = 1.0·10⁻⁶ hours⁻¹, a high percentage of non-testable failures could potentially lead to a change of the SIL rating, as the result tends towards the outer limit of the classification. The client often requires the SIL to be in the midpoint of the range.
[Figure 20: PFD_diff versus percentage non-testable (0–90%) for the failure rates 10⁻⁶, 10⁻⁷ and 10⁻⁸ hours⁻¹.]

Figure 20, Unavailability for different failure rates under imperfect testing
Based on the PFD average differences given in Table 6, special care should be taken in cases with high failure rates for the valves, while for the lower failure rates the impact is not considered critical if the SIL requirement is low. If imperfect tests are to be included in the calculations, Table 7 indicates when this topic should be given attention.
Table 7, Matrix for SIL rating sensitivity due to imperfect testing

| SIL | PFD_diff = 4.16·10⁻⁴ (λ_DU = 1.0·10⁻⁸ hours⁻¹) | PFD_diff = 4.16·10⁻³ (λ_DU = 1.0·10⁻⁷ hours⁻¹) | PFD_diff = 4.16·10⁻² (λ_DU = 1.0·10⁻⁶ hours⁻¹) |
|-----|--------|--------|--------|
| 4   |        |        |        |
| 3   |        |        |        |
| 2   |        |        |        |
| 1   |        |        |        |

Each cell of the matrix is colour-coded in the original:

Red – the inclusion of the imperfect test addition leads to a change of SIL rating
Yellow – depending on the exact PFD calculation, the inclusion of the imperfect test addition could lead to a change of SIL rating
Green – the inclusion of the imperfect test addition will not have an impact on the SIL rating
As discussed at the beginning of this chapter, it may be hard to assess the exact percentage that remains untested after a proof test; using the imperfect test additions proposed in Table 7 therefore ensures that conservative estimates are made.
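The sensitivity logic behind Table 7 can be sketched as follows. The SIL bands are the IEC 61508 low-demand PFD ranges, and the example PFD values are hypothetical single-component numbers chosen to illustrate both outcomes:

```python
# Does adding the imperfect-test PFD difference change the SIL rating?
def sil(pfd: float) -> int:
    """SIL from average PFD, low-demand mode (IEC 61508 bands)."""
    if 1e-5 <= pfd < 1e-4:
        return 4
    if 1e-4 <= pfd < 1e-3:
        return 3
    if 1e-3 <= pfd < 1e-2:
        return 2
    if 1e-2 <= pfd < 1e-1:
        return 1
    return 0  # outside the defined low-demand bands

# lambda_DU = 1e-7: the PFD stays within the SIL 3 band after the addition
print(sil(4.38e-4), "->", sil(4.38e-4 + 4.16e-4))
# lambda_DU = 1e-6: the addition pushes the PFD from SIL 2 into the SIL 1 band
print(sil(4.38e-3), "->", sil(4.38e-3 + 4.16e-2))
```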
Note that these imperfect test additions apply to one component only; for other architectures they need to be modified. For the system in question, where the valves have a 1oo2 configuration, the PFD_diff should be modified analogously to the PDS method (2006):

PFD_diff,1oo2 = β·PFD_diff
However, it has been argued (Bak, 2007) that constant factors as such are not motivating from a safety designer's point of view, since improvements in the design will then never lead to a quantitative change. A simple solution is to perform a weighting of each of the M-factors described in Section 3.1 and then adjust the imperfect test additions in Table 7 for each specific case.
Fig