

Annals of Nuclear Energy 45 (2012) 46–58

Validation test case generation based on safety analysis ontology

Chin-Feng Fan a,*, Wen-Shing Wang a,b

a Dept. of Computer Engineering and Science, Yuan-Ze U., Taiwan, ROC
b Nuclear Engineering Division, Institute of Nuclear Energy Research, Taiwan, ROC

Article info

Article history:
Received 3 August 2011
Received in revised form 30 January 2012
Accepted 5 February 2012
Available online 22 March 2012

Keywords:
Validation test
Domain-specific ontology
Automatic test case generation
Safety Analysis Report (SAR)

0306-4549/$ - see front matter © 2012 Elsevier Ltd. All rights reserved.
doi:10.1016/j.anucene.2012.02.001

* Corresponding author. Address: Dept. of Computer Engineering and Science, Yuan-Ze U., 135 Far East Road, Chung-Li 320, Taiwan, ROC. Tel.: +886 3 4638800x2360; fax: +886 3 4638850.
E-mail addresses: [email protected] (C.-F. Fan), [email protected] (W.-S. Wang).

Abstract

Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria, namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Validation tests are important for ensuring that a system meets the original user needs; thus, validation test cases should reflect user expectations in the working environment. However, the validation tests currently used in many application domains are designed according to requirement specifications or system models, which mainly represent the engineer perspective instead of the user perspective.

This situation can produce hazardous results in safety-critical domains. Examples include the failures of Ariane 5 (Dowson, 1997; Jézéquel and Meyer, 1997; Lions, 1996) and Patriot missiles during the Gulf War (Falatko, 1991).

The root cause of the Ariane 5 explosion (Dowson, 1997; Jézéquel and Meyer, 1997; Lions, 1996) can be traced to a piece of unneeded software code, whose operation was prolonged by the user for 50 s to avoid a hold in countdown. This prolonged usage triggered an exception message, which was later misinterpreted by the on-board computer as adjustment data and thus led to the crash. On the other hand, the root cause of the Patriot missile failure (Falatko, 1991), which killed 28 soldiers during the Gulf War, was a problem in the non-terminating binary expansion of a decimal value (1/10) for time. The US Army had been advised to reset the system within 8 h, whereas on that day the missile battery had been operated continuously for over 100 consecutive hours. Both accidents could have been avoided if validation tests had tested the actual user needs and habits.

To avoid such failures in safety-critical domains, system validation tests should be based directly on the original user needs. In the nuclear power industry, the user safety needs are addressed in a Safety Analysis Report (SAR). Current digital Instrumentation and Control (I&C) regulations (BTP 14) prescribe that the testing process for a digital I&C system should include unit, integration, and validation phases (NRC, 2007). However, the US Nuclear Regulatory Commission (NRC) has only issued a regulatory guide (RG 1.171) for the unit testing phase (NRC, 1997). No regulatory guides are available for the integration and validation testing processes. Since validation tests in the nuclear industry are typically derived from accident analysis results in an ad hoc fashion, test coverage is uncertain and often inadequate. This study presents a technique for automatically generating validation test cases according to the original safety needs of users, which are indicated in a SAR. After a domain-specific ontology for the SAR is designed, its associated test coverage criteria are then proposed. After the ontology is used to mark up a Safety Analysis Report, relevant information is extracted automatically to generate system validation test cases that meet the coverage criteria. A computerized toolset was also developed to automate the mark-up and test generation processes.

Thus, the contributions of this research are twofold:

(1) Compared to the validation tests derived from requirement specifications or from system models typically used in many application domains, the SAR-based validation tests better meet user needs.


(2) The proposed ontology-based test case generation approach produces better test coverage compared to the ad hoc approach currently used in the nuclear power domain.

2. Background and related work

2.1. Ontology

The term ontology originates from philosophy. An ontology (Hofweber, 2004) is a formal representation of a set of concepts in a domain and the relations among these concepts. The widely varying applications of ontology include artificial intelligence, knowledge engineering, and software engineering. The ontology components usually include individuals (instances), classes (concepts), attributes, and relations. An ontology models the structure of shared knowledge in a domain. For example, Fig. 1 shows a simple ontology with classes including "Car", "Sedan", "Sports Car", "Door", and "Wheel", along with the relationships "is-a" and "has-a". Representation languages and authoring tools for ontologies are now available (Protégé, 2011).

2.2. EXtensible Markup Language (XML)

EXtensible Markup Language (XML), produced by the World Wide Web Consortium (W3C), is a markup language for encoding documents in machine-readable form (W3C, 2008). XML is a simplified version of the Standard Generalized Markup Language (SGML) (Goldfarb, 1990). XML allows users to design their own tags for data exchange. A DTD (Document Type Definition) can be defined to show the legal building blocks of an XML document in a grammatical format. A DTD consists of elements, attributes, entities, and comments. For example, Fig. 2 shows a DTD for an e-mail, in which the fields are "from", "date", "to", "subject", and "body", in that order. Each part is declared as PCDATA (Parsed Character DATA). Regular expression (Hopcroft et al., 2000) notations are used in the DTD. For instance, in Fig. 2 the plus sign (+) indicates the presence of one or more of the preceding elements; the question mark (?) indicates that one or none of the preceding elements is present.

Several domain-specific markup languages are available, such as Chemical Markup Language (CML) (Murray-Rust and Rzepa, 1995), Mathematical Markup Language (Ion and Miner, 1999), and our Safety Markup Language (SML) (Fan and Yih, 1999). This research designed a novel Safety Analysis Markup Language (SAML) for a SAR.

[Fig. 1. Ontology example: classes "Car", "Sedan", "Sports Car", "Door", and "Wheel" linked by "is-a" and "has-a" relations.]
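For readers who prefer a concrete rendering, the Fig. 1 example can be written down directly as data. The short Python sketch below is purely illustrative (it is not part of the original paper) and uses only the class and relation names shown in Fig. 1.

# A minimal sketch of the Fig. 1 ontology: classes plus "is-a"/"has-a"
# relations encoded as (subject, relation, object) triples.
CLASSES = {"Car", "Sedan", "Sports Car", "Door", "Wheel"}
RELATIONS = [
    ("Sedan", "is-a", "Car"),
    ("Sports Car", "is-a", "Car"),
    ("Car", "has-a", "Door"),
    ("Car", "has-a", "Wheel"),
]

def subclasses_of(cls):
    # Classes directly related to cls by "is-a".
    return {s for (s, rel, o) in RELATIONS if rel == "is-a" and o == cls}

print(subclasses_of("Car"))  # {'Sedan', 'Sports Car'}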

<!ELEMENT email (from,date,to+,subject,body?)>
<!ELEMENT from (#PCDATA)>
<!ELEMENT date (#PCDATA)>
<!ELEMENT to (#PCDATA)>
<!ELEMENT subject (#PCDATA)>
<!ELEMENT body (#PCDATA)>

Fig. 2. A DTD example.

2.3. Automatic test case generation

This section describes test case generation for software in general.

Current techniques for automatically generating test cases are generally model-based. For example, test case generation based on UML diagrams (e.g., state diagrams, sequence diagrams, and collaboration diagrams) has received significant attention (Offutt and Abdurazik, 2000; Prasanna et al., 2005). Researchers are also studying specification-based test case generation, such as that based on formal specifications and state machines (Offutt et al., 2003).

Most of the ontology-based test case generation methods reported in the literature use test knowledge ontology instead of domain-specific ontology. One example is the method developed by Nasser (2009), which used state transition diagrams and generated tests based on a predefined test knowledge ontology.

Studies of test case generation based on domain-specific ontology are relatively rare. In Bai et al. (2008), for example, a web-based hotel search service defined a hotel search ontology as well as a test knowledge ontology to generate test cases. Their input data were derived from databases, whereas the input data in this study were derived from user documents.

Compared to current ad hoc approaches to validation test case generation in the nuclear domain, the novelty of our approach lies in systematically deriving test cases from user needs; moreover, the generated tests satisfy the proposed test coverage criteria.

2.4. Safety analysis reports

Safety Analysis Reports include the Preliminary Safety Analysis Report (PSAR) and the Final Safety Analysis Report (FSAR), which are needed for licensing review in the nuclear domain. A safety analysis report states the safety commitment made by the power company before system requirement specifications are established. Thus, a SAR addresses the original safety needs of the user.

A power company must submit a PSAR to regulatory authorities to obtain a construction permit for a new nuclear power plant. A PSAR must conform to the safety regulations of NUREG-0800 (NRC, 2007) and RG 1.70 (NRC, 1978). This study uses Chapter 15 of the PSAR for the Taiwan Lung-Men power plant (Taiwan Power Company, 2005) as a case study. Chapter 15 analyzes whether the behavior of the control system and safety system conforms to regulatory requirements when Postulated Initial Events (PIEs) occur. Eight categories of safety events are considered in this case study; examples of these events include decrease in reactor coolant temperature, increase in reactor pressure, and decrease in reactor coolant system flow rate.

2.5. Current validation test practices in the nuclear industry

Current nuclear digital I&C regulations have no regulatory guides for integration and validation test processes. The validation tests currently performed in the nuclear industry are derived from accident analysis results. However, such approaches are generally ad hoc rather than systematic. Although details of these approaches are rarely released to the public, the practices observed in the KK6/KK7 power plant (Fukumoto et al., 1998) in Japan are publicly available and can be summarized as follows.

The validation testing phase for the KK6/KK7 power plant evaluated results for 665 RPS (Reactor Protection System) test scenarios and 232 ESF (Engineered Safety Features) test scenarios, which covered the design-basis transients and the transients experienced in existing plants. For each transient, 10 tests were performed, which resulted in 8970 test cases. Test data are generally randomly selected. The random input test analyzes the response to different combinations of random test signals sent to each of four divisions. These random tests were performed optionally as time permitted in the V&V test schedule.


A total of 5240 tests were carried out as random input tests. It took 20 days to perform 14,210 (8970 + 5240) tests for the dynamic transient tests and the optional random input tests (Fukumoto et al., 1998).

These facts reveal that the validation testing relied more on random testing than on a systematic approach. This study proposes an ontology-based approach for defining test coverage criteria and for systematically generating validation test cases based on a SAR.

3. Proposed approach

We propose the following six steps to systematically generate validation test cases from a SAR:

(1) Develop a domain-specific ontology: First, a domain-specific ontology based on relevant regulations must be designed for explicit representation of the essential concepts and relations in the examined domain. In our case, the SAR-related regulations are Chapter 15 of NUREG-0800 (Standard Review Plan; SRP) and Regulatory Guide (RG) 1.70.

(2) Define ontology-based test coverage criteria: Test coverage criteria for validation tests should be defined according to the domain ontology. We propose four types of test coverage criteria based on the SAR ontology: single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage.

(3) Design an XML markup language: A markup language in XML, the Safety Analysis Markup Language (SAML), representing the above ontology is then designed.

(4) Mark up user needs: A case study (an inputted SAR) is marked up using SAML tags, attributes, and links.

(5) Extract data from the marked-up document: Numerical data and related information are then extracted from the marked-up document. These data can be stored in tables along with a knowledge table for domain-specific figures, if needed.

(6) Generate test cases: Validation test cases are then generated using the extracted data.

These steps and their associated I/O data are shown in Fig. 3.

[Fig. 3. Proposed steps and associated I/O data: (1) develop domain ontology (input: NUREG-0800, RG 1.70); (2) define test coverage criteria (single parameter, use case, abnormal condition, scenario test); (3) design markup language SAML (XML tags, tag attributes, tag relations); (4) mark up a SAR (the marked-up PSAR of a case study); (5) extract data (predefined knowledge table; tag contents, attribute values); (6) generate test cases (validation test cases).]

4. A case study using the proposed method

This section describes the details of the proposed approach, using Chapter 15 (Accident Analysis) of the Lungmen PSAR (Taiwan Power Company, 2005) as a case study. Moreover, a computerized toolset written in VB has been constructed to aid implementation of the ontology and to perform test case generation. The procedural steps are presented sequentially.

4.1. Develop domain ontology

The SAR-related regulations in the nuclear domain include Chapter 15 of NUREG-0800 (Standard Review Plan; SRP) and Regulatory Guide (RG) 1.70. A SAR describes the safety system reaction, potential scenarios, and possible impacts when the system encounters a Postulated Initial Event (PIE). RG 1.70 specifies that a SAR should include the following data:

(1) Identification of causes and frequency classification.
(2) Sequence of events and systems operation.
(3) Core and system performance.
(4) Barrier performance.
(5) Radiological consequences.

Therefore, we propose organizing these contents according to the life cycle of an accident scenario along with its safety analysis. Specifically, the stages of the safety analysis in the proposed model are initial, abnormal, transient, evaluation, and influence (Fig. 4). The model indicates that safety analysis for a PIE includes, in sequential order, the initial system state, the abnormal event occurrence, the transient state sequence including the safety system reaction and possible scenarios, the evaluation of barrier and radiological consequences, and an analysis of the event's influence on the power plant. These five stages comprise the major contents described in a SAR. Detailed information for each stage is proposed as follows (note that terms in italics denote major concepts, and terms in angle brackets indicate XML tags):

• Initial stage:
  (a) Event preconditions or assumptions.
  (b) Input parameters and initial conditions (<IPandIC>).
• Abnormal stage for a PIE:
  (a) Identification of causes.
  (b) Frequency classification.
• Transient analysis stage:
  (a) Sequence of events.
  (b) System operation.
  (c) Operator action.
  (d) Result.
• Evaluation stage:
  (a) Barrier performance.
  (b) Radiological consequences.
• Event influence stage:
  Event influences on the power plant.

These contents form the top two levels of the proposed SAR ontology depicted in Fig. 5. Concepts in the ontology will be represented in XML. The information from the initial and abnormal stages is later used as test case inputs, while the information from the transient, evaluation, and influence stages is used as test case outputs. Further details under each stage are elaborated and shown in Fig. 5.

[Fig. 4. Proposed model for the safety analysis sequence in a SAR: initial → abnormal (event occurrence) → transient analysis (event progress) → evaluation (event evaluation) → event influence (impact analysis).]

[Fig. 5. Major structures of the proposed ontology: an event comprises the initial (EventAssumption, IPandIC), abnormal (IdentificationOfCause, FrequencyClassification), transient analysis (SequenceOfEvent, SystemOperation, OperatorAction, Result), evaluation, and event influence stages; lower-level concepts include System, Function, PlantPara, Process, Input, Output, Failure mode, EventTime, and EventInputPara, with attributes such as Upperbound, Lowerbound, Inputtype, Outputtype, Precision, MeasureUnit, Legal Range, Illegal range, Low Limit, Upper Limit, unit, index, Description, and prob.]
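Purely as an illustration (the paper itself represents the ontology with XML DTDs and a VB toolset; the Python rendering below is ours, and the concept name EventInfluenceOnPlant is an assumed label), the top two levels can be written as a nested structure, with the input/output split stated above made explicit:

# Five-stage SAR ontology (top two levels), following Section 4.1 and Fig. 5.
SAR_ONTOLOGY = {
    "initial":            ["EventAssumption", "IPandIC"],
    "abnormal":           ["IdentificationOfCause", "FrequencyClassification"],
    "transient analysis": ["SequenceOfEvent", "SystemOperation",
                           "OperatorAction", "Result"],
    "evaluation":         ["BarrierPerformance", "RadiologicalConsequence"],
    "event influence":    ["EventInfluenceOnPlant"],  # assumed label
}

# Information from the first two stages supplies test case inputs;
# information from the last three supplies expected outputs.
TEST_INPUT_STAGES = ("initial", "abnormal")
TEST_OUTPUT_STAGES = ("transient analysis", "evaluation", "event influence")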

4.2. Define test coverage criteria

The above ontology indicates that a SAR should describe how the safety system reacts when a PIE occurs. Normally, sample scenarios for major PIEs are described in a SAR. These sample scenarios, called standard "use cases" in this paper, should be thoroughly tested. The variations of these scenarios, as well as the parameters mentioned in the scenarios, should also be tested. The term "parameters" refers to devices and process variables participating in transient or accident events described in the SAR. For example, pump state and power level are plant parameters. We propose the following ontology-based test coverage criteria:

(1) Single parameter coverage criterion: This criterion requires that each plant parameter in the described scenario is tested to ensure that the parameter functions normally.

(2) Use case coverage criterion: Each transient or accident event described in the SAR is evaluated in a sequence of steps from initialization to stabilization along with the safety system responses. The use case coverage criterion ensures that each described transient or accident sequence is tested.

(3) Abnormal condition coverage criterion: Hypothetical software and hardware failures (i.e., abnormal conditions) in the above use cases should also be tested. This criterion ensures that abnormal conditions that might occur in the described sequences are tested.

(4) Scenario coverage criterion: Non-standard use cases can be generated by varying selected parameter values in the standard use cases. Thus, a large number of scenarios that differ from the events described in the SAR can be introduced systematically. This criterion ensures that these variant scenarios are generated and tested using techniques such as equivalence partitioning (Desikan and Ramesh, 2006), boundary checking, and heuristics.

These test coverage criteria facilitate systematic and goal-oriented test case generation.
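To make the criteria operational, one can record, for each generated test case, which obligation it discharges, and then check a suite against the four criteria. The sketch below is ours (the paper's toolset is written in VB); all type and field names are illustrative assumptions, not the authors' API.

from dataclasses import dataclass

@dataclass
class TestCase:
    kind: str    # "single", "use_case", "abnormal", or "scenario"
    target: str  # a plant parameter, a PIE sequence, or an injected failure

def coverage_report(suite, parameters, use_cases, failures):
    # A criterion is met when every obligation of its kind is exercised;
    # scenario coverage additionally requires variant combinations to exist.
    covered = lambda kind: {t.target for t in suite if t.kind == kind}
    return {
        "single parameter": set(parameters) <= covered("single"),
        "use case": set(use_cases) <= covered("use_case"),
        "abnormal condition": set(failures) <= covered("abnormal"),
        "scenario": len(covered("scenario")) > 0,
    }

suite = [TestCase("single", "power level"), TestCase("use_case", "LFWH")]
print(coverage_report(suite, ["power level"], ["LFWH"], ["controller failure"]))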

Page 5: Validation test case generation based on safety analysis ontology

50 C.-F. Fan, W.-S. Wang / Annals of Nuclear Energy 45 (2012) 46–58

4.3. Designing a Safety Analysis Markup Language (SAML)

Ontology must be explicitly represented for practical use. Since the proposed SAR ontology was applied to mark up a document, an XML markup language, the Safety Analysis Markup Language (SAML), was designed to represent the above SAR ontology. Fig. 6 shows the top-level SAML structure containing five DTDs, which represent concepts at each of the five stages, namely, the initial, abnormal, transient, evaluation, and influence stages. Note that these five stages, which may appear more than once and in any order, are denoted by "OR" and a Kleene closure (*) (Hopcroft et al., 2000) in the XML DTD. Figs. 7–9 show samples of the contents of the DTDs for the first three stages. The concepts represented by the following XML tags are used in the subsequent test case generation:

<IPandIC>: Input Parameters and Initial Conditions, which may involve device states or process variables, such as pump state and design pressure.
<System>: normal and abnormal system behavior, consisting of sub-concepts such as system function (<function>) and plant parameter (<PlantPara>).
<input> or <output>: the input or output of a system function, which includes attributes such as "types", "ranges", "illegal ranges", and "values".
<PlantPara>: plant parameters involving device states or process variables, such as power level and coolant inventory. Attributes include "low limit", "upper limit", and "unit".
<EventInputPara>: input parameters to a transient event. The attributes of these parameters include "identifier", "value", and "unit".

<?xml version="1.0" ?>
<!ELEMENT AccidentAnalysis (Event+)>
<!ELEMENT Event (Initial | Abnormal | TransientAnalysis | Evaluation | EventInfluence | #PCDATA)*>
<!ENTITY %Initial SYSTEM "Initial.dtd">
<!ENTITY %Abnormal SYSTEM "Abnormal.dtd">
....
<!ATTLIST Event name CDATA #IMPLIED>

Fig. 6. Top-level structure of the SAML.

<?xml version="1.0" ?>
<!-- Initial.dtd -->
<!ELEMENT Initial (IPandIC | EventAssume | #PCDATA)*>
<!ELEMENT IPandIC (ReactorVesselCoolantLevel | PressurizerCoolantLevel | FeedwaterFlowRate | DesignPressure | ... | #PCDATA)*>
<!ELEMENT FeedwaterFlowRate (#PCDATA)>
.......

Fig. 7. DTD for the initial stage.

[Fig. 8. DTD for the abnormal stage.]


<?xml version="1.0" ?> <!-- TransientAnalysis.dtd --> <!ELEMENT TransientAnalysis (SequenceofEvent | SystemOperation | OperatorAction | Result |

#PCDATA)*> <!ELEMENT SequenceofEvent (ControlRodInsertion | SafetyValveSetPoint | ….| EventTime |

EventTransient | EventInputPara | #PCDATA)> <!ELEMENT EventInputPara (#PCDATA)>

<!ATTLIST EventInputPara id CDATA #IMPLIED> value CDATA #IMPLIED> unit CDATA #IMPLIED>

………..

Fig. 9. DTD for the transient stage.

[Fig. 10. Our toolset.]

[Fig. 11. A markup language specification tool.]


A toolset for the proposed SAR-based test case generation has been constructed. The toolset has two major parts:

(1) Tag design and markup tools.


(2) Test case generation tools.

Fig. 10 shows the main screen of this toolset. The tag design tool adds, deletes, and saves tags in the markup language (Fig. 11).

[Fig. 12. Part of the marked-up SAR.]

Table 1. Part of the marked-up scenario table in the SAR.

Time (s): <EventTime id="Runout of One Feedwater Pump" index="1">0</EventTime>
Events: <EventTransient id="Runout of One Feedwater Pump" index="1">Initiate simulated runout of one feedwater pump (at system design <EventInputPara id="Runout of One Feedwater Pump" value="7.35" unit="MPaG">pressure</EventInputPara> of 7.35 MPaG the pump <EventInputPara id="Runout of One Feedwater Pump" value="75" unit="%">runout flow</EventInputPara> is 75% of rated feedwater flow.)</EventTransient>

Time (s): <EventTime id="Runout of One Feedwater Pump" index="2">~0.1</EventTime>
Events: <EventTransient id="Runout of One Feedwater Pump" index="2">Feedwater controller starts to reduce the feedwater flow from the other feedwater pump</EventTransient> ...

[Fig. 13. A tagging tool.]

4.4. Mark up user needs

A SAR should then be marked up using the above domain-specific tags and attributes so that relevant data can be extracted for further use in generating test cases. Our case study is Chapter 15 of the Preliminary Safety Analysis Report (PSAR) for the Lung-Men power plant (Taiwan Power Company, 2005). Two of the eight major events mentioned in this PSAR were analyzed: Loss of Feedwater Heating (LFWH) and Feedwater Controller Failure/Maximum Demand (FWCF/MD). Fig. 12 and Table 1 show parts of the marked-up contents. Fig. 12 shows that the tag <PlantPara> is used along with attributes for the legal ranges of the concerned parameters. The initial settings for the variables "FeedwaterFlowRate" and "DesignPressure" are also marked up using the tag <IPandIC>. Numerical figures in these marked-up portions are extracted to form test cases. Table 1 shows part of the event sequence for the "Runout of One Feedwater Pump" PIE described in the SAR. The stated event sequence will be tested as a standard use case. Use of a computerized tagging tool simplifies the tagging process. Fig. 13 shows how the tagging tool assists the markup task. Tags are shown in a tree structure, and the selected textual portion is automatically tagged with the specified tag.


Table 2. A knowledge table for plant parameters.

Plant parameter   | Legal low bound | Legal upper bound | Unit | Hazardous values (if any)
Power level       | 0               | 100               | %    |
Coolant inventory | 0.57            | 1.08              | m    |
Feed water flow   | 2122            | 2179              | kg/s |

Table 3. Data extracted from tag <IPandIC> at the initial stage.

Tag               | Tag value | Attribute | Attribute value
Design pressure   | 7.35 MPaG | From      | Lungmen
FeedwaterflowRate | 130%      | From      | Lungmen


4.5. Extract data from the marked-up document

Once the SAR is marked up, numerical and textual data can be extracted from the marked-up portions for use as test case inputs.

Table 4. Data extracted from the tag <PlantPara> under <IdentificationofCause> at the abnormal stage.

Tag       | Tag value         | Attribute | Attribute value | Attribute  | Attribute value
PlantPara | Coolant inventory | LowLimit  | 0.57            | UpperLimit | 1.08
PlantPara | Feedwater flow    | LowLimit  | 2122            | UpperLimit | 2179

[Fig. 14. Data extracted using our toolset.]

Values used in test cases are normally related to the legal or illegal ranges, values, types, and timing of the tested items. Ideally, all of these tested items and values can be extracted from the inputted SAR text. For example, values and range information can be extracted from the tags <PlantPara> and <IPandIC> for use as test case inputs. Besides, legal ranges of major devices and process variables can be stored in a knowledge table in advance (Table 2), if needed, to compensate for possible incompleteness in the tagged SAR. Tables 3 and 4 show data extracted from the marked-up document fragments. Fig. 14 shows how the computer tool facilitates data extraction.
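As a sketch of the extraction step (the actual toolset is written in VB; this Python fragment is ours, and the sample input is the <PlantPara> fragment quoted in Section 4.6.4), any XML parser suffices once the SAR is tagged:

import xml.etree.ElementTree as ET

fragment = ('<PlantPara LowLimit="0.57" UpperLimit="1.08" Unit="m">'
            'coolant inventory</PlantPara>')

elem = ET.fromstring(fragment)
row = {
    "tag value": elem.text,                       # "coolant inventory"
    "LowLimit": float(elem.get("LowLimit")),      # 0.57
    "UpperLimit": float(elem.get("UpperLimit")),  # 1.08
    "Unit": elem.get("Unit"),                     # "m"
}
print(row)  # one row of Table 4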

4.6. Generate validation test cases

Based on the test coverage criteria defined above, the following types of test cases are generated in order of increasing complexity:

(1) Single parameter tests.
(2) Use case tests.
(3) Abnormal condition tests.
(4) Scenario tests.



The test cases are explained below:

4.6.1. Single parameter tests

First, tests for a single parameter can be generated to test whether a physical device works or to confirm that a process variable is correctly indicated by its display. For example, if the feed water flow rate is set to 50% on the control station, the display should show a feed water flow rate of 50%. These simple tests are important bases for subsequent tests.

Legal values or ranges for the concerned parameters can be extracted from the marked-up SAR in the previous step. In this case, test cases can be generated for each of the parameters tagged by either the <PlantPara> tag or the <IPandIC> tag. They mark up such parameters as a pump, power level, water level, feed water temperature, flow rate, design pressure, etc. Once a parameter is selected, test cases can be generated using a specified value variation; moreover, when the legal ranges are known, test cases can be further designed using the equivalence partitioning approach (Beizer, 1990). Equivalence partitioning is a popular test case design method that identifies a small set of representative input variables yielding as many output conditions as possible, so as to increase test coverage (Desikan and Ramesh, 2006). The set of input values that generate a single expected output is called an equivalence partition. Test cases are designed for each equivalence partition. For example, an input variable may have three partitions: the part in the legal range, the part that is larger than the upper bound, and the part that is smaller than the lower bound. All parts should be tested by representative values. Moreover, the boundary values around the upper and lower bounds, which are particularly error prone, should also be checked.

[Fig. 15. Test cases with equivalence partitioning.]

[Fig. 16. Single parameter test case generation.]

In addition to equivalence partitioning and boundary testing, multiple test cases are recommended to test the legal range of the parameter. For example, "Feedwater flow rate" has the legal range 0–130%; where a 10% interval is used to test the legal portion, the test cases are −10%, 0%, 10%, 20%, 30%, ..., 130%, and 140%, as shown in Fig. 15. Fig. 16 shows how the computer tool is used to generate single parameter test cases based on a selected tag.
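The value selection just described can be stated compactly. The following sketch (ours, not the paper's VB tool) reproduces the feedwater flow rate example: one representative from each illegal partition, both boundaries, and a sweep of the legal range at the chosen interval.

def single_parameter_values(low, high, step):
    # Representative of the illegal partition below the lower bound,
    # the legal range swept at the given interval (boundaries included),
    # and a representative of the illegal partition above the upper bound.
    values = [low - step]
    v = low
    while v <= high:
        values.append(v)
        v += step
    values.append(high + step)
    return values

print(single_parameter_values(0, 130, 10))
# [-10, 0, 10, 20, ..., 120, 130, 140]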

Single-parameter tests can verify the correctness of a single device or a single process variable. These test cases act as bases for further, more complex tests because if a single parameter fails, the subsequent tests do not work.

4.6.2. Use case tests

Next, standard use cases can be tested to ensure the safety system responds as expected when a certain PIE is encountered. A SAR usually describes a sample operation sequence for major PIEs. These descriptions can be marked up and used as a template to generate use case tests. In the initial and abnormal stages, the SAR describes initial settings and the postulated initial event at time 0; at the transient stage, the SAR describes the sequences of system operations at the subsequent time units; the evaluation and event influence portions may be needed to judge the consequences of the use case.

For example, the description for the "Runout of One Feedwater Pump" event is marked up in Fig. 12 and in Table 1 above. The event description can be extracted automatically to form the use case test given in Fig. 17. The initial settings indicate that design pressure is 7.35 MPaG and pump feedwater flow is 75%. The abnormality is "runout of one pump". The controller then reduces water flow in the other pump so that the transient abnormality of water level returns to normal. In this case, no radiological consequences are observed. The advantage of using such a use case test is to verify the scenario description in the SAR.
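For illustration (the step structure and values come from Table 1 and the description above; the tabular representation itself is ours), a use case test can be replayed as a timed sequence of actions and expected responses:

# "Runout of One Feedwater Pump" as (time in s, action, expected) steps;
# the ~0.1 s controller response time follows Table 1.
use_case = [
    (0.0, "initiate simulated runout of one feedwater pump "
          "(design pressure 7.35 MPaG, runout flow 75% of rated flow)",
     "feedwater flow increases"),
    (0.1, "feedwater controller reduces the feedwater flow from the other pump",
     "water level returns to normal"),
]

for t, action, expected in use_case:
    print(f"t = {t:4.1f} s | DO: {action} | EXPECT: {expected}")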



[Fig. 17. Use case tests generated by the tool.]

Table 5. Abnormal condition test in the "runout of one feed water pump" case.

Time (s) | Test procedure | Expected output
0        | (1) Design pressure 7.35 MPaG; (2) pump 75% feedwater flow; (3) runout of the pump | Feedwater flow increase
~0.1     | (4) Feedwater controller FAILS TO reduce the feedwater flow from the other pump | The other pump feedwater flow keeps going...
16.6     | (5) Vessel water level reaches its peak value | Water level increases to overflow...



4.6.3. Abnormal condition tests

The use case tests stated above can be modified by adding hardware or software failures to test system robustness. Abnormal conditions include hardware and software errors such as device failures, sensor errors, and control system errors. This testing method resembles fault injection (Voas, 1997), in which (hardware or software) faults are injected into the system to test its robustness. For example, possible tests in the above use case include "feedwater pump actuator failure" and "feedwater controller failures". The use case test in Fig. 17 was modified by adding a controller failure, as shown in Table 5. The expected outputs of these abnormal events may be unknown until the test is performed. These abnormal test cases may compensate for potential insufficiency in the original safety analysis. These fault-injection style tests are similar to so-called "break tests" (Whittaker, 2002) because they are intended to break the system. They may reveal the unknown consequences and impacts of hardware/software failures on the system. Safety analysis analyzes postulated hazardous scenarios statically; tests of abnormal events reveal hazardous situations dynamically and examine how the system reacts.
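A sketch of the injection step itself (ours; the failure mode follows Table 5): copy a use case and replace one action with a failure, leaving the expected output open until the test is run.

# A standard use case as (time in s, action, expected) steps (cf. Table 1).
use_case = [
    (0.0, "runout of one feedwater pump", "feedwater flow increases"),
    (0.1, "controller reduces flow from the other pump",
     "water level returns to normal"),
]

def inject_failure(steps, index, failure):
    # Replace one step's action with a hardware/software failure; the
    # expected output may be unknown until the test is performed.
    modified = list(steps)
    t, _action, _expected = modified[index]
    modified[index] = (t, failure, "to be observed")
    return modified

abnormal_case = inject_failure(
    use_case, 1, "feedwater controller FAILS TO reduce flow from the other pump")
print(abnormal_case)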

[Fig. 18. Scenario test case generation.]

Table 6. Sample of scenario test cases. The first three columns are inputs (initial power level, initial coolant inventory, abnormal run out flow); the last column lists the expected outputs for the transient, evaluation, and influence stages.

Power level (%) | Coolant inventory (m) | Run out flow (%) | Expected output
−11 | 0.52 | −13 | Error, Error, Error
−11 | 0.57 | 0   | Error, Error, Error
0   | 0.57 | 0   | Integrity OK
11  | 0.57 | 13  | Integrity OK, Decrease temperature
11  | 0.62 | 13  | Integrity OK, Decrease temperature
... | ...  | ... | ...

4.6.4. Scenario tests

Use case tests are derived from the standard event sequences described in the SAR. However, non-standard events should also be tested; they can be formed by varying the values of selected parameters in the standard use case tests. Thus, numerous scenario tests can be generated automatically, as shown in Fig. 18. Basically, a large number of scenario tests is derived by combining results from such techniques as equivalence partitioning, boundary testing, and multiple tests within the legal range.

To demonstrate the proposed method, examples in our case study are described below. In this case study, the parameters marked by the tags <IPandIC>, <EventAssume>, <PlantPara>, and <EventInputPara> were selected. For example, the steps to generate various scenario tests for a "feedwater controller failure at maximum demand" event are carried out as follows:

Step 1: Select the parameters "power level", "coolant inventory", and "runout flow".

Step 2: Find the legal ranges of these parameters in the data tables, whose information is extracted from marked-up text such as:

<PlantPara LowLimit="0.57" UpperLimit="1.08" Unit="m">coolant inventory</PlantPara>

Thus, we get the following ranges:
Coolant inventory: 0.57–1.08 m
Power level: 0–110%
Run out flow rate: 0–130%

Step 3: Assume the variant x = 10% of the total range span. Thus, we get:

Power level: (110% − 0%) × 10% = 11%
Coolant inventory: (1.08 m − 0.57 m) × 10% = 0.05 m
Run out flow rate: (130% − 0%) × 10% = 13%

Boundary values and illegal values should be tested. Thus, the following variants of each parameter are generated:

Power level (13 cases): −11%, 0% (lower bound), 11%, 22%, 33%, 44%, ..., 99%, 110% (upper bound), 121%
Coolant inventory (14 cases): 0.52 m, 0.57 m (lower bound), 0.62 m, 0.67 m, 0.72 m, 0.77 m, 0.82 m, 0.87 m, 0.92 m, 0.97 m, 1.02 m, 1.07 m, 1.08 m (upper bound), 1.13 m
Run out flow rate (13 cases): −13%, 0% (lower bound), 13%, 26%, 39%, 52%, ..., 117%, 130% (upper bound), 143%

Step 4: Combine the above variant data to form scenario test case inputs.

Thus, 2366 (i.e., 13 × 14 × 13) scenario tests may be generated (Table 6). A different variant amount x (Step 3 of Fig. 18) can be chosen so as to control the number of test cases generated. Fig. 19 shows how our toolset is used for generating scenario test cases. These scenario tests achieve high test coverage efficiently because they use the equivalence partitioning approach to test both the legal and illegal portions. Moreover, they also provide comprehensive testing for the legal ranges along with testing of the boundary values, which are particularly fault-prone. Compared to random tests, which are commonly performed in the nuclear industry, the proposed scenario test method is more systematic and more comprehensive.

[Fig. 19. Scenario test cases generation.]
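The combination in Step 4 is a Cartesian product over the per-parameter variant lists; this sketch (ours, not the paper's toolset) reproduces the counts from Step 3, giving 13 × 14 × 13 = 2366 input tuples.

from itertools import product

power_level = [-11] + list(range(0, 111, 11)) + [121]    # 13 cases (%)
coolant_inventory = ([0.52] + [round(0.57 + 0.05 * k, 2) for k in range(11)]
                     + [1.08, 1.13])                     # 14 cases (m)
runout_flow = [-13] + list(range(0, 131, 13)) + [143]    # 13 cases (%)

scenario_inputs = list(product(power_level, coolant_inventory, runout_flow))
print(len(scenario_inputs))  # 2366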

5. Test coverage

The test cases generated above can be evaluated in a simulator environment. Generic plant simulators such as PCTran (MST Company, 2007) may be used for this purpose. For performing these test cases, a dedicated simulator for the plant being analyzed is preferable. We expect to use a simulator environment to verify the effectiveness of the proposed method in future work.

Because our approach is based on user needs (a SAR), the generated test cases are black box tests. Conventional test coverage criteria such as path coverage, state coverage, and transition coverage are inapplicable since system models and program code are not used.

Instead of testing normal system behavior, the generated cases are used to test transient and accident events addressed in the SAR. The proposed use case tests can be generated for all PIEs that have event sequence descriptions in the SAR. Moreover, variations of parameter values in the event sequences are also suggested above; the equivalence partitioning technique is used. Thus, test cases derived from the approach fully satisfy the proposed test coverage criteria, namely:

(1) Single parameter coverage: All single parameters mentioned in the SAR are tested individually.

(2) Use case coverage: All transient or accident sequences mentioned in the SAR are tested.

(3) Abnormal condition coverage: Potential failures of hardware and software in the above use cases are tested.

(4) Scenario coverage: All combinations of partitions of input parameters are tested using equivalence partitioning and boundary testing. Multiple tests are also generated within the legal range.

In summary, the test cases meet the above test coverage criteria. Future work should evaluate the effectiveness of these test cases in actual case studies.

6. Conclusion

This study introduces a new technique for generating validation test cases according to specific user needs. A domain-specific ontology is designed and represented by a markup language, which is used to mark up a SAR; the SAR addresses the safety needs of the user. Ontology-based test coverage criteria are also defined. Test cases that meet the coverage criteria are then generated systematically using the marked-up SAR. A computerized toolset is implemented to assist the mark-up and test case generation processes. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR. Moreover, unlike automatic test case generation in other domains, this technique is based on user needs instead of requirement specifications. The technique is text-based rather than model-based, and it tests system behavior under abnormal triggering events rather than under normal operation. Major features and their associated advantages of this approach are listed below:


• A domain-specific ontology for the Safety Analysis Report was proposed. The ontology facilitates communication between the regulator and the license applicant. The ontology can also support validation test case generation.
• Current practice in validation test case generation for nuclear digital systems is mainly ad hoc. The proposed technique systematically derives validation test cases from a SAR.
• Extracting test data from a domain-specific ontology facilitates accurate extraction of domain-specific information.
• This technique systematically generates numerous test cases with equivalence partitioning. Compared to random tests, this technique effectively covers all input partitions.
• The technique is based on user documents that are generated before requirement specifications rather than on behavior models or requirement specifications. Therefore, the test cases are a better reflection of the user perspective.
• Instead of testing system behavior under a normal condition, this technique tests the safety system behavior under an initial abnormal event. It dynamically verifies accidents described in a SAR.
• Hardware/software failures are also considered. The resulting test cases can augment the original safety analysis.
• The proposed approach to generating document-based test cases is sufficiently generic for application in different safety-critical domains.

Acknowledgments

This work was supported in part by the National Science Council of Taiwan, ROC, under Grant No. NSC100-2221-E-155-059. Dr. Swu Yih is appreciated for his valuable suggestions.

References

Bai, X., et al., 2008. Ontology-based test modeling and partition testing of web services. In: Proceedings of the IEEE International Conference on Web Services, pp. 465–472.
Beizer, B., 1990. Software Testing Techniques, second ed. Van Nostrand Reinhold.
Desikan, S., Ramesh, G., 2006. Software Testing: Principles and Practice. Pearson Education, India.
Dowson, M., 1997. The Ariane 5 software failure. Software Engineering Notes 22 (2), 84.
Falatko, F., 1991. Report Issued on Scud Missile Attack. United States Department of Defense News, 5 June.
Fan, C., Yih, S., 1999. Safety Markup Language: concept and application. In: Lecture Notes in Computer Science 1698, SafeComp '99, Toulouse, pp. 177–186.
Fukumoto, A., et al., 1998. A verification and validation method and its application to digital safety systems in ABWR nuclear power plants. Nuclear Engineering and Design 183, 117–132.
Goldfarb, C.F., 1990. The SGML Handbook. Oxford University Press.
Hofweber, T., 2004. Logic and Ontology. <http://plato.stanford.edu/entries/logic-ontology/> (retrieved 03.06.11).
Hopcroft, J., Motwani, R., Ullman, J., 2000. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.
Ion, P., Miner, R., 1999. Mathematical Markup Language, W3C Working Draft. <http://www.w3.org/pub/WWW/TR/WD-math> (retrieved 03.06.11).
Jézéquel, J., Meyer, B., 1997. Design by contract: the lessons of Ariane. IEEE Computer 30 (2), 129–130.
Lions, J., 1996. Ariane 501 Failure: Report by the Inquiry Board. European Space Agency. <http://esamultimedia.esa.int/docs/esa-x-1819eng.pdf> (retrieved 03.06.11).
MST Company, 2007. PCTran, PC-based Nuclear Power Plant Simulator. <http://www.microsimtech.com/abwr/> (retrieved 19.07.11).
Murray-Rust, P., Rzepa, H., 1995. Chemical Markup Language (CML). <http://www.xml-cml.org/> (retrieved 03.06.11).
Nasser, V.H., 2009. Ontology-based Unit Test Generation. Master's Thesis in Computer Science, University of New Brunswick.
NRC, 1978. Standard Format and Content of Safety Analysis Reports for Nuclear Power Plants (Regulatory Guide 1.70, Revision 3). <http://adamswebsearch2.nrc.gov/idmws/ViewDocByAccession.asp?AccessionNumber=ML011340122> (retrieved 19.07.11).
NRC, 1997. Software Test Documentation for Digital Computer Software Used in Safety Systems of Nuclear Power Plants (Regulatory Guide 1.171). <http://www.nrc.gov/about-nrc/regulatory/research/digital/regs-guidance.html> (retrieved 19.07.11).
NRC, 2007. NUREG-0800: Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants. <http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr0800> (retrieved 19.07.11).
Offutt, J., Abdurazik, A., 2000. Using UML collaboration diagrams for static checking and test generation. In: Proceedings of the 3rd International Conference on UML, York, UK.
Offutt, J., et al., 2003. Generating test data from state-based specifications. The Journal of Software Testing, Verification and Reliability 13 (1), 25–53.
Prasanna, M., et al., 2005. A survey on automatic test case generation. Academic Open Internet Journal 15, 89–90.
Protégé, 2011. <http://protege.stanford.edu/> (retrieved 03.06.11).
Taiwan Power Company, 2005. Preliminary Safety Analysis Report for Lung-men of Taiwan Power Company, U10 version.
Voas, J., 1997. Fault injection for the masses. Computer 30, 129–130.
W3C, 2008. Extensible Markup Language (XML) 1.0. <http://www.w3.org/TR/REC-xml/> (retrieved 19.07.11).
Whittaker, J., 2002. How to Break Software: A Practical Guide to Testing. Addison Wesley.