
SOFTWARE TESTING, VERIFICATION AND RELIABILITY, Softw. Test. Verif. Reliab. 8, 191–211 (1998)

A Knowledge-Based Test Plan Generator for Incremental Unit and Integration Software Testing

ZORICA MIHAJLOVIĆ*1 AND DUŠAN VELAŠEVIĆ2

1 Computer Systems Design Laboratory, VINČA Institute of Nuclear Sciences, P.O. Box 522, 11001 Belgrade, Yugoslavia
2 Faculty of Electrical Engineering, University of Belgrade, Bulevar Revolucije 73, P.O. Box 816, 11001 Belgrade, Yugoslavia

SUMMARY

Unit and integration testing are two expected levels of testing for conventionally designed programs. This article presents a system for the automatic generation of test plans for incremental unit and integration testing. Starting from a global task of planning integration testing, the system generates a detailed test plan automatically for a given program on the basis of its decomposition as a software structure. The system is implemented as a frame-based system. It supports four standard ways of assembling units during integration, and two ways of testing according to parallelism. It also takes into account the units that require more testing than others. The system greatly improves the productivity of planning integration testing in comparison with the current practice of producing test plans manually. Experiments have shown a minimum productivity increase of 55 times. © 1998 John Wiley & Sons, Ltd.

KEY WORDS: software testing tools; incremental unit and integration testing; planning testing; knowledge-based systems; integration strategies; testing process models

1. INTRODUCTION

This article presents a knowledge-based System for Automatic Generation of Test Plans for incremental unit and Integration Testing (SAGTPIT). Unit and integration testing are two levels of software testing that are always performed, regardless of the kind of software that is developed through the traditional procedural programming paradigm: an experimental system or a software product (Myers, 1979). SAGTPIT improves the planning of incremental unit and integration testing ('integration testing' in the rest of the text) by automation. It increases the productivity of integration testing planning in comparison with the current practice either of producing test plans manually or of using standard project management tools that do not support automation. Experiments have shown a productivity increase ranging from 55 to 321 times. The system is a new approach to automation of the software testing process because it focuses attention upon planning integration testing. It is also a new application of knowledge-based ideas to software testing. Since SAGTPIT helps in planning integration testing, it can be considered as a software project management system particularly devoted to a special category of software project activities: integration testing activities.

* Correspondence to: Ms Zorica Mihajlović, Computer Systems Design Laboratory, VINČA Institute of Nuclear Sciences, P.O. Box 522, 11001 Belgrade, Yugoslavia. E-mail: zorica@rt270.vin.bg.ac.yu

Received 28 May 1998; Revised 28 October 1998; Accepted 18 November 1998

The importance of planning integration testing is obvious. To manage integration testing successfully, it is necessary to plan this activity carefully in advance, and produce a test plan for any particular software. The test plan serves as an essential document for performing the further steps of integration testing: test design, test implementation and test execution (Myers, 1979; Pressman, 1997; Gelperin and Hetzel, 1988). All the testing process models currently in practice, e.g. the destruction model, the evaluation model and the prevention model, stress the importance of planning (Gelperin and Hetzel, 1988). In addition, the appropriate standards for software testing describe test plans as basic test documents during software development (IEEE, 1983, 1986a, 1986b).

SAGTPIT is based upon the decomposition of a program given as a software structure. A software engineer sets a global task of planning integration testing by entering into SAGTPIT a starting activity of the integration testing type, including its required input objects and the output objects it produces. The engineer can also define other attributes of the starting activity, e.g. duration and cost. Among the input objects, the software structure is the one that serves as the relevant object for automatic test plan generation. Based upon this structure and the whole starting activity, the system decomposes the starting input and output objects and the starting activity into their components, fills the attributes of the component activities according to those given in the starting one, and defines the order of the component activities on the engineer's request. In this way, the system automatically generates a detailed test plan for the program specified in the starting activity.

Section 2 describes previous work on subjects connected with SAGTPIT. Section 3 describes the knowledge-based aspects of SAGTPIT. The basic approach to automatic test plan generation implemented in SAGTPIT is presented in Section 4. Section 5 elaborates in more detail on the way of generating a test plan automatically and presents an example. Section 6 describes an evaluation of SAGTPIT and Section 7 presents concluding remarks.

2. PREVIOUS WORK

Knowledge-based and more general artificial intelligence (AI) approaches have previously been applied to software testing. Chapman (1982) described a program testing assistant which aids a programmer in testing during incremental LISP program development. Hoffman and Strooper (1991) proposed tools and techniques for writing scripts in Prolog that automatically test modules, e.g. a stack module, implemented in C. Deason et al. (1991) presented a rule-based test data generator for ADA programs that produces a large amount of test data. Wild et al. (1992) proposed the process of refinement of test cases generated according to a particular design method and its testing criterion. Anderson et al. (1995) described the idea of using a neural network for pruning a large number of test cases produced by a special test generation tool. Howe et al. (1997) proposed an approach for automating test case generation for software with command language interfaces by using an AI planner.

Many tools exist that automate some aspects of the testing process. Some of them, like Test Inc from the University of Pittsburgh (Harrold and Soffa, 1990), are the results of research projects. Others, like the Integrated Test Tool System from Software Research (Miller, 1990), are commercial products. They can all be classified into: automatic test drivers, test case generators, program instrumentors with tracers and profilers, comparators and librarians (Myers, 1979; Fuggetta, 1993; Pressman, 1997). For example: T tool (Poston, 1990) and TESTGRAPH (Velašević et al., 1997) are test case generators, Exdiff (Miller, 1990) is a comparator, and Test Inc (Harrold and Soffa, 1990) is an example of a tool that has a librarian function.

Intelligent project management, including project planning, has been the subject of some important research. Sathi et al. (1985, 1986) presented an intelligent project management system called Callisto. Mi and Scacchi (1990) modelled and simulated software engineering processes with a knowledge-based environment called Articulator, which can also be used in software project management. Bimson and Boehm Burris (1987, 1989) described an assistant in software project definition called Watson. Watson enables a project manager to represent a project's plan as a hierarchy of task networks. It maintains the integrity of the plan's data during project definition. The idea of automatic generation of software project plans was demonstrated on documentation activities in Watson. SAGTPIT applies this idea to integration testing.

Despite the numerous differences among the aforementioned AI approaches to software testing and the tools that automate some aspects of the testing process, they all have one factor in common: they deal with test design, implementation and execution. However, none of these previous approaches considers the planning of integration testing as this article does. On the other hand, current intelligent software project management systems treat the planning of software project activities as a general task. Planning activities of some special categories, e.g. integration testing activities, has been studied very little in these systems, in contrast to the considerations here. Producing a detailed plan starting from a global task is not the same for all types of software project activities. A good example which highlights the difference is planning documentation activities (Bimson and Boehm Burris, 1989); this is substantially different from planning integration testing. As a consequence, planning must be treated separately for different types of activities. Since intelligent software project management systems do not focus on particular types of activities, they are not appropriate for automatic generation of test plans for integration testing.

Standard project management tools, like Microsoft Project, do not support automatic generation of test plans for integration testing. They cannot create a detailed test plan starting from a given global task. To use them, a software engineer must first prepare a detailed test plan and then enter it into a tool. Standard project management tools can serve as a structured and electronic form of storing a plan.

3. SAGTPIT AS A KNOWLEDGE-BASED SYSTEM

SAGTPIT is a knowledge-based system. The problem that SAGTPIT addresses, automatic generation of test plans for integration testing, is a good candidate for a knowledge-based approach since (Durkin, 1994):

• it needs expert knowledge;
• it is well-focused;
• it uses symbolic knowledge;
• it is reasonably complex;
• it uses incomplete information (e.g. a plan should be generated in the case when only some of the starting activity's attributes are defined);
• it requires recommendation more than an exact answer (sometimes there is a need to refine the plan generated by SAGTPIT further);
• successful systems for similar problems already exist (the Watson system can automatically generate plans for documentation activities in software projects; Bimson and Boehm Burris, 1989); and
• it has some obvious value (SAGTPIT improves the productivity of generating test plans).

SAGTPIT is a frame-based system. It uses frames to represent domain-specific knowledge about planning integration testing. Planning integration testing deals with testing activities, objects referenced in activities and their relationships. Since general objects (activities and their specific objects) are at the centre of the problem, the problem is appropriate for a frame-based approach (Durkin, 1994).

SAGTPIT's structure is presented in Figure 1. It includes the three standard components of knowledge-based systems: knowledge base, inference engine and user interface (Rolston, 1988). The knowledge base component has two sections: the invariable section and the particular problem section. Knowledge about the domain of planning integration testing is coded in the invariable section. This knowledge includes the main concepts of the domain, their structural models, functions and relationships. The reasoning processes established within SAGTPIT rely on these main concepts. The particular problem section contains the current plan under consideration in SAGTPIT, e.g. a starting activity that should be decomposed and elaborated upon during automatic plan generation.

Figure 1. SAGTPIT's structure.

Two main concepts in planning software projects generally, and integration testing in particular, are: activities and specific objects of activities (Bimson and Boehm Burris, 1989; Sathi et al., 1985). Activities exist and last in time and transform input to output. They have predecessors and successors and thus form activity networks. Specific objects of activities can have different roles in activities. They can be input and output objects, or resources needed for an activity to be finished.

SAGTPIT's conceptual model is an adaptation of that in the Watson system (Bimson and Boehm Burris, 1987, 1989). Watson was selected because it is directly dedicated to software project management and it resolves all the most important problems in globally controlled management. Watson's conceptual model is further extended and elaborated upon to enable automatic test plan generation for integration testing in SAGTPIT.

New information is inferred through inheritance and demons in traditional frame-based systems (Fikes and Kehler, 1985; Rolston, 1988) (Figure 1). Many modern development tools for frame-based systems enhance the capabilities of such systems by adopting the following features of object-oriented programming: methods and message passing (Durkin, 1994). Kappa-PC, the tool in which SAGTPIT is implemented, is among them. Kappa-PC is a hybrid expert system tool that supports object-oriented and rule-based programming (Lydiard, 1990). SAGTPIT uses the Kappa objects as the enhanced frames in its implementation and thus it infers new information through inheritance, demons and methods (Figure 1).
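To make these three mechanisms concrete, the following minimal Python sketch mimics a frame with an inheritance link, a demon fired on a slot change, and methods (the get and set operations). The class, slot and demon names are illustrative assumptions; this is not Kappa-PC's actual API.

```python
# A minimal frame sketch: inheritance, demons and methods.
# Names are illustrative; this is not Kappa-PC's actual API.

class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # inheritance link to a more general frame
        self.slots = {}        # locally filled slot values
        self.demons = {}       # slot name -> callback fired when the slot changes

    def get(self, slot):
        """Method: read a slot, inheriting from the parent when unset locally."""
        if slot in self.slots:
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

    def set(self, slot, value):
        """Method: write a slot; an attached demon infers new information."""
        self.slots[slot] = value
        if slot in self.demons:
            self.demons[slot](self, value)

# A generic activity frame holds defaults that specific activities inherit.
activity = Frame("Activity")
activity.set("Time Unit", "days")

design = Frame("TestDesignActivity", parent=activity)
# Demon: for a one-person activity, effort (person-days) equals duration.
design.demons["Duration"] = lambda frame, value: frame.set("Effort", value)
design.set("Duration", 3)

print(design.get("Time Unit"))  # "days", obtained through inheritance
print(design.get("Effort"))     # 3, inferred by the demon
```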

Besides SAGTPIT's main purpose of generating test plans for integration testing automatically, SAGTPIT has the additional capability of allowing an engineer to define a complete test plan and to enter the plan into SAGTPIT. The presentation of a test plan within SAGTPIT is the same for all plans, whether generated by SAGTPIT or defined by an engineer. A test plan for integration testing, like any other software project plan, is presented as an activity hierarchy with the required number of decomposition levels (Figure 2). At the root of the hierarchy there is a pre-defined activity network Level1N that can be further decomposed into several activities. The number of levels in the hierarchy depends upon the software project being considered. Attributes like cost, duration, effort (in person-months or person-days) and required resources describe activities (Figure 3). Appropriate data are filled into activity attributes of the plan's hierarchy according to the project. The plan presented in such a way forms the particular problem section (Figure 1).

SAGTPIT allows an engineer to define activity plans, including a starting activity for automatic plan generation, with different numbers of details, in other words, with different internal features of activities. Attributes and functionalities represent different levels of abstraction of the activity features. Attributes represent the elementary level of filling data for activities. Functionalities represent the elementary level of defining the number of activity details. A functionality can encompass one or more attributes. This is shown in Figure 3. The upper part of Figure 3 presents attributes and functionalities of general software project activities, while the lower part shows attributes and functionalities specific for integration testing activities only.


Figure 2. An example of the activity hierarchy.

Figure 3. The activity attributes and functionalities in SAGTPIT.

Two main reasoning processes in SAGTPIT (Figure 1) are:

• maintaining data integrity in the plan's activity hierarchy for a given plan; and
• automatic generation of test plans for integration testing.

The name of the system, SAGTPIT, comes from the second reasoning process.

SAGTPIT's capability of maintaining data integrity in activity hierarchies is based upon the general belief that data in elementary activities of a project's plan are more precise than data of a top activity representing the plan for the whole project. Taking into account this belief, the summary data of the whole project can be aggregated from the detailed data of the project's elementary activities. For a plan defined by a software engineer or generated by SAGTPIT, the engineer calls SAGTPIT to check data integrity in the plan's hierarchy. SAGTPIT checks data by repeating two elementary steps across the hierarchy: horizontal checking across the component activities of a network and vertical checking between the network and its superordinate activity (Figure 2). For example, the cost attribute of the network is computed as the sum of the cost attributes of the component activities and then compared for equality with the cost attribute of the superordinate activity (Mihajlović and Velašević, 1997). SAGTPIT is capable of finding any inconsistency in the plan and of proposing an appropriate correction or aggregation. If the engineer accepts all the system's proposals, SAGTPIT ensures data consistency in the plan's hierarchy after it has finished its reasoning. Although SAGTPIT reasons about a plan, it tolerates incomplete and inconsistent data in the plan's hierarchy. When it comes across inconsistent data, it excludes these data from further consideration and continues to check the remainder. In this way, SAGTPIT degrades its capability of maintaining data integrity in hierarchies gracefully.
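A Python sketch of the two elementary checking steps for the cost attribute follows; the dictionary-based activity representation and the function name are assumptions made for illustration.

```python
# Horizontal and vertical integrity checking for the cost attribute.
# The activity representation (plain dictionaries) is an assumption.

def check_cost(components, superordinate):
    """Horizontal step: sum cost over a network's component activities,
    tolerating missing data; vertical step: compare the sum with the
    superordinate activity's cost."""
    known = [a["Cost"] for a in components if a.get("Cost") is not None]
    if len(known) < len(components):
        # Incomplete data is excluded and checking continues elsewhere.
        return "incomplete data: aggregation for this network skipped"
    total = sum(known)
    if superordinate.get("Cost") is None:
        return "proposal: aggregate cost %s into the superordinate" % total
    if superordinate["Cost"] != total:
        return "inconsistency: %s (superordinate) vs %s (sum)" % (
            superordinate["Cost"], total)
    return "consistent"

# Six component activities of 400 each against a 2400 superordinate.
print(check_cost([{"Cost": 400}] * 6, {"Cost": 2400}))  # consistent
```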

The second of SAGTPIT’s reasoning processes is explained in the subsequent sections.

4. THE BASIC APPROACH TO AUTOMATIC TEST PLAN GENERATION

Automatic generation of plans for software projects can be seen as a process of automatically producing a detailed activity plan for a global task of planning, defined and specified by a software engineer in the form of a starting activity. It is based upon relationships between activities and specific objects of activities. Among various relationships, the following are of special importance: objects as required inputs of, and outputs produced by, activities. Generally, activities and objects are both decomposable, and the activity decomposition of a given task can be derived from the structure of one of the task's input or output objects: the one which is relevant for the specific type of software project activity. For an automatic plan generator, it is necessary to explore the correspondence between the activity and object decomposition (object structure) for specific types of activities and code it into the generator's knowledge base. Then the generator is able to produce a detailed plan automatically, starting from an activity defined by an engineer. This definition includes all the activity's required input and produced output objects. The plan generation is based upon the decomposition of the activity's relevant object (Figure 4). The relevant object for SAGTPIT's work is a software structure brought to a given starting activity as the input object. Software structures are usually used during software development to represent software designs and programs themselves (Pressman, 1997). SAGTPIT maps the whole starting software structure into one decomposition level (Figure 4).

Once the software engineer has defined a starting integration testing activity with all its input and output objects and the other activity attributes (Figure 4), SAGTPIT does the rest since it:

• elaborates and decomposes the starting activity into its component activities that correspond to particular units of the given software;
• decomposes the starting input and output objects into their parts related to component activities;
• distributes the object parts across component activities; and
• defines the other attributes of component activities.

Figure 4. Automatic generation of plans for software project activities.

To perform the first two steps, SAGTPIT forms a decomposition list, which is an ordered list of all the units of a program's structure given in a starting activity. It depends upon the structure itself and a special parameter of testing: the selected way of ordering units (Section 5). The starting activity and its input and output objects are then decomposed according to the decomposition list.

4.1. SAGTPIT's classification of activities and specific objects

The two main concepts within SAGTPIT, activities and specific objects of activities, are implemented as enhanced frames: the KAPPA objects (Section 3). These frames are further specialized into subclasses. The frame class of activities has four subclasses, which are presented in the next paragraph. The frame class of specific objects has two subclasses: specific resource objects and specific input/output objects. The further specialization of the second subclass is shown in Figure 5.

If software testing is considered to be analogous to software development, integration testing activities can be categorized into three classes: design, implementation and execution activities (Gelperin and Hetzel, 1988). To execute tests for a particular unit, these tests must first be designed and implemented. Design and implementation activities are devoted to producing tests for software units. Execution activities expose units to the produced tests.


Figure 5. The classification of the specific input/output objects.

The fourth class of integration testing activities is also introduced: all-encompassing activities. An all-encompassing activity includes the test design, implementation and execution of the same program, gathered all together. The whole task of integration testing of a given software system can be considered as an all-encompassing activity.

Specific input/output objects of activities within SAGTPIT, shown in Figure 5, can be decomposed in a direct and indirect manner (DComposite and IComposite). The objects from IComposite have a special attribute HasStructure that points to the top of the appropriate, directly decomposed structure (DCStr). The classes of IComposite and DCStr are introduced to describe software structures that are relevant objects in integration testing. ICompositeM and ICompositeC present software designs and programs, respectively. The specification class of objects (DCSpecD) has six subclasses: software design (SD), module (unit) specification (MS), code (Co), test design (TD), test (Te) and test report (TR). Objects in the class DCSpecDCo represent programs themselves. Objects from DCSpecDTD, DCSpecDTe and DCSpecDTR are output objects of the test design, implementation and execution activities, respectively. Objects from DCSpecDSD and DCSpecDMS present results of the design phase during software development. The class DCSpecDMS is shown separately because of its importance in integration testing: test design for a unit can be performed starting from the unit's code or its module specification. Objects from all the classes in Figure 5, except those from DCStr, are directly referenced in integration testing activities. Objects from DCStr are indirectly called by an object from ICompositeC or ICompositeM.

A starting activity must meet some preconditions before SAGTPIT begins its work on generating the plan automatically. These preconditions are independent of the activity classification. The starting activity must have:

• the minimum set of defined attributes; and
• correct input and output objects.

The minimum set includes the categorization attributes and those for required input and produced output objects. Correct input and output objects mean: correct classes to which these objects belong, the correct number of objects and their correct decomposition. Correct classes to which input and output objects belong and the correct number of these objects for the four classes of integration testing activities are shown in Table I.

SAGTPIT generates the plan with two levels of decomposition starting from an all-encompassing activity. The first decomposition level is the same for all all-encompassing activities. It contains a pre-defined sequence of three activities: a test design, an implementation and an execution activity. SAGTPIT creates two new objects in this case: an object of DCSpecDTD and an object of DCSpecDTe. The first of these objects is set to be the output object of the design activity and the input object of the implementation activity. The second one is set to be the output object of the implementation activity and the input object of the execution activity. At the first level of decomposition, SAGTPIT does not use the relevant software structure of the starting activity to produce the appropriate decomposition list. It also does not decompose input and output objects. SAGTPIT only distributes input and output objects across the three activities from the pre-defined sequence. At the second decomposition level, SAGTPIT uses the relevant software structure of the starting activity to elaborate upon and decompose each activity from the first decomposition level: the design, implementation and execution one.

Table I. Input and output objects for different starting activities

Starting activity | Input objects | Output objects
Design activity | one object from DCSpecDCo and/or one object from DCSpecDMS; [one object from DCSpecDSD]*; one object from ICompositeC or ICompositeM | one object from DCSpecDTD
Implementation activity | one object from DCSpecDCo; one object [or more objects] from DCSpecDTD; one object from ICompositeC or ICompositeM | one object from DCSpecDTe
Execution activity | one object from DCSpecDCo; one object [or more objects] from DCSpecDTe; one object from ICompositeC or ICompositeM | one object from DCSpecDTR
All-encompassing activity | one object from DCSpecDCo; [one object from DCSpecDMS]; [one object from DCSpecDSD]*; one object from ICompositeC or ICompositeM | one object from DCSpecDTR

[ ] optional. * Valid if an object from DCSpecDMS is included.


5. GENERATING TEST PLANS AUTOMATICALLY

SAGTPIT supports four ways of ordering units in a decomposition list: bottom-up breadth-first, bottom-up depth-first, top-down breadth-first and top-down depth-first integration. It also supports two ways of testing according to parallelism: serial and parallel testing. For the current session with SAGTPIT, a software engineer specifies these parameters: the way of ordering units and the way of testing according to parallelism. For the same starting activity and its structure, SAGTPIT generates different detailed test plans, dependent upon the testing parameters selected.

The four ways of ordering units within SAGTPIT correspond to two main incremental integration strategies, the bottom-up and top-down strategies, and their two variants, breadth-first and depth-first testing (Myers, 1979; Pressman, 1997). The bottom-up strategy can easily be implemented as the opposite of the top-down strategy, which contributes to the early production of a working version of a program (Myers, 1979). SAGTPIT takes account of input, output and critical units of a starting structure while it forms the structure's decomposition list, in order to match the proposal made by Myers (1979). Input, output and critical units, and the paths to which they belong, are entered in the decomposition list as early as possible to enable their early testing. For the purpose of treating these units, the attribute Descriptors is added to the structure's units (Figure 5). The possible values for this attribute are: Input, Output and Critical. An engineer fills Descriptors when defining a program's structure.
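The two top-down traversal orders, with bottom-up taken as their reverse per the article, can be sketched as follows in Python. The structure representation and unit names are hypothetical, and the early promotion of units marked in Descriptors is omitted for brevity.

```python
# Forming a decomposition list for the four ways of ordering units.
# The structure (unit -> called units) and unit names are hypothetical;
# promotion of Input/Output/Critical units is omitted for brevity.

from collections import deque

def top_down_breadth_first(structure, root):
    order, queue = [], deque([root])
    while queue:
        unit = queue.popleft()
        order.append(unit)
        queue.extend(structure.get(unit, []))
    return order

def top_down_depth_first(structure, root):
    order = [root]
    for child in structure.get(root, []):
        order.extend(top_down_depth_first(structure, child))
    return order

def bottom_up(top_down_order):
    # Following the article, bottom-up ordering is the opposite of top-down.
    return list(reversed(top_down_order))

structure = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(top_down_breadth_first(structure, "A"))             # A B C D E F
print(top_down_depth_first(structure, "A"))               # A B D E C F
print(bottom_up(top_down_breadth_first(structure, "A")))  # F E D C B A
```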

SAGTPIT can parallelize the decomposition level of a detailed plan formed on the basis of a starting activity's software structure. In the parallel way of testing, SAGTPIT makes a serial sequence of stages, where each stage includes one or more component activities that may be performed in parallel. The number of activities within a stage is limited to the number N, defined by a software engineer. The engineer defines N within a range computed by SAGTPIT and based upon a proposal also made by SAGTPIT. The range computed by SAGTPIT starts at 1 and ends with the width of the software structure. The proposal made by SAGTPIT exploits the maximum parallelism allowed by the specified resources in the attribute Requires Resources, or is equal to the width of the structure if the attribute Requires Resources is not included in the starting activity.
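A simplified Python sketch of grouping a decomposition list into serial stages of at most N parallel activities follows. It ignores precedence between units, which SAGTPIT additionally respects, so the published example of Section 5.1 yields four stages rather than the three produced here.

```python
# Chunking an ordered decomposition list into serial stages of at most
# n parallel activities. Simplification: unit precedence is ignored.

def make_stages(decomposition_list, n):
    if n < 1:
        raise ValueError("a stage must contain at least one activity")
    return [decomposition_list[i:i + n]
            for i in range(0, len(decomposition_list), n)]

units = ["UA", "UD", "UB", "UE", "UC", "UF"]  # the list from Section 5.1
print(make_stages(units, 2))
# [['UA', 'UD'], ['UB', 'UE'], ['UC', 'UF']]
# The example in Section 5.1 forms four stages instead, because unit
# precedence keeps UA alone in the first stage and UF alone in the last.
```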

The objects of DCStr that correspond to the units of a starting structure have the additional attribute Importance, introduced to describe how hard it is to test each unit (Figure 5). Some units have more errors and thus require more work on testing than others. For example, very small units, very large units and units with low testability are good candidates for additional testing (Withrow, 1990; Voas et al., 1991). A software engineer specifies the attribute Importance when defining the starting structure. This attribute can be a number starting from 1. The number 1 corresponds to a unit that requires a standard level of testing. If the number is greater than 1, the unit requires more testing. The attribute Importance influences the activity attributes of a detailed test plan connected with the amount of work on testing: Duration, Effort, Cost and Error Number. An example illustration of this influence is given for the cost attribute. The cost attribute for a component activity in the detailed plan is proportional to the attribute Importance of the unit to which the activity corresponds. The component activities connected to the units that require the standard level of testing have one cost value, while the component activities connected to the units that require more testing have a greater cost value.
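One plausible reading of this proportionality is sketched below; normalizing so that component costs sum to the starting activity's cost is an assumption, though it is consistent with the example in Section 5.1.

```python
# Distributing a starting activity's cost over component activities in
# proportion to each unit's Importance; the normalization (component
# costs sum to the starting cost) is an assumption consistent with the
# example in Section 5.1.

def distribute_cost(total_cost, importance):
    weight = sum(importance.values())
    return {unit: total_cost * value / weight
            for unit, value in importance.items()}

# All six units at the standard level (Importance = 1): each component
# activity gets 2400 / 6 = 400, as in Table II.
print(distribute_cost(2400, {u: 1 for u in ["UA", "UB", "UC", "UD", "UE", "UF"]}))
```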


The activity attributes specific for integration testing are: Design Methods, Completion Criteria, Error Number and Time Unit Number (Figure 3). They all treat the problem of completion criteria for integration testing (Myers, 1979). The attribute Design Methods specifies the completion of testing in an indirect way and is relevant for designing tests. The other attributes specify the completion in a direct way and are relevant for test execution. A software engineer specifies the attribute Design Methods of a starting activity by selecting the methods from a list of possible ones proposed by SAGTPIT. Test design methods which SAGTPIT proposes are standard black-box and white-box methods (Myers, 1979; Pressman, 1997). At least one black-box test design method must be specified when defining the starting activity.

SAGTPIT generates a detailed test plan for a given global task of planning integration testing and its starting activity automatically. It defines all the attributes of the plan's component activities in such a way that it provides complete data consistency between the starting activity and the generated component activities. Although each activity attribute has a distinct algorithm for setting the attribute, all the algorithms are designed to meet the requirement of consistency.

5.1. An example of SAGTPIT's work

An example of how SAGTPIT generates a test plan automatically is shown in Figure 6 and Table II. SAGTPIT starts its work from a test design activity AA (Figures 6(a) and 6(b), and the column Starting activity in Table II). The input objects of the activity are: the code to be tested (Code from DCSpecDCo) and the pointer to the code's structure (PS from ICompositeC). The code's structure itself contains units presented as objects from DCStr: UA, UB, UC, UD, UE and UF. All these objects have the attribute Importance equal to 1, which means that they require the standard level of testing. The output object of the activity is the test design of the code (TD from DCSpecDTD). The top-down depth-first way of ordering units and the parallel way of testing are selected as parameters of integration testing in the example. All the functionalities and attributes of the starting activity are also included in the example. Design Methods, the only attribute specific for test design activities, is among them. Most of the attributes are specified except: Deliverables, Effort, Preceding Activities and Succeeding Activities. The specified test design method, Error Guessing, is one of the black-box methods that relies upon intuition and experience (Myers, 1979). The attributes of categorization, which are not shown in Table II, have the values Testing, Integration Testing and Design for Type, Subtype and Kind, respectively (Figure 3).

After an engineer defines the starting activity and its attributes, SAGTPIT can generate the detailed test plan automatically on an external request. The plan is generated for the selected parameters of testing. According to the selected way of ordering units, SAGTPIT forms the following decomposition list: UA, UD, UB, UE, UC, UF. Starting from the attribute Requires Resources and taking into account the selected parallel way of testing, SAGTPIT proposes the maximum number of activities within a stage to be 2 (the maximum number of people), and this is accepted. The objects Code (Figure 6(c)) and TD, and the starting activity, are decomposed according to the decomposition list. Their components are: CodeUA to CodeUF, TDUA to TDUF, and AAUA to AAUF, respectively. The activity network of the generated plan has four stages (Figure 6(d)).

Figure 6. An example of generating a test plan: (a) starting activity; (b) starting input and output objects; (c) decomposition of objects (TD is decomposed analogously to Code); (d) activity hierarchy of the generated plan.

Table II. The activity attributes of the test plan example

Attribute | AA (starting activity) | AAUA | AAUD | AAUB | AAUE | AAUC | AAUF
Requires Input | Code, PS | Code UA, Code UB, Code UC, Code UD, PS | Code UA, Code UD, Code UB, Code UC, PS | Code UA, Code UD, Code UB, Code UC, Code UE, PS | Code UA, Code UD, Code UB, Code UE, Code UC, PS | Code UA, Code UD, Code UB, Code UE, Code UC, Code UF, PS | Code UA, Code UD, Code UB, Code UE, Code UC, Code UF, PS
Produces Output | TD | TDUA | TDUD | TDUB | TDUE | TDUC | TDUF
Requires Resources | SS, HP1, HP2, HP3, Mary, John | SS, HP1, Mary | SS, HP2, John | SS, HP3, Mary | SS, HP1, John | SS, HP2, Mary | SS, HP3, John
Deliverables | — | — | — | — | — | — | —
Responsible Person | Mary | Mary | John | Mary | John | Mary | John
Preceding Activities | — | — | AAUA | AAUA | AAUD, AAUB | AAUD, AAUB | AAUE, AAUC
Succeeding Activities | — | AAUD, AAUB | AAUE, AAUC | AAUE, AAUC | AAUF | AAUF | —
Duration | 12 | 3 | 3 | 3 | 3 | 3 | 3
Start Date | 10/01/1997 | 10/01/1997 | 10/04/1997 | 10/04/1997 | 10/07/1997 | 10/07/1997 | 10/10/1997
Finish Date | 10/12/1997 | 10/03/1997 | 10/06/1997 | 10/06/1997 | 10/09/1997 | 10/09/1997 | 10/12/1997
Effort | — | 3 | 3 | 3 | 3 | 3 | 3
Cost | 2400 | 400 | 400 | 400 | 400 | 400 | 400
Design Methods | Error Guessing | Error Guessing | Error Guessing | Error Guessing | Error Guessing | Error Guessing | Error Guessing

SS = Software Support; HP = Hardware Platform.

All the attributes of the component activities of this network, except the attributes of categorization that are equal to those of the starting activity, are presented in Table II. The detailed plan occupies the columns for the component activities in Table II. The components of Code and TD are distributed among the component activities according to the appropriate distribution algorithms for the top-down integration strategy. The object PS is simply brought to all the component activities. The object SS (software support) is also brought to all the component activities. Since each component activity needs only one hardware platform, one of the hardware platform objects is assigned to each component activity: HP1, HP2 or HP3. The starting hardware platforms form a sequence from which the component activities take the particular object in a circular fashion. Each component activity needs only one person to perform it. Assigning the starting resources of the people type to the component activities is similar to assigning hardware platforms. The attribute Responsible Person of each component activity is set to be equal to the only people resource from the attribute Requires Resources of the same activity. The attributes Preceding Activities and Succeeding Activities are filled according to the formed stages of the component activities (Figure 6(d)). All the stages and component activities have the same duration of 3 days, computed by dividing the duration of the starting activity by the number of stages. The start and finish dates of the component activities are computed on the basis of the start date of the starting activity and the duration of the component activities. Since each component activity is performed by one person, the effort of each component activity (in person-days) is set to be equal to the activity's duration. The attribute Cost of all the component activities is set to 400, computed by dividing the cost of the starting activity by the number of the component activities. The attribute Design Methods of all the component activities is set to the same attribute of the starting activity.
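The circular assignment of hardware platforms and people can be sketched in Python with itertools.cycle; the helper name and data layout are illustrative.

```python
# Assigning starting resources to component activities in a circular
# fashion; the helper name and data layout are illustrative.

from itertools import cycle

def assign_resources(activities, platforms, people):
    hp, person = cycle(platforms), cycle(people)
    return {activity: {"Hardware Platform": next(hp),
                       "Responsible Person": next(person)}
            for activity in activities}

order = ["AAUA", "AAUD", "AAUB", "AAUE", "AAUC", "AAUF"]
for activity, res in assign_resources(order, ["HP1", "HP2", "HP3"],
                                      ["Mary", "John"]).items():
    print(activity, res)
# AAUA gets HP1/Mary, AAUD gets HP2/John, AAUB gets HP3/Mary, and so on,
# reproducing the Requires Resources row of Table II.
```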

5.2. SAGTPIT and recursion

SAGTPIT solves direct and indirect recursion automatically while it generates a detailed test plan for integration testing. If a starting structure has one unit that calls itself, or several such units, i.e. direct recursion, SAGTPIT sets the attribute Descriptors of such units to Critical and the attribute Importance of such units to 2. This is performed in the case when these attributes have not been previously set by an engineer. Then SAGTPIT generates a detailed test plan as if the starting structure has no direct recursion. As a consequence, the detailed plan includes the testing activities for self-calling units as early as possible and treats these activities as those that require more testing than the standard units. Such a detailed plan is consistent with the general expectation that the self-calling units are harder to test than the ordinary ones.

Units in structures with indirect recursion form loops. To handle loops, SAGTPIT divides a starting structure into two parts: the part without the structure's loops and the part containing the loops. To illustrate the way in which SAGTPIT solves indirect recursion, an example with a simple loop is explained. Handling structures with other loops is performed in a similar way.

Simple loops have the following form: A1 calls A2, A2 calls A3, . . ., Ai calls A1, where i = 2, 3, . . . They meet the additional condition that the structure's part without the loop calls only one unit in the loop, say A1. The unit A1 is the entry point of the loop. If a starting structure has one such loop, SAGTPIT generates a detailed plan with two decomposition levels. The first level covers two elements: a substitute for the loop represented as a single unit and the structure's part without the loop. Both elements are treated in the standard way for ordinary structures, including the selected integration strategy. SAGTPIT declares the substitute as a critical unit. As a consequence, the activity that corresponds to the whole loop is scheduled as early as possible within the level. The second level covers the decomposition and elaboration of the loop. The simple loop is broken at the entry point and is further considered as an ordinary structure with the entry point at the top. The bottom unit of this structure is not a real elementary unit since, to be tested, it requires a stub instead of the broken link. The activities at the second level of the detailed plan correspond to the units of the loop. SAGTPIT specifies these activities as ones that require more testing than others, as it does for direct recursion. It also specifies the superordinate activity of these activities in a consistent way.
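A Python sketch of this treatment for one simple loop: the loop is collapsed into a single substitute unit (to be marked Critical) for the first level, and broken at its entry point for the second level. The function and unit names are illustrative assumptions.

```python
# Handling indirect recursion (a simple loop A1 -> A2 -> ... -> Ai -> A1).
# The structure representation and names are illustrative assumptions.

def substitute_simple_loop(structure, loop_units, entry):
    """First level: replace the loop by one substitute unit (to be marked
    Critical); second level: break the loop at its entry point."""
    substitute = "LOOP_" + entry
    first_level = {}
    for unit, callees in structure.items():
        if unit in loop_units:
            continue                       # loop internals are hidden
        first_level[unit] = [substitute if c in loop_units else c
                             for c in callees]
    first_level[substitute] = []           # the loop as a single unit
    # Second level: an ordinary structure with the entry point at the top;
    # the bottom unit keeps a stub in place of the broken link back to A1.
    second_level = {u: [c for c in structure[u] if c != entry]
                    for u in loop_units}
    return first_level, second_level

structure = {"M": ["A1"], "A1": ["A2"], "A2": ["A3"], "A3": ["A1"]}
print(substitute_simple_loop(structure, ["A1", "A2", "A3"], "A1"))
# ({'M': ['LOOP_A1'], 'LOOP_A1': []}, {'A1': ['A2'], 'A2': ['A3'], 'A3': []})
```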

6. EVALUATING SAGTPIT

The main aim of developing SAGTPIT was to increase the productivity of planning integration testing. The complete set of evaluation criteria for SAGTPIT, adapted from the work of Poston and Sexton (1992), includes:

• functionality;
• productivity increase (Section 6.1);
• performance (Section 6.1);
• supported methods;
• supported testing process models; and
• hardware platform and operating system.

SAGTPIT functions well if it produces a good test plan for integration testing starting from a given task. It increases the productivity of integration testing enough if it causes the productivity increase to exceed 50%, the value proposed by Poston and Sexton (1992). SAGTPIT has good performance if its response time in automatic generation of a test plan for a benchmark structure is acceptable (Poston and Sexton, 1992). For this evaluation, the acceptable response time for the selected benchmark structure with 10 units is set at less than five minutes. SAGTPIT is acceptable if it implements standard methods in planning integration testing. Standard methods are desirable because they do not require the additional cost of organizational changes needed for SAGTPIT's introduction. SAGTPIT is acceptable if it supports all the testing process models currently used in practice. It should use a widespread hardware platform and corresponding operating system.

SAGTPIT fulfils all the criteria just introduced. It produces a good detailed plan for integration testing that:

• contains all the components of a plan: the objective of testing, the scope, completion criteria, the activity architecture, the order of unit integration, resources, responsibilities and schedule (Myers, 1979; Pressman, 1997; Gelperin and Hetzel, 1988);
• covers all the units of the structure under consideration; and
• preserves the duration, cost and resource constraints specified in a given planning task.

A test plan automatically generated by SAGTPIT contains all the components of a plan. The example shown in Section 5.1 demonstrates this statement. A plan contains the following components directly: the scope of testing related to effort, cost and time constraints, completion criteria, the activity architecture, the order of unit integration, resources, responsibilities and schedule. The objective of integration testing is implicitly incorporated in the plan through the category attributes of the starting activity and its component activities.


A test plan generated by SAGTPIT includes all the testing activities for the units of a structure, since the decomposition list on the basis of which a starting activity is decomposed encompasses all the units (Section 4).

When a global task of planning integration testing is given in the form of a starting activity, this activity includes the task's constraints considering its cost, duration and resources. SAGTPIT respects all these constraints in generating a detailed plan. The detailed plan's duration and cost are equal to those of the starting activity. In parallel testing, SAGTPIT exploits the maximum parallelism allowed by resources.

SAGTPIT implements standard incremental integration testing strategies in generating test plans: bottom-up and top-down testing. These incremental integration strategies ensure that units are assembled into the whole program in an organized way and require less effort in producing scaffolding than the non-incremental ('big-bang') approach (Myers, 1979). A detailed test plan generated by SAGTPIT is not worse than a plan created by an expert software engineer for the same adopted integration strategy.

SAGTPIT can be used as a generator of test plans in three testing process models currently present in practice: the destruction, evaluation and prevention models (Gelperin and Hetzel, 1988). This flexibility is a consequence of the system's ability to treat test design, implementation and execution activities separately or together in a single starting all-encompassing activity. The simplest way to use SAGTPIT in the destruction and evaluation models is to start planning from a given, all-encompassing activity that includes test design, implementation and execution. This activity should be scheduled after the coding phase. When an engineer calls SAGTPIT, the system will automatically generate the plan for the starting activity. In the destruction model, planning integration testing can be performed immediately before or after the coding phase. In the evaluation model, planning testing is always performed before the coding phase.

The simplest way to apply SAGTPIT to the prevention model includes three steps. In the first step, it is necessary to define a test design activity. This activity should be scheduled after the software design phase. In the second step, a test implementation activity is defined. This activity should be scheduled after the coding phase. In the third step, it is necessary to define a test execution activity and schedule it after the activity from the second step. All three activities have the same input object that is relevant for automatic generation of test plans: the appropriate software structure. In the prevention model, planning integration testing is performed immediately after the software design phase.

SAGTPIT requires a PC under Windows as hardware and software platforms, like Kappa-PC, the development tool in which it was implemented. The use of such widespread platforms does not introduce any additional cost for SAGTPIT's adoption.

6.1. The productivity increase and performance

SAGTPIT's criteria for productivity increase and performance are analysed by using an analogy with the work presented by Bruckhaus et al. (1996). To compare the productivity of planning integration testing with and without SAGTPIT, the effort of performing this task in person-minutes is measured when SAGTPIT is used and when a test plan is produced manually. The productivity of planning integration testing is computed as the number of units in a global task's structure, divided by the effort (measured in person-minutes) needed for generating the task's detailed test plan.
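As a Python sketch of this measure, using the Example row of Table IV (6 units; 21 person-minutes manually, 0.37 person-minutes with SAGTPIT):

```python
# Productivity of planning: units in the structure divided by the effort
# (in person-minutes) needed to generate the detailed test plan.

def productivity(number_of_units, effort_person_minutes):
    return number_of_units / effort_person_minutes

manual = productivity(6, 21)       # ~0.29, as in Table IV
automatic = productivity(6, 0.37)  # ~16, as in Table IV
print(automatic / manual)          # ~57; Table IV reports 55, which follows
                                   # from the rounded entries 16 / 0.29
```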


Planning integration testing for structures with different numbers of units is considered. The programs analysed form three groups: real programs, the example presented in Section 5.1, and artificial programs. These programs and their structure characteristics are shown in Table III. Among others, the structure characteristics include the number of nodes (units) and the number of arcs (lines of control) of a structure (Pressman, 1997). The two real programs shown in Table III are Assistant and Expert system. Assistant is part of the first SAGTPIT prototype, implemented using a procedural programming paradigm. Expert system is a system that monitors fouling in a power boiler, implemented in CLIPS rules and C (Afgan et al., 1995; Mihajlović, 1997). They are both small programs.

The third group of programs considered comprises the artificial programs P1, P2, P3, P4, P5, P6, P7, P8 and P9, which are produced in a similar way to that presented by Solheim and Rowland (1993). A program's structure has the form of a full, ordered tree, with each node of the tree corresponding to a unit. All the non-terminal units of the structure have the same out-degree (fan-out), which determines the program's degree. For the purpose of evaluating SAGTPIT, artificial programs with the following numbers of units are considered: 10, 21, 43, 91, 156, 511, 1023, 2047 and 4681. These numbers deviate little from the following numbers: 10, 20, 50, 100, 200, 500, 1000, 2000 and 5000. They cover a wide range of unit numbers that correspond to real, complex programs. The artificial programs in Table III have their degree within the range 2 to 10. The upper limit of 10 is established taking into account a heuristic design rule for programs that attempts to minimize structures with high fan-out (Pressman, 1997).
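A possible Python sketch of producing such an artificial structure as a full, ordered tree follows; the unit-naming scheme is an assumption.

```python
# Generating an artificial program structure as a full, ordered tree in
# which every non-terminal unit has the same out-degree. Unit names are
# an assumption; a tree of degree d and depth h has (d**h - 1) / (d - 1)
# units.

def full_tree(degree, depth):
    tree, frontier, counter = {}, ["U1"], 1
    for _ in range(depth - 1):
        next_frontier = []
        for unit in frontier:
            children = ["U%d" % (counter + k + 1) for k in range(degree)]
            counter += degree
            tree[unit] = children
            next_frontier.extend(children)
        frontier = next_frontier
    for leaf in frontier:
        tree[leaf] = []           # terminal units call nothing
    return tree

# Degree 4 and depth 3 give 1 + 4 + 16 = 21 units, matching P2 in Table III.
print(len(full_tree(4, 3)))      # 21
```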

Table III. Structure characteristics of the analysed programs

Program | Group* | Number of nodes (units) | Number of arcs | Depth | Width | Min; max fan-out | Min; max fan-in
Assistant | R | 56 | 84 | 6 | 31 | 1; 10 | 1; 8
Expert system | R | 70 | 87 | 6 | 26 | 1; 22 | 1; 9
Example | E | 6 | 6 | 3 | 3 | 1; 3 | 1; 2
P1 | A | 10 | 9 | 2 | 9 | 1; 9 | 1; 1
P2 | A | 21 | 20 | 3 | 16 | 1; 4 | 1; 1
P3 | A | 43 | 42 | 3 | 36 | 1; 6 | 1; 1
P4 | A | 91 | 90 | 3 | 81 | 1; 9 | 1; 1
P5 | A | 156 | 155 | 4 | 125 | 1; 5 | 1; 1
P6 | A | 511 | 510 | 9 | 256 | 1; 2 | 1; 1
P7 | A | 1023 | 1022 | 10 | 512 | 1; 2 | 1; 1
P8 | A | 2047 | 2046 | 11 | 1024 | 1; 2 | 1; 1
P9 | A | 4681 | 4680 | 5 | 4096 | 1; 8 | 1; 1

*Group: R = real; E = example; A = artificial.

The best current practice of producing test plans manually includes two steps: the elaboration of a detailed test plan and the creation of the plan's report. The most efficient way to perform the second step of creating the plan's report is to fill previously prepared activity forms with all the activity attributes for each activity in the detailed plan. Both of the steps are performed by a software engineer and depend upon the number of units in a structure. To measure the effort needed to perform these two steps for different structures is rather impractical, but it can be done for a number of structures of small size (2 to 6 units). Based upon these selective results, it can be seen that the second step takes the most effort of the two, and that the average effort to fill in one activity form is a constant value equal to 1 person-minute. Then the minimum effort in producing a test plan manually can be estimated taking into account the number of units in a structure, N. This effort does not include the elaboration of a detailed plan. The real effort is always greater than the minimum estimated one. For a starting all-encompassing activity, the estimated effort is 3(N+1) person-minutes: one form for each of the three first-level activities (test design, implementation and execution) and one for each of their 3N component activities, so, for example, the 10-unit structure P1 requires 3 × 11 = 33 person-minutes (Table IV). This effort cannot be reduced by using a standard project management tool in creating the detailed plan's report. As a consequence, the estimated effort is used in computing the productivity increase contributed by SAGTPIT.

The effort of planning integration testing when SAGTPIT is used, measured in minutes, includes three steps: entering a global task of planning into SAGTPIT (including a starting activity and a structure), activating the appropriate command for automatic generation of a test plan, and SAGTPIT's response time in generating the plan. The third step encompasses creating the plan's report, too. This report corresponds to the filled activity forms of the detailed plan produced manually.

Table IV presents the effort and productivity results for the selected programs, with and without SAGTPIT. The following hardware platform was used: a 486DX-100 PC with 32 MB RAM. The productivity of planning integration testing with SAGTPIT for the programs considered ranges from 55 to 321 times greater than the productivity without SAGTPIT. It is obvious that SAGTPIT contributes to the productivity increase much more than the specified value of the productivity increase criterion of 50%. Table IV also shows that the least productivity increase is gained for the program with the least number of units. SAGTPIT's response times are also presented in Table IV. For the program with 10 units, the response time is 7 seconds. This response time greatly outperforms the specified value of the performance criterion of 5 minutes.

Table IV. Effort and productivity results without and with SAGTPIT

Program | Effort* M† | Productivity M† | Response time (in seconds) | Effort* A‡ | Productivity A‡ | Productivity increase§
Assistant | 171 | 0.33 | 26 | 0.67 | 84 | 255
Expert system | 213 | 0.33 | 32 | 0.80 | 87 | 264
Example | 21 | 0.29 | 9 | 0.37 | 16 | 55
P1 | 33 | 0.33 | 7 | 0.33 | 30 | 91
P2 | 66 | 0.32 | 12 | 0.43 | 49 | 153
P3 | 132 | 0.33 | 21 | 0.62 | 69 | 209
P4 | 276 | 0.33 | 42 | 0.98 | 93 | 282
P5 | 471 | 0.33 | 67 | 1.47 | 106 | 321
P6 | 1536 | 0.33 | 260 | 4.92 | 104 | 315
P7 | 3072 | 0.33 | 563 | 10.30 | 99 | 300
P8 | 6144 | 0.33 | 1116 | 20.22 | 101 | 306
P9 | 14046 | 0.33 | 2560 | 46.10 | 102 | 309

*Effort in person-minutes. †M = manual planning without SAGTPIT. ‡A = automatic planning with SAGTPIT. §Productivity increase = Productivity A / Productivity M. Starting activity: all-encompassing activity. Parameters of testing: bottom-up breadth-first and serial. The attribute Importance for all the units: 1.


7. CONCLUSIONS

SAGTPIT, the knowledge-based system for the automatic generation of test plans for incremental unit and integration testing described in this article, is a new approach to the automation of testing. It automates the planning of testing and thus differs from other tools developed to automate testing, which focus their attention on acquiring and executing tests. By automation, SAGTPIT significantly improves the planning of testing. The productivity of planning integration testing with SAGTPIT has been observed, for the programs considered, to be at least 55 times greater than the productivity without SAGTPIT. SAGTPIT saves effort, time and cost in generating a plan. The effort saved by using SAGTPIT can be directed to the other steps of integration testing: test design, implementation and execution. The improvement of planning gained with SAGTPIT also improves the whole management of software testing.

SAGTPIT is applicable to real planning of integration testing. A plan for integration testing generated by SAGTPIT is correct and free of errors. This plan can be used directly in the further steps of testing: test design, implementation and execution. It can also be refined to suit any particular problem better. For example, a software engineer can prolong or shorten certain component activities of the plan. Then the engineer can use SAGTPIT to check the data consistency of the refined plan. Under the assumption that the engineer accepts all the proposals made by SAGTPIT, SAGTPIT ensures the data consistency of the plan by aggregating data from elementary to superordinate activities of the plan's hierarchy. The two main reasoning processes of SAGTPIT, maintaining data integrity and automatic generation of test plans, enable the system's applicability to real practice.

SAGTPIT can assist a software engineer in planning software project activities generally. The system's capability of maintaining data integrity in activity hierarchies ensures this kind of application. Future work on SAGTPIT should concentrate on tracking and control of the plan as it is implemented, and on integrating the system into a software testing environment that should try to cover other aspects of software integration testing.

Acknowledgement

The authors would like to thank the anonymous referees for their helpful comments and valuable suggestions.

References

Afgan, N., Radovanović, P. M. and Blokh, A. G. (1995) 'An expert system for boiler surface fouling assessment', in K. Hanjalić and J. H. Kim (eds), Expert Systems and Simulations in Energy Engineering, Begel House, New York, U.S.A., pp. 219–224.

Anderson, C., von Mayrhauser, A. and Mraz, R. (1995) 'On the use of neural networks to guide software testing activities', Proceedings of the International Test Conference, Washington DC, U.S.A., October 1995.

Bimson, K. D. and Boehm Burris, L. (1987) 'The craft of engineering knowledge: some practical insights', Proceedings of the 20th Hawaii International Conference on Systems Sciences, Vol. 1, pp. 460–469.

Bimson, K. D. and Boehm Burris, L. (1989) 'Assisting managers in project definition: foundation for intelligent decision support', IEEE Expert, 4(2), 66–76.

Bruckhaus, T., Madhavji, N. H., Janssen, I. and Henshaw, J. (1996) 'The impact of tools on software productivity', IEEE Software, 13(5), 29–38.

Chapman, D. (1982) 'A program testing assistant', Communications of the ACM, 25(9), 625–634.

Deason, W. H., Brown, D. B., Chang, K.-H. and Cross, J. H. (1991) 'A rule-based software test data generator', IEEE Transactions on Knowledge and Data Engineering, 3(1), 108–117.

Durkin, J. (1994) Expert Systems: Design and Development, Macmillan, New York, U.S.A.

Fikes, R. and Kehler, T. (1985) 'The role of frame-based representation in reasoning', Communications of the ACM, 28(9), 904–920.

Fuggetta, A. (1993) 'A classification of CASE technology', Computer, 26(12), 25–38.

Gelperin, D. and Hetzel, B. (1988) 'The growth of software testing', Communications of the ACM, 31(6), 687–695.

Harrold, M. J. and Soffa, M. L. (1990) 'Test Inc extended dataflow testing', IEEE Software, 7(3), 57.

Hoffman, D. M. and Strooper, P. (1991) 'Automated module testing in Prolog', IEEE Transactions on Software Engineering, 17(9), 934–943.

Howe, A. E., von Mayrhauser, A. and Mraz, R. T. (1997) 'Test case generation as an AI planning problem', Automated Software Engineering, 4(1), 77–106.

IEEE (1983) ANSI/IEEE STD 829–1983 Standard for Software Test Documentation, Institute of Electrical and Electronics Engineers, New York, U.S.A.

IEEE (1986a) ANSI/IEEE STD 1008–1987 Standard for Software Unit Testing, Institute of Electrical and Electronics Engineers, New York, U.S.A.

IEEE (1986b) ANSI/IEEE STD 1012–1986 Standard for Software Verification and Validation Plans, Institute of Electrical and Electronics Engineers, New York, U.S.A.

Lydiard, T. J. (1990) 'Kappa-PC', IEEE Expert, 5(5), 71–77.

Mi, P. and Scacchi, W. (1990) 'A knowledge-based environment for modeling and simulating software engineering processes', IEEE Transactions on Knowledge and Data Engineering, 2(3), 283–294.

Mihajlović, Z. and Velašević, D. (1997) 'An assistant for software project plans', Info Science (Yugoslav journal), 5(5), 14–26.

Mihajlović, Z. (1997) 'An expert system for monitoring of fouling in a power boiler', Proceedings of the 41st Yugoslav Conference of ETRAN, Vol. 3, pp. 239–242 (in Serbian).

Miller, E. (1990) 'ITTS integrated system for test coverage, regression', IEEE Software, 7(3), 75.

Myers, G. J. (1979) The Art of Software Testing, Wiley, New York, U.S.A.

Poston, R. (1990) 'T test-case generator', IEEE Software, 7(3), 56.

Poston, R. M. and Sexton, M. P. (1992) 'Evaluating and selecting testing tools', IEEE Software, 9(3), 33–42.

Pressman, R. S. (1997) Software Engineering: A Practitioner's Approach, 4th edn, McGraw-Hill, New York, U.S.A.

Rolston, D. (1988) Principles of Artificial Intelligence and Expert Systems Development, McGraw-Hill, New York, U.S.A.

Sathi, A., Fox, M. S. and Greenberg, M. (1985) 'Representation of activity knowledge for project management', IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(5), 531–552.

Sathi, A., Morton, T. E. and Roth, S. F. (1986) 'Callisto: an intelligent project management system', AI Magazine, 7(4), 34–52.

Solheim, J. A. and Rowland, J. H. (1993) 'An empirical study of testing and integration strategies using artificial software systems', IEEE Transactions on Software Engineering, 19(10), 941–949.

Velašević, D., Bojić, D. and Novičić, D. (1997) 'TESTGRAPH: software tool for automatic generation of test case patterns', Info Science (Yugoslav journal), 5(2), 4–20.

Voas, J., Morell, L. and Miller, K. (1991) 'Predicting where faults can hide from testing', IEEE Software, 8(2), 41–48.

Wild, C., Zeil, S., Feng, G. and Chen, J. (1992) 'Employing accumulated knowledge to refine test descriptions', Software Testing, Verification and Reliability, 2(2), 53–68.

Withrow, C. (1990) 'Error density and size in ADA software', IEEE Software, 7(1), 26–30.