
LITTLE BOOK OF TESTING

VOLUME II

IMPLEMENTATION TECHNIQUES

JUNE 1998

    For additional information please contact the

    Software Program Managers Network

    (703) 521-5231 Fax (703) 521-2603

    E-Mail: [email protected]

    http://www.spmn.com

AIRLIE SOFTWARE COUNCIL


    THE AIRLIE SOFTWARE COUNCIL

This guidebook is one of a series of guidebooks published by the Software Program Managers Network (SPMN). Our purpose is to identify best management and technical practices for software development and maintenance from the commercial software sector, and to convey these practices to busy program managers and practitioners. Our goal is to improve the bottom-line drivers of software development and maintenance: cost, productivity, schedule, quality, predictability, and user satisfaction.

The Airlie Software Council was convened by a Department of the Navy contractor in 1994 as a focus group of software industry gurus supporting the SPMN and its challenge of improving software across the many large-scale, software-intensive systems within the Army, Navy, Marine Corps, and Air Force. Council members have identified principal best practices that are essential to managing large-scale software development and maintenance projects. The Council, which meets quarterly in Airlie, Virginia, is comprised of some 20 of the nation's leading software experts.

These little guidebooks are written, reviewed, generally approved, and, if needed, updated by Council members.

Your suggestions regarding this guidebook, or others that you think should exist, would be much appreciated.

This publication was prepared for the
Software Program Managers Network
4600 North Fairfax Drive, Suite 302
Arlington, VA 22203

The ideas and findings in this publication should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange.

Norm Brown
Director, Software Program Managers Network

Copyright 1998 by Computers & Concepts Associates

This work was created by Computers & Concepts Associates in the performance of Space and Naval Warfare Systems Command (SPAWAR) Contract Number N00039-94-C-0153 for the operation of the Software Program Managers Network (SPMN).


THE PURPOSE OF THIS LITTLE BOOK

The Software Program Managers Network (SPMN) provides resources and best practices for software managers. The SPMN offers practical, cost-effective solutions that have proved successful on projects in government and industry around the world.

This second volume in the testing series answers the question: How can I test software so that the end product performs better, is reliable, and allows me, as a manager, to control cost and schedule? Implementing the basic testing concepts and ten testing rules contained in this booklet will significantly improve your testing process and result in better products.

It is not easy to change the way systems and software are tested. Change requires discipline, patience, and a firm commitment to follow a predetermined plan. Management must understand that so-called silver bullets during testing often increase risk, take longer, and result in a higher-cost and lower-quality product. Often what appears to be the longest testing process is actually the shortest, since products reach predetermined levels of design stability before being demonstrated.

Managers can implement the concepts and rules in this guidebook to help them follow a low-risk, effective test process. An evaluation of how you test software against these guidelines will help you identify project risks and areas needing improvement. By using the principles described in this guide, you can structure a test program that is consistent with basic cost, schedule, quality, and user satisfaction requirements.

Norm Brown
Executive Director


    INTRODUCTION


Testing by software execution is the end of the software development process. When projects are discovered to be behind schedule, a frequent worst-practice way out is to take shortcuts with test. This is a worst practice because effective test by software execution is essential to delivering quality software products that satisfy users' requirements, needs, and expectations. When testing is done poorly, defects that should have been found in test are found during operation, with the result that maintenance costs are excessive and the user-customer is dissatisfied. When a security or safety defect is first found in operation, the consequences can be much more severe than excessive cost and user dissatisfaction.

Although metrics show that the defect removal efficiency of formal, structured peer reviews is typically much higher than that of execution test, there are certain types of errors that software-execution test is most effective in finding. Full-path-coverage unit test with a range of path-initiating variable values will find errors that slip through code inspections. Tests of the integrated system against operational scenarios and with high stress loads find errors that are missed with formal, structured peer reviews that focus on small components of the system. This book describes testing steps, activities, and controls that we have observed on projects.

Michael W. Evans
President
Integrated Computer Engineering, Inc.

Frank J. McGrath, Ph.D.
Technical Director
Software Program Managers Network


BASIC TESTING CONCEPTS

This section presents concepts that are basic to understanding the testing process. From the start it is critical that managers, developers, and users understand the difference between debugging and testing.

Debugging is the process of isolating and identifying the cause of a software problem, and then modifying the software to correct the problem. This guidebook does not address debugging principles.

Testing is the process of finding defects in relation to a set of predetermined criteria or specifications. The purpose of testing is to prove a system, software, or software configuration doesn't work, not that it does.

There are two forms of testing: white box testing and black box testing. An ideal test environment alternates white box and black box test activities, first stabilizing the design, then demonstrating that it performs the required functionality in a reliable manner consistent with performance, user, and operational constraints.

White box testing is conducted on code components, which may be software units, computer software components (CSCs), or computer software configuration items (CSCIs). These tests exercise the internal structure of a code component, and include:

Execution of each statement in a code component at least once

Execution of each conditional branch in the code component

Execution of paths with boundary and out-of-bounds input values

Verification of the integrity of internal interfaces

Verification of architecture integrity across a range of conditions

Verification of database design and structure

White box tests verify that the software design is valid and that it was built according to the specified design. White box testing traces to configuration management (CM)-controlled design and internal interface specifications. These specifications have been identified as an integral part of the configuration control process.
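As an illustration (not from the original booklet), the white box checks above can be seen in a small unit test. A hypothetical clamp routine is exercised so that every statement and conditional branch runs at least once, including boundary and out-of-bounds input values.

```python
# Hypothetical unit under test: clamp a reading into a valid range.
def clamp(value, low, high):
    if low > high:                # defensive branch
        raise ValueError("low must not exceed high")
    if value < low:               # out-of-bounds (below)
        return low
    if value > high:              # out-of-bounds (above)
        return high
    return value                  # nominal path

def test_clamp_white_box():
    # Nominal path.
    assert clamp(5, 0, 10) == 5
    # Boundary values: exercise each comparison at its edge.
    assert clamp(0, 0, 10) == 0
    assert clamp(10, 0, 10) == 10
    # Out-of-bounds inputs: force each conditional branch.
    assert clamp(-1, 0, 10) == 0
    assert clamp(11, 0, 10) == 10
    # Defensive branch.
    try:
        clamp(5, 10, 0)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass

test_clamp_white_box()
print("all statements and branches exercised")
```

Note that the test knows the internal structure of the unit: each case exists to drive a specific statement or branch, which is exactly what distinguishes white box from black box testing.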

Black box testing is conducted on integrated, functional components whose design integrity has been verified through completion of traceable white box tests. As with white box testing, these components include software units, CSCs, or CSCIs.

Black box testing traces to requirements, focusing on system externals. It validates that the software meets requirements without regard to the paths of execution taken to meet each requirement. It is the type of test conducted on software that is an integration of code units.

The black box testing process includes:

Validation of functional integrity in relation to external stimuli

Validation of all external interfaces (including human) across a range of nominal and anomalous conditions

Validation of the ability of the system, software, or hardware to recover from or minimize the effect of unexpected or anomalous external or environmental conditions

Validation of the system's ability to address out-of-bound input, error recovery, communication, and stress conditions

Black box tests validate that an integrated software configuration satisfies the requirements contained in a CM-controlled requirement or external interface specification.

Ideally, each black box test should be preceded by a white box test that stabilizes the design. You never want to try to isolate a software or system problem by executing a test case designed to demonstrate system or software externals.
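A hypothetical sketch of the contrast: the black box test below checks only an external requirement ("return the median of a non-empty sample list; reject an empty list") across nominal, boundary, and anomalous inputs, and never looks at the implementation's internal paths.

```python
# Hypothetical requirement: report_median returns the median of a non-empty
# list of numeric samples, and rejects an empty list.
def report_median(samples):
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Black box test: requirement in, observed external behavior out.
assert report_median([3, 1, 2]) == 2            # nominal
assert report_median([4, 1, 2, 3]) == 2.5       # nominal, even count
assert report_median([7]) == 7                  # boundary: single sample
try:                                            # anomalous: empty input
    report_median([])
    raise AssertionError("empty input must be rejected")
except ValueError:
    pass
print("requirement validated")
```

If any of these cases failed, the booklet's point applies: you would fall back to the white box tests of the unit to isolate the cause, rather than debugging through the externals-oriented test case.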


Key to Terms:

DT&E - Development Test and Evaluation
HWCI - Hardware Configuration Item
ORD - Operational Requirements Document
OT&E - Operational Test and Evaluation
SDF - Software Design File
SRS - Software Requirements Specification
SSDD - System Subsystem Design Document
SSS - System Segment Specification
SWDD - Software Design Document

Figure 1 illustrates the testing process. Testing is a continuum of verification and validation activities conducted continuously from the point where a product is defined until it is finally validated in an installed system configuration.

Testing need not be performed on a computer. Projects that regularly apply structured inspections not only test products against criteria at the point where they are produced, but also reduce the rework rate from 44 percent to 8 percent. Use of inspections as a form of testing can reduce the cost of correction up to 100 times. A complete test program consists of the following nine discrete testing levels, each feeding the next:

Figure 1. Test Levels (test activity, test type, documentation basis for testing, test responsibility, test focus)

Level 0: Structured Inspections (non-computer-based test). Basis: Various. Responsibility: Inspection Team. Focus: Various.
Level 1: Computer Software Unit Testing (white box). Basis: SDF. Responsibility: Developer. Focus: Software Unit Design.
Level 2: CSCI Integration Testing (white box). Basis: SWDD. Responsibility: Independent Test. Focus: CSCI Design/Architecture.
Level 3: CSCI Qualification Testing (black box). Basis: SRS. Responsibility: Independent Test. Focus: CSCI Requirements.
Level 4: CSCI/HWCI Integration Testing (white box). Basis: SSDD. Responsibility: Independent Test. Focus: System Design/Architecture.
Level 5: System Testing (black box). Basis: SSS. Responsibility: Independent Test. Focus: System Requirements.
Level 6: DT&E Testing (black box). Basis: User Manuals. Responsibility: Acquirer Test Group. Focus: User Manual Compliance.
Level 7: OT&E Testing (black box). Basis: ORD or User Requirements Document. Responsibility: Operational Test. Focus: Operational Requirements.
Level 8: Site Testing (black box). Basis: Transition Plan (Site Configuration). Responsibility: Site Installation Team. Focus: Site Requirements.


Level 0: These tests consist of a set of structured inspections tied to each product placed under configuration management. The purpose of Level 0 tests is to remove defects at the point where they occur, and before they affect any other product.

Level 1: These white box tests qualify the code against standards and the unit design specification. Level 1 tests trace to the Software Design File (SDF) and are usually executed using test harnesses or drivers. This is the only test level that focuses on code.

Level 2: These white box tests integrate qualified CSCs into an executable CSCI configuration. Level 2 tests trace to the Software Design Document (SWDD). The focus of these tests is the inter-CSC interfaces.

Level 3: These black box tests execute integrated CSCIs to assure that requirements of the Software Requirements Specification (SRS) have been implemented and that the CSCI executes in an acceptable manner. The results of Level 3 tests are reviewed and approved by the acquirer of the product.

Level 4: These white box tests trace to the System Subsystem Design Document (SSDD). Level 4 tests integrate qualified CSCIs into an executable system configuration by interfacing independent CSCIs and then integrating the executable software configuration with the target hardware.

Level 5: These black box tests qualify an executable system configuration to assure that the requirements of the system have been met and that the basic concept of the system has been satisfied. Level 5 tests trace to the System Segment Specification (SSS). This test level usually results in acceptance, or at least approval, of the system for customer-based testing.

Level 6: These black box tests, planned and conducted by the acquirer, trace to the user and operator manuals and external interface specifications. Level 6 tests integrate the qualified system into the operational environment.

Level 7: These independent black box tests trace to operational requirements and specifications. Level 7 tests are conducted by an agent of the user to assure that critical operational, safety, security, and other environmental requirements have been satisfied and the system is ready for deployment.

Level 8: These black box tests are conducted by the installation team to assure the system works correctly when installed and performs correctly when connected to live site interfaces. Level 8 tests trace to installation manuals and use diagnostic hardware and software.
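The "test harnesses or drivers" of Level 1 are simply small programs that call a unit directly and compare results against its design specification. A minimal hypothetical driver (the unit, its cases, and the SDF reference are illustrative, not from the booklet):

```python
# Minimal Level 1 test driver (hypothetical): runs a table of cases against
# a unit and reports failures per the unit design specification in the SDF.
def checksum(data):
    # Unit under test: 8-bit additive checksum.
    return sum(data) % 256

CASES = [
    (b"", 0),            # boundary: empty input
    (b"\x01\x02", 3),    # nominal
    (b"\xff\x01", 0),    # wraparound at the 8-bit boundary
]

def run_driver():
    failures = []
    for data, expected in CASES:
        actual = checksum(data)
        if actual != expected:
            failures.append((data, expected, actual))
    return failures

assert run_driver() == []
print("unit qualified against its specification")
```

In practice the case table would itself be a CM-controlled artifact traced to the SDF, so that rerunning the driver after a change reproduces exactly the qualification evidence recorded for the unit.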



TESTING QUESTIONS THAT MANAGERS SHOULD BE ABLE TO ANSWER:

1. Have I defined, and do I understand, my overall strategy for software testing?

2. Are the test planning requirements clearly defined, consistent, assigned for development, and supportable by the staff responsible for their implementation?

3. Are the test methods and techniques defined, are they consistent with the strategy, and can they be implemented in the software environment?

4. Is each test activity traceable to a controlled set of requirements and/or design data?

5. Are the configuration management control and quality disciplines adequate to support the test strategies, methods, and techniques, and are they really in place?

6. Do I know when to quit?

THE TEN COMMANDMENTS OF TESTING

1. Black box tests that validate requirements must trace to approved specifications and be executed by an independent organization.

2. Test levels must be integrated into a consistent structure.

3. Don't skip levels to save time or resources: test less at each level.

4. All test products (configurations, tools, data, and so forth) need CM during tests.

5. Always test to procedures.

6. Change must be managed.

7. Testing must be scheduled, monitored, and controlled.

8. All test cases have to trace to something that's under CM.

9. You can't run out of resources.

10. Always specify and test to criteria.


RULE 1: Tests Must Be Planned, Scheduled, and Controlled if the Process Is to Be Effective.

Test planning is an essential management function that defines the strategies and methods to be used to integrate, test, and validate the software product.

Test planning defines how the process is managed, monitored, and controlled.

Planning of the software test environment is the bottom of a complex planning tree that structures and controls the flow of testing. The planning tree shows how products move from developer, through the software organization, to the system organizations, and finally to the acquirer and the operational test organizations.

Test planning is preparing for the future, while Test Plan implementation is making sure we have what we want when we get there.

The test planning process must be consistent from one level to the next. The Test Plan must be consistent with the CM Plan, which must be consistent with the standards, and so on.

The control discipline must be a complete representation of the Test Plan as adapted to the specific needs of the project to which it is applied.

THE TEST PLANNING TREE

Program Plan defines the program requirements, obligations, strategies, constraints, and delivery requirements and commitments. It identifies test requirements to be completed prior to delivery.

Test and Evaluation Management Plan usually traces to user specifications. It sets the entire testing strategy and practices.

System Engineering Management Plan (SEMP) identifies engineering plans and methods, engineering standards, management processes, and systems for integration and test requirements.

System Test Plan defines the system's testing plan, strategies, controls, and testing processes. Test case requirements are discussed, and criteria for success or failure are identified.

Software Development Plan describes what the software project must accomplish, how the software process and product standards must be met, how the process must be managed and controlled, and what the criteria are for overall integration, test, and completion.


Software Test Plan defines software integration and test plans, strategies, software test controls and processes, test case requirements, and test case completion criteria.

Software Test Description details the requirements for individual test cases. This description serves as the test case design requirement.

Software Test Procedure takes each test case and provides a step-by-step definition of test execution requirements traceable to a requirement specification.

Software Test Case Definition details the test case design and specifies criteria for successful and unsuccessful execution of a test step.

Software Test Scenario identifies specific test data, data sources, interface address or location, critical test timing, expected response, and the test step to take in response to each test condition.

Test Case Implementation is the actual test case ready for execution.

These plans may be rolled up into fewer individual documents, or expanded, based on project size, number of organizations participating, size of risk and contract, and customer and technical requirements of the test program.


    Figure 3. The Test Planning Tree


The test structure as documented in these plans must be consistent with the overall strategy used: grand design, incremental development, or evolutionary acquisition.

Test planning is the joint responsibility of the organization receiving the capability (either the higher life cycle or the user organization that will ultimately apply the product) and the engineering organization that will build or modify the product.

The test planning process starts with a definition by the ultimate user of the system of the level of risk that is acceptable when the product is delivered. The integration of the testing structure disciplines helps assure the acceptability of the product when it reaches the operational testing levels.

Key guidelines for planning a test environment are:

Improve the probability that the delivered products will meet the documented and perceived requirements, operational needs, and expectations of the user of the system.

Increase schedule and budget predictability by minimizing rework and redundant effort.

Maintain traceability and operational integrity of engineering products as they are produced and changed, and maintain the relationship between the formal documentation and the underlying engineering information.

Assure that software is not debugged during operational testing because of test shortcuts or inadequacies at lower test levels.


RULE 2: Execution Test Begins with White Box Test of Code Units and Progresses through a Series of Black Box Tests of a Progressively Integrated System in Accordance with a Predefined Integration Test Build Plan.

Software test flow must be documented in plans and procedures.

Traceability between test cases at each level must be maintained, allowing tests at one level to isolate problems identified at another.

The integrity of the planned white-box, black-box test sequence must not be violated to meet budget or schedule.

Software cannot be adequately tested if there is no formal, controlled traceability of requirements from the system level down to individual code units and test cases.

Test levels (see Figure 1) must be defined so that they support partitioning that is consistent with the project's incremental or evolutionary acquisition strategy.

In the event of a failure at any test level, it must be possible to back up a level to isolate the cause.

All test levels must trace to CM-controlled engineering information, as well as to baselined documentation that has been evaluated through a structured inspection.

A test level is not complete until all test cases are run and a predetermined quality target is reached.
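The traceability requirement above can be enforced mechanically. A hypothetical sketch (requirement and test case identifiers are invented): every test case must name a CM-controlled requirement, and any orphan test or untested requirement is flagged before the level is declared complete.

```python
# Hypothetical CM-controlled requirement IDs and the test cases that claim
# to trace to them.
requirements = {"SRS-101", "SRS-102", "SRS-103"}
test_traces = {
    "TC-01": "SRS-101",
    "TC-02": "SRS-102",
    "TC-03": "SRS-103",
}

def audit_traceability(reqs, traces):
    # Tests tracing to nothing under CM (a Commandment 8 violation).
    orphans = sorted(t for t, r in traces.items() if r not in reqs)
    # Requirements no test exercises (the level cannot be complete).
    untested = sorted(reqs - set(traces.values()))
    return orphans, untested

orphans, untested = audit_traceability(requirements, test_traces)
assert orphans == [] and untested == []
print("two-way traceability holds")
```

The same audit run in the other direction (requirements to design elements, design elements to code units) gives the full system-to-unit chain the rule demands.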


RULE 3: All Components of a Test Must Be under CM Control after They Have Been Verified by Structured Peer Reviews.

During test, CM must provide:

A way to identify the information approved for use in the project, the owners of the information, the way it was approved for CM control, and the most recent approved release

A means to assure that, before a change is made to any item under configuration control, the change has been approved by the CM change approval process, which should include a requirement that all impacts are known before a change is approved

An accessible and current record of the status of each controlled piece of information that is planned for use by test

A build of each release, from code units to integration test

A record of the exact content, status, and version of each item accepted by CM, and the version history of the item

A means to support testing of operational software versions by exactly reproducing software systems for tests with a known configuration and a controlled set of test results
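Exact reproduction of a tested configuration is typically done by recording a manifest of content hashes for every controlled item in a build, then verifying the manifest before a test session reruns. A minimal sketch (file names and contents are hypothetical):

```python
import hashlib

# Hypothetical controlled items of a build, mapped to their content.
build_items = {
    "nav_unit.c": b"int fix_position(void) { return 0; }\n",
    "nav_unit.h": b"int fix_position(void);\n",
}

def manifest(items):
    # Record the exact content of each controlled item as a SHA-256 digest.
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in items.items()}

baseline = manifest(build_items)

# Later: before a test session, verify the rebuilt configuration matches
# the configuration CM released, item for item.
rebuilt = manifest(build_items)
assert rebuilt == baseline, "configuration under test is not the released one"
print("configuration reproduced exactly")
```

Any item whose digest differs from the baseline identifies exactly which controlled piece of the configuration changed, which is the record CM needs to tie test results to a known build.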

    CM IMPLEMENTATION RULES DURING TESTING

CM should be an independent, centralized activity, not buried within an engineering or assurance organization.

CM can never be allowed to become a bottleneck in the test schedule; if it is, the process will not be followed.

CM can never be eliminated for any reason, including the need to meet budget or schedule.

CM change control must receive reports of all suspected and confirmed problems during testing, and all problems must be fixed only in accordance with a change control process managed by CM.

CM must assure that changes to configuration-controlled items that do not follow the CM process cannot be made to systems under test, including patches to binary code.

CM must not accept changes made to software until all related documentation and other configuration-controlled items affected by the change have also been updated.

CM must have control of and responsibility for all test builds from Level 2 and above.


CM must ensure that all releases of integrated software from CM during test are accompanied by a current, complete, and accurate Software Version Description (SVD).

CM must verify that all new releases of test tools, including compilers, are regression-tested prior to release for use during testing.

CM must process every suspected or observed problem, no matter how small.

CM must ensure that all tests trace to controlled information that has been approved through a binary quality gate.

CM staff during integration and test must have the technical skills necessary to generate each integration build delivered from CM for integration test.

RULE 4: Testing Is Designed to Prove a System Doesn't Work, Not to Prove It Does.

A large, integrated software system cannot be exhaustively tested to execute each path over the full range of path-initiating conditions. Therefore, testing must consist of an integrated set of test levels, as described in Figure 1. These testing levels maximize the coverage of testing against the cost across all design and requirements.

Criteria must be used to evaluate all test activities.

All tests executed that trace to a requirement must be executed in three ways:

Using a nominal data load and valid information

Using excessive data input rates in a controlled environment

Using a preplanned combination of nominal and anomalous data
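The three executions above can be sketched as a small driver that runs the same requirement-traced test under a nominal load, an excessive input burst, and a preplanned nominal/anomalous mix (the processing function and load profiles are hypothetical):

```python
# Hypothetical unit: accept integer messages, reject malformed (negative) ones.
def process(messages):
    accepted = [m for m in messages if m >= 0]
    rejected = len(messages) - len(accepted)
    return len(accepted), rejected

# 1. Nominal data load with valid information.
ok, bad = process(list(range(10)))
assert (ok, bad) == (10, 0)

# 2. Excessive data input rate in a controlled environment (large burst).
ok, bad = process([1] * 100_000)
assert (ok, bad) == (100_000, 0)

# 3. Preplanned combination of nominal and anomalous data.
ok, bad = process([1, -2, 3, -4, 5])
assert (ok, bad) == (3, 2)

print("requirement exercised under all three load profiles")
```

In a real stress run the burst size would be increased step by step until behavior degrades, with each step's result recorded so the breaking point is documented rather than discovered in operation.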

The ideal test environment is capable of driving a system to destruction in a controlled fashion. For example, the data rates and data combinations must be varied until the system no longer executes in an acceptable manner. The point at which system support becomes unacceptable must be identified and documented.

Tests must be run at all levels to test not only nominal conditions but also all potential test failure conditions documented in controlled specifications. This is critical even though the test environment may be difficult to establish.

No observed testing problem, however small, must ever be ignored or closed without resolution.

All test case definitions must include success and failure criteria.

Tests must be made against scenarios (timed sequences of events) designed to cover a broad range of real-world operational possibilities as well as against static system requirements.

System test must be made on the operational hardware/operating system (OS) configuration.

RULE 5: All Software Must Be Tested Using Frequent Builds of Controlled Software, with Each Build Documented in SVDs and Change Histories Documented in Header Comments for Each Code Unit.

Build selection and definition must be based on availability of key test resources, software unit build and test schedules, the need to integrate related architectural and data components into a single test configuration, availability of required testing environments and tools, and the requirement to verify related areas of functionality and design.

Builds must be ordered to minimize the need to develop throw-away, test-driver software.

Builds must not be driven by a need to demonstrate a capability by a certain date (a form of political build scheduling).

Build definition must consider technical, schedule, and resource factors prior to the scheduling of a test session.

Test cases used during frequent builds must be planned (structured in a Test Plan, documented in a procedure), traceable to controlled requirements or design, and executed using controlled configurations, just as with builds that are not frequent.

Test support functions, such as CM integration build and change approvals, must be staffed to a sufficient level and must follow processes so that they do not delay frequent integration builds and tests.

The build planning must be dynamic and capable of being quickly redefined to address testing and product realities. For example, a key piece of software may not be delivered as scheduled; a planned external interface may not be available as planned; a hardware failure may occur in a network server.

Frequent builds find integration problems early, but they require substantial automation in CM, integration, and regression test.
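The automation this rule calls for can be as simple as a loop that, for each CM-released build, regenerates the configuration, reruns the regression suite, and appends an entry to the build's version description record. A schematic sketch (build contents and the suite are hypothetical):

```python
# Schematic frequent-build loop: each CM release is rebuilt, regression-tested,
# and logged to a Software Version Description (SVD) record.
def regression_suite(build):
    # Hypothetical suite: every unit in the build must report healthy.
    return all(unit["healthy"] for unit in build["units"])

def integrate(release_id, units, svd_log):
    build = {"release": release_id, "units": units}
    passed = regression_suite(build)
    svd_log.append({"release": release_id, "regression_passed": passed})
    return passed

svd_log = []
assert integrate("1.0.1", [{"name": "nav", "healthy": True}], svd_log)
assert integrate("1.0.2", [{"name": "nav", "healthy": True},
                           {"name": "comms", "healthy": True}], svd_log)
assert [e["release"] for e in svd_log] == ["1.0.1", "1.0.2"]
print("two frequent builds integrated and logged")
```

The point of the log is that every build, even a failed one, leaves a record tying the release identifier to its regression outcome, which is the raw material of the SVD the rule requires.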

RULE 6: Test Tools Must Only Be Built or Acquired If They Provide a Capability Not Available Elsewhere, or If They Provide Control, Reproducibility, or Efficiency That Cannot Be Achieved through Operational or Live Test Sources.

Test tools must be used prior to the point at which the system is considered stable and executing in a manner consistent with specifications.

This requirement reflects the need to control test conditions and to provide a controlled external environment in which predictability and reproducibility can be achieved.

Some test functions are best provided by automated tools and can substantially lower the cost of regression test during maintenance.

For example, automating key build inputs reduces operator requirements and the cost of providing them during test sessions.


CLASSES OF TEST TOOLS THAT EVERY PROGRAM SHOULD CONSIDER

Simulation (when test is done outside the operational environment) drives the system from outside using controlled information and environments. For example, it can be used to exchange messages with the system under test, or to drive a client/server application as though there were many simultaneous users on many different client workstations.

Emulation (when hardware interfaces are not available) is software built to perform exactly as missing components in system configurations, such as an external interface.

Instrumentation consists of software tools that automatically insert code into software, and then monitor specific characteristics of the software execution, such as test coverage.

A Test Case Generator identifies test cases to test requirements and to achieve white box test coverage.

A Test Data Generator defines test information automatically, based on the test case or scenario definition.

Test Record and Playback records keystrokes and mouse commands from the human tester, and saves corresponding software output for later playback and comparison with saved output in regression test.

Test Analysis evaluates test results, reduces data, and generates test reports.
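Record and playback is the easiest of these classes to see in miniature: record the outputs produced by a scripted input sequence once, then replay the same inputs later and compare the outputs against the recording. A hypothetical sketch (the system under test and script are invented):

```python
# Hypothetical system under test: echoes commands with a status prefix.
def system_under_test(command):
    return "OK:" + command.upper()

script = ["login", "query", "logout"]

# Record pass: capture the output for each scripted input.
recording = [(cmd, system_under_test(cmd)) for cmd in script]

# Playback pass (regression test): rerun the inputs, compare the outputs.
def playback(recording):
    mismatches = [(cmd, saved, system_under_test(cmd))
                  for cmd, saved in recording
                  if system_under_test(cmd) != saved]
    return mismatches

assert playback(recording) == []
print("playback matches recording")
```

Any entry returned by playback() pinpoints the input whose behavior drifted since the recording was made, which is exactly the comparison a record-and-playback tool automates at the keystroke level.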

RULES FOR TEST TOOLS

Test tools must only be acquired and used if they directly trace to a need in the Test Plan and/or procedures, or if they will significantly reduce the cost of regression test during the maintenance phase.

All tools used, and the data they act upon, must be under CM control.

Test tools must not be developed if there is a quality COTS test tool that meets needs.

Plan test tool development, acquisition, and deployment for completion well in advance of any need for tool use.

Test tools that certify critical components must be certified using predefined criteria before being used.

Always try to simulate all external data inputs, including operator inputs, to reduce test cost and increase test reproducibility.


    QUESTIONS TO ASKWHEN DEFINING A TESTING

    ENVIRONMENT AND SELECTING TEST METHODS ORTOOLS

    1. Have the proposed testing techniques and toolsbeen proven in applications of similar size and

    complexity,with similar environments andcharacteristics?

    2. Does the test environment include the hardware/OS configuration on which the software will be deployed, and will the test tools execute on this operational hardware/OS?

    3. Do the proposed test methods and tools give the customer the visibility into software quality that is required by the contract?

    4. Is there two-way traceability from system requirements through design, to code units and test cases?

    5. Have the organizational standard test processes, methods, and use of test tools been tailored to the characteristics of the project?
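Question 4's two-way traceability can be checked mechanically. A minimal sketch, with invented requirement and test-case IDs: forward gaps are requirements no test case traces to, and backward gaps are test cases that trace to nothing.

```python
# Hypothetical traceability data: requirement IDs mapped to by test cases.
requirements = {"SRS-001", "SRS-002", "SRS-003", "SRS-004"}
test_cases = {
    "TC-10": {"SRS-001"},
    "TC-11": {"SRS-002", "SRS-003"},
    "TC-12": set(),            # a test case tied to no requirement
}

# Forward gap: requirements with no test coverage at all.
covered = set().union(*test_cases.values())
untested = requirements - covered

# Backward gap: test cases that trace to no requirement.
orphans = {tc for tc, reqs in test_cases.items() if not reqs}

print("Requirements with no test:", sorted(untested))  # ['SRS-004']
print("Tests tracing to nothing:", sorted(orphans))    # ['TC-12']
```

Run against the real traceability matrix, both lists should be empty before testing is declared complete.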

    RULE 7: When Software Is Developed by Multiple Organizations, One Organization Must Be Responsible for All Integration Test of the Software Following Successful White Box Testing by the Developing Organization.

    Development tests (those that test units and CSCs prior to integration) must be the responsibility of the developers.

    When development tests are complete, inspections must be conducted to ensure the process was followed; the requirements of the Test Plan were successfully completed; the software is documented adequately; the source code matches the documentation; and the software component is adequate to support subsequent test activities.

    All software builds that support test activities subsequent to development testing must be built under control of CM.

    Testing subsequent to development testing must be executed by an independent testing organization. After development testing ends, any changes to controlled configurations must be evaluated and approved prior to incorporation in the test configuration.


    RULES FOR ORGANIZING TEST ACTIVITIES

    A testing organization must be an integral part of the program and software project plans from the beginning, with staffing and resources allocated to initiate early work.

    While some testing will be done internally by the engineering organizations, the white box and black box test activities documented in Test Plans and procedures must be executed by an independent test organization.

    Test organizations must be motivated to find problems rather than to prove the products under test work.

    Test organizations must have adequate guaranteed budget and schedule to accomplish all test activities without any unplanned shortcuts.

    A tester must never make on-the-fly changes to any test configuration. A tester's role is to find and report, not to fix. During testing, the role of the tester is to execute the test case. All fixes should be made by an organization responsible for maintaining the software, after processing in accordance with the change control process of the program.

    Test organizations may delegate tasks such as test planning, test development, tool development, or tool acquisition to engineering, but responsibility, control, and accountability must reside with the test organization.

    The relationship between testing and other program and software organizations must be clear, agreed to, documented, and enabled through CM.


    RULE 8: Software Testing Is Not Finished until the System Has Been Stressed above Its Rated Capacity and Critical Factors (Closure of Safety Hazards, Security Controls, Operational Factors) Have Been Demonstrated.

    Software functional testing and system testing must have a step that demonstrates anticipated realistic, not artificial, system environments. In this step, the system must be loaded to the maximum possible level as constrained by external data services. Also, potential failure conditions must be exercised in stressed environments.

    All safety hazard control features and security control features must be exercised in nominal and stressed environments to ensure that the controls they provide remain when the system is overloaded.

    The test data rates must be increased until the system is no longer operationally valid.

    Test scenarios that are used in stress testing must define nominal system conditions. The specifics of the scenarios must then be varied, increasing in an orderly manner operator positions, database loads, external inputs, external outputs, and any other factors that would affect loading and, indirectly, response time.

    During execution of these tests, the available bandwidth must be flooded with low-priority traffic to ascertain if this traffic precludes high-priority inputs.
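The intent of the bandwidth-flooding check can be illustrated with a priority queue: drown a handful of high-priority inputs in low-priority traffic and confirm they are still served first. This is a toy model of the real test, not a network driver; all names and sizes are invented.

```python
import heapq

def run_with_flood(n_low, n_high):
    """Queue a flood of low-priority traffic plus a few high-priority
    inputs and return messages in the order the system serves them."""
    q = []
    for i in range(n_low):
        heapq.heappush(q, (2, f"low-{i}"))     # priority 2 = low
    for i in range(n_high):
        heapq.heappush(q, (1, f"high-{i}"))    # priority 1 = high
    return [heapq.heappop(q)[1] for _ in range(len(q))]

served = run_with_flood(n_low=10_000, n_high=5)
# All high-priority inputs must come out before any low-priority traffic.
print(all(m.startswith("high") for m in served[:5]))  # True
```

In the real stress test, the failure mode to look for is high-priority inputs being dropped or delayed past their deadline once the flood begins.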

    Specific tests must exist to assure the capability of a fully loaded or overloaded system to gracefully degrade without the failure of some hardware component. Specific shutdown options of the system must be explicitly exercised in a stressed environment to assure their viability.

    On certain systems that have potential external input sources, the ability of an unauthorized user to gain access in periods of system stress must be tested. These penetration tests must be consistent with the security policy of the program.

    Satisfaction of predefined quality goals must be the basis for completion of all test activities, including individual test step execution, cases described through a test procedure, test level completion, and completion and satisfaction of a Test Plan. These quality goals must include stress and critical factor components.

    All software that impacts safety, or software that is part of the Trusted Computing Base segment of a secure application, must be tested to assure that security and hazard controls perform as required. All testing of critical components must be executed in nominal and stressed environments to assure that controls remain in place in periods of system stress.

    Free play testing must only be used when the design has been verified through Level 2 or 4 testing or when requirements have been validated through Level 3 or 6 testing. Free play tests of unstable design or nonvalidated requirements are not always productive.

    Black box tests of requirements (Levels 3, 5, 6, 7, or 8) must be defined, or at least validated, by someone with operational or domain experience so that the tests reflect the actual use requirements of the capability. Traceability between test cases and requirements must be maintained.

    All test procedures and cases must be defined to identify what the software does, what it doesn't do, and what it can't do.

    All critical tests, especially those that relate to critical operational safety or security requirements, must be executed at 50 percent of the rated system capacity, 100 percent of the rated capacity, and 150 percent of the rated capacity.
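One way to drive such a load ramp, sketched with invented numbers: offer load at 50, 100, and 150 percent of a hypothetical rated capacity and count failures at each level. The `handler` here is a stand-in that drops traffic above 120 percent of rating, purely to make the ramp's purpose visible.

```python
RATED_CAPACITY = 200  # hypothetical rating: messages per second

def handler(msg_id, offered_load):
    # Stand-in component: drops messages once offered load exceeds
    # 120% of rated capacity (invented behavior, for illustration).
    return offered_load <= RATED_CAPACITY * 1.2

def run_at_load(offered_load, handler, n_msgs=1000):
    """Drive the handler with n_msgs messages at the given offered load
    and return how many were not handled successfully."""
    return sum(1 for i in range(n_msgs) if not handler(i, offered_load))

for pct in (0.5, 1.0, 1.5):   # 50%, 100%, 150% of rated capacity
    failures = run_at_load(RATED_CAPACITY * pct, handler)
    print(f"{int(pct * 100)}% load: {failures} of 1000 failed")
```

The point of the 150 percent run is precisely to find where behavior like this begins, and to confirm that safety and security controls survive past that point.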

    Documentation of failures must include sufficient information to evaluate and correct the problem. Knowledge of the stress load the system was under can be used to identify points at which operational risk is excessive due to system loading.


    RULE 9: A Robust Testing Program Reduces Program Acquisition Risks.

    System acquisition is becoming increasingly complex. More comprehensive testing strategies that can identify defects earlier in the development process are required.

    ACTIVITIES THAT CAN SIGNIFICANTLY REDUCE PROGRAM RISKS WHEN PROPERLY TIED TO THE TEST PLAN:

    Ensure that the Program Plan addresses the testing strategy for the entire acquisition life cycle. It must consider schedule, cost, and performance objectives.

    Ensure that requirements traceability and the test strategy build confidence in the program development process during each phase.

    Develop an exhaustive Test Plan that retains requirements traceability from requirements documentation through product delivery. The Test Plan should define a testing strategy from cradle to grave while bolstering credibility of program performance capabilities. This Plan is an essential element of any software development project.

    Establish a Configuration Management Plan. The CM Plan is key to a successful testing strategy. It helps to ensure that all requirements are incorporated, tracked, tested, and verified at each development stage.

    Establish a Quality Assurance Program. The QA Program verifies that requirements and configuration management processes support testing plan objectives, and that system architecture and engineering requirements are being achieved.

    Ensure visibility and monitoring of these independent but highly integrated activities to judiciously reduce programmatic risks.

    ISSUES AND CONCERNS THAT MUST BE RAISED BY PROGRAM MANAGERS, RISK OFFICERS, AND TEST PERSONNEL:

    Has a CM Program been established to track documentation and data that is shared across the multilevel organization?

    Have requirements documents been incorporated into the CM Program?

    Has a requirements traceability matrix been developed?

    Has a testing strategy been developed that defines test processes from program conception to system delivery?


    Does the testing strategy incorporate the use of commercial best practices into the Test Plan?

    Does the Test Plan define quality gates for each development step that requires either formal or informal testing?

    Does the Test Plan identify testability criteria for each requirement?

    Are processes in place to monitor and control requirements changes and incorporation of those changes into the Test Plan?

    Are all activities integrated throughout the development process to facilitate continuous feedback of testing results and consideration of those results in determining program status?

    How does the consumption of program reserves during testing factor into program schedule and cost updates? How is the consumption of program reserves during testing recognized?

    Has Quality Assurance verified that all development processes and procedures comply with required activities?


    RULE 10: All New Releases Used in a Test Configuration Must Be Regression-Tested Prior to Use.

    Regression testing is the revalidation of previously proven capabilities to assure their continued integrity when other components have been modified.

    When a group of tests is run, records must be kept to show which software has been executed. These records provide the basis for scheduling regression tests to revalidate capabilities prior to continuing new testing.

    Every failure in a regression test must be addressed and resolved prior to the execution of the new test period.
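The record-keeping described above can be as simple as a log of (build, test, result) entries; the regression backlog for a build is then every test never revalidated against it. A sketch with invented build numbers and test IDs:

```python
# Hypothetical run log: which tests were executed against which build.
run_log = [
    {"build": "1.3.0", "test": "TC-10", "result": "pass"},
    {"build": "1.3.0", "test": "TC-11", "result": "pass"},
    {"build": "1.3.1", "test": "TC-11", "result": "pass"},
]
all_tests = {"TC-10", "TC-11", "TC-12"}

def regression_backlog(build):
    """Tests never (re)validated against this build; they must be rerun
    before new testing continues on it."""
    ran = {r["test"] for r in run_log if r["build"] == build}
    return sorted(all_tests - ran)

print(regression_backlog("1.3.1"))   # ['TC-10', 'TC-12']
```

The same log supports the earlier rules: it shows which configuration each result applies to, and makes unexecuted tests visible before a new test period begins.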

    Regression test environments must duplicate to the maximum extent possible the environment used during the original test execution period.

    Regression tests must be a mix of white box and black box tests to verify known design areas as well as to validate proven capability.

    All regression tests must use components that are controlled by configuration management.


    Regression testing must be a normal part of all testing activities. It must be scheduled, planned, budgeted for, and generally supported by the testing organization. "Let's just run it to see if it works" is not a valid regression-testing strategy.

    SUMMARY

    Testing is no longer considered a stand-alone, end-of-the-process evolution to be completed simply as an acquisition milestone. Rather, it has become a highly integral process that complements and supports other program activities while offering a means to significantly reduce programmatic risks. Early defect identification is possible through comprehensive testing and monitoring. Effective solutions and mitigation strategies emerge from proactive program management practices once risks have been identified.

    Successful programs have demonstrated that integrated testing activities are key to program success with minimal associated risks.


    INDEX

    A

    Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

    B

    Black box test . . . . . . . 4-7, 8, 10-11, 15, 22, 36, 40, 45

    C

    Configuration management . . . 5, 7, 14, 15, 16, 23, 24-26, 30, 33, 35, 37, 42-43, 45

    Critical test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

    D

    Debugging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4,21

    Development test . . . . . . . . . . . . . . . . . . . . . . . . . 35

    I

    Inspection. . . . . . . . . . . . . . . . . . . . . . . . . . . 8-10,35

    P

    Program Plan . . . . . . . . . . . . . . . . . . . . . . . 17,18,42

    Q

    Quality Assurance . . . . . . . . . . . . . . . . . . . . . . . 43-44

    R

    Regression testing . . . . . . . . . . . . . . . . . . . . 31,45-46

    Rework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9,21

    Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-13,42,43

    Risk management . . . . . . . . . . . . . . . . . . . . . . . 12-13

    S

    Software Design Document. . . . . . . . . . . . . . . . . 8-10

    Software Design File . . . . . . . . . . . . . . . . . . . . . . 8-10

    Software Development Plan . . . . . . . . . . . . . . . 17,18

    Software Requirements Specification . . . . . . . . . 8-10
    Software Test Case Definition . . . . . . . . . . . . 18,19

    Software Test Case Implementation . . . . . . . . . 18,19

    Software Test Description. . . . . . . . . . . . . . . . . 18,19

    Software Test Procedure. . . . . . . . . . . . . . . . . . 18,19

    Software Test Scenario . . . . . . . . . . . . . . . . 18,19,38

    Software Version Description . . . . . . . . . . . . . . 26,29

    Stress testing. . . . . . . . . . . . . . . . . . . . . . . . . . . 38-41

    System Engineering Management Plan . . . . . . . 17,18

    System Segment Specification . . . . . . . . . . . . . 8-9,11

    System Subsystem Design Document . . . . . . . . . 8-10


    T

    Test and Evaluation Management Plan . . . . . . . 17,18

    Test build. . . . . . . . . . . . . . . . . . . . . . . . 22,26,29-30

    Test environment. . . . . . . . . . . . . . . . . . 16,27-28,34

    Test organization. . . . . . . . . . . . . . . . . . . . . . . . 36-37

    Test Plan. . . . . . . . 16,17,18,19,29,33,35,39,42,44

    Test planning. . . . . . . . . . . . . . . . . . . . . 14,16-21,37

    Test planning tree. . . . . . . . . . . . . . . . . . . . . . . 16-19

    Test support functions. . . . . . . . . . . . . . . . . . . . . . 30

    Test tools. . . . . . . . . . . . . . . . . . . . . . . . . . . 26,31-34

    Testing definition. . . . . . . . . . . . . . . . . . . . . . . . . 4,8

    Testing levels. . . . . . . . . . . . . . 8-11,15,25,27,28,40

    Testing process. . . . . . . . . . . . . . . . . . . . . . . . . . . 8-9

    W

    White box test. . . . . . . . 4-5,7,8,10,22,32,35,36,45

    THE AIRLIE SOFTWARE COUNCIL

    Victor Basili University of Maryland

    Grady Booch Rational
    Norm Brown Software Program Managers Network

    Peter Chen Chen & Associates, Inc.

    Christine Davis Texas Instruments

    Tom DeMarco The Atlantic Systems Guild

    Mike Dyer Lockheed Martin Corporation

    Mike Evans Integrated Computer Engineering, Inc.

    Bill Hetzel Qware

    Capers Jones Software Productivity Research, Inc.

    Tim Lister The Atlantic Systems Guild

    John Manzo 3Com

    Lou Mazzucchelli Gerard Klauer Mattison & Co., Inc.

    Tom McCabe McCabe & Associates

    Frank McGrath Software Focus, Inc.

    Roger Pressman R.S. Pressman & Associates, Inc.

    Larry Putnam Quantitative Software Management

    Howard Rubin Hunter College, CUNY

    Ed Yourdon American Programmer