
Software Testing Workflow




Unified Software Development Process (Implementation and Test)

Core Workflow – Implementation
  Introduction
  Concepts
  Workflow Detail – Structure the implementation model
    Activity – Structure the implementation model
  Workflow Detail – Plan the Integration
    Activity – Plan System Integration
  Workflow Details – Implement a Component
    Activity – Implement Component
    Activity – Fix a Defect
    Activity – Perform Unit Tests
    Activity – Review Code
  Workflow Details – Integrate Each Subsystem
    Activity – Integrate Subsystem
  Workflow Detail – Integrate the System

Core Workflow – Test
  Introduction
  Concepts Related to the Test Workflow
  Concepts – Quality
    Product Quality
    Process Quality
    Measuring Quality
    Measuring Product Quality
  Concepts – Quality Dimensions
  Concepts – The Life Cycle of Testing
  Concepts – Key Measures of Test
    Coverage Measures
    Requirements-Based Test Coverage
    Code-Based Test Coverage
    Quality Measures
  Concepts – Types of Tests
  Concepts – Stages in Test
    Unit Test
    Integration Test
    System Test
    Acceptance Test
  Concepts – Performance Test
  Concepts – Structure Test
  Concepts – Acceptance Test
    Formal Acceptance Testing
    Informal Acceptance Testing
    Beta Testing
  Concepts – Test Automation and Tools
    Function
    White-box versus Black-box
  Workflow Detail – Plan Test
    Activity – Plan Test
    How to Use Test Manager in Activity – Plan Test
  Workflow Detail – Design Test
    Activity – Design Test
    Identify and Structure Test Procedures
    Review and Assess Test Coverage
  Workflow Detail – Implement Test
    Activity – Implement Test
    Activity – Design Test Packages and Classes
    Activity – Implement Test Components and Subsystems
  Workflow Detail – Execute Test in Integration Test Stage
    Activity – Execute Test
  Workflow Detail – Execute Test in System Test Stage


Core Workflow – Implementation

Introduction

The purposes of implementation are:

o To plan the system integrations required in each iteration. The approach is incremental, which results in a system that is implemented as a succession of small and manageable steps.

o To define the organization of the code, in terms of implementation subsystems organized in layers.

o To implement classes and objects in terms of components (source files, binaries, executables, and others).

o To test the developed components as units.

o To integrate the results produced by individual implementers (or teams) into an executable system.

Implementation is the focus during the construction iterations. Implementation is also done during elaboration, to create the executable architectural baseline, and during transition, to handle late defects such as those found when beta releasing the system.

The implementation model must be maintained throughout the software life cycle.

Concepts

Build

A build is an operational version of a system, or part of a system, that demonstrates a subset of the capabilities provided in the final product.

Software Integration

The term "integration" refers to a software development activity in which separate software components are combined into a whole. Integration is done at several levels and stages of the implementation:

o Integrating the work of a team working in the same implementation subsystem, before releasing the subsystem to the system integrators.

o Integrating subsystems into a complete system.

The Unified Process approach is to integrate the software incrementally. Incremental integration means that code is written and tested in small pieces and combined into a working whole by adding one piece at a time. It is important to understand that integration occurs (at least once) within each and every iteration. An iteration plan defines which use cases to design, and thus which classes to implement. The focus of the integration strategy is to determine the order in which classes are implemented and combined.

Stubs

A stub is a component (or complete implementation subsystem) that contains functionality used only for testing purposes. When you use an incremental integration strategy, you select a set of components to be integrated into a build. These components may need other components in order to compile the source code and execute the tests. This is specifically needed in integration test, where you need to build up test-specific functionality that can act as stubs for things not included or not yet implemented. There are two styles of stub:

o Stubs that are simply "dummies" with no other functionality than being able to return a pre-defined value.

o Stubs that are more intelligent and can simulate more complex behavior.

The second style should be used with discretion, because it takes more resources to implement, so you need to be sure it adds value. You may also end up in situations where the stubs themselves need to be carefully tested, which is very time consuming.
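As a minimal sketch of the two styles (the PaymentGateway dependency and its methods are hypothetical, not something defined by the Unified Process):

```python
# Illustrative sketch of the two stub styles described above.
# "PaymentGateway" is a hypothetical dependency of the component under test.

class DummyPaymentGatewayStub:
    """Style 1: a "dummy" stub that only returns a pre-defined value."""

    def authorize(self, amount):
        return True  # always succeeds, regardless of input


class SimulatingPaymentGatewayStub:
    """Style 2: a more intelligent stub that simulates more complex behavior.

    Use with discretion: it costs more to implement and may itself need testing.
    """

    def __init__(self, credit_limit=1000):
        self.credit_limit = credit_limit
        self.authorized_total = 0

    def authorize(self, amount):
        # Simulate a declined authorization once the credit limit is exceeded.
        if self.authorized_total + amount > self.credit_limit:
            return False
        self.authorized_total += amount
        return True
```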


Workflow Detail – Structure the implementation model

Activity: Structure the implementation model
Inputs from: Design Model, Software Architecture Document
Resulting artifacts: Implementation Model

Activity - Structure the implementation model

Purpose:

o To establish the structure in which the implementation will reside.
o To assign responsibilities for Implementation Subsystems and their contents.

Design Packages will have corresponding Implementation Subsystems, which will contain one or more components and all related files needed to implement the component. The mapping from the Design Model to the Implementation Model may change as each Implementation Subsystem is allocated to a specific layer in the architecture. Note that both classes and possibly design subsystems in the Design Model are mapped to components in the Implementation Model, although not necessarily one-to-one.

Workflow Detail – Plan the Integration

Activity: Plan System Integration
Inputs from: Design Model, Use Case Realization
Resulting artifacts: Integration Build Plan

Activity – Plan system Integration

Purpose

To plan the system integration

Steps:

o Identify subsystems
o Define build sets
o Define the series of builds
o Evaluate the Integration Build Plan
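For illustration only (this is not an artifact format prescribed by the Unified Process, and the subsystem, build, and test names are invented), the result of these steps might be captured as a simple data structure:

```python
# Hypothetical integration build plan: subsystems grouped into build sets,
# a planned series of builds, and the tests planned for each build.
integration_build_plan = {
    "iteration": "C2",
    "build_sets": {
        "core": ["base_services", "persistence"],
        "application": ["order_management", "reporting"],
        "presentation": ["web_ui"],
    },
    "builds": [
        {"name": "build_1", "includes": ["core"],
         "tests": ["minimal_integration"]},
        {"name": "build_2", "includes": ["core", "application"],
         "tests": ["integration_suite"]},
        {"name": "build_3", "includes": ["core", "application", "presentation"],
         "tests": ["integration_suite", "iteration_test_plan"]},
    ],
}

# A first evaluation of the plan: check that every subsystem is integrated at least once.
planned = {s for subsystems in integration_build_plan["build_sets"].values() for s in subsystems}
built = {s for build in integration_build_plan["builds"]
         for set_name in build["includes"]
         for s in integration_build_plan["build_sets"][set_name]}
assert planned == built, "some subsystems are never integrated"
```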

Workflow Details – Implement a Component

Activity: Implement Component
Inputs from: Design Model, Test Cases, Test Procedures, Workload Analysis Document
Resulting artifacts: Component

Activity: Fix a Defect
Inputs from: Change Request
Resulting artifacts: Component

Activity: Perform Unit Test
Inputs from: Component
Resulting artifacts: Component

Activity: Review Code
Inputs from: Component
Resulting artifacts: Review Record

Activity – Implement Component

Activity – Fix a Defect

Steps

Stabilize the Defect

The first step is to stabilize the defect (i.e., the symptom) so that it occurs reliably. If you can't make the defect occur reliably, it will be almost impossible to locate the fault.

Then try to narrow down the test case by identifying which of the factors in the test case make the defect occur, and which factors are irrelevant for the defect. To find out if a factor is irrelevant, execute the test case and change the factor in question. If the defect still occurs, this factor can probably be eliminated.

If successful, you should finish with at least one test case that causes the defect to occur, and also some idea of what factors are related to the occurrence of the defect.

Locate the Fault

The next step is to execute the test cases that cause the defect to occur and try to identify where in the code the source of the fault is. Examples of ways to locate a fault are:

o Narrow down the suspicious region of the code: test a smaller piece of the code; remove a piece of the code and rerun the test cases. Continue to remove pieces of code as long as the defect still occurs. Eventually you will have identified where the fault can be found.

o Use a debugger to step through the code line by line and monitor the values of interesting variables.

o Let someone else review the code.

Fix the Fault

When the fault has been located, it is time to fix it. This should be the easy part. Here are some guidelines to keep in mind:

o Make sure you understand the problem and the program before you make the fix.

o Fix the problem, not the symptom; the focus should be on fixing the underlying problem in the code.

o Make one change at a time. Because fixing faults is itself an error-prone activity, it is important to implement fixes incrementally, to make it easy to locate where any new faults were introduced.

o When the fault has been fixed, add a special test case that verifies this particular fault.
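As an illustrative sketch of such a fault-specific test (the function and defect number are hypothetical), written so that it remains in the regression suite:

```python
import unittest

# Hypothetical example: defect #1042 was a crash when computing the average
# of an empty list. After fixing the fault, a dedicated test case is added
# so the defect cannot silently reappear in a later build.

def average(values):
    if not values:  # the fix: guard against empty input
        return 0.0
    return sum(values) / len(values)


class TestDefect1042(unittest.TestCase):
    def test_average_of_empty_list_does_not_crash(self):
        self.assertEqual(average([]), 0.0)

    def test_average_still_correct_for_normal_input(self):
        self.assertAlmostEqual(average([1, 2, 3]), 2.0)


if __name__ == "__main__":
    unittest.main()
```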

Activity – Perform Unit Tests

Purpose:

o To verify the specification of a unit.
o To verify the internal structure of a unit.

Execute Unit Test

To execute unit test, the following steps should be followed:

o Set up the test environment to ensure that all the needed elements (hardware, software, tools, data, etc.) have been implemented and are in place in the test environment.

o Initialize the test environment to ensure all components are in the correct initial state for the start of testing.

o Execute the test procedures.

Note: executing the test procedures will vary depending upon whether testing is automated or manual, and whether test components are needed as drivers or stubs.

o Automated testing: the test scripts created during the Implement Test activity are executed.

o Manual execution: the structured test procedures developed during the Structure Test Procedures activity are used to execute the test manually.
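A minimal sketch of these steps using Python's unittest (the component, schema, and the use of an in-memory SQLite database as a stand-in test environment are illustrative assumptions):

```python
import sqlite3
import unittest

# Illustrative only: set-up, initialization, and execution of a unit test.
# An in-memory SQLite database stands in for the test environment.

class OrderRepositoryTest(unittest.TestCase):
    def setUp(self):
        # Set up and initialize the test environment: every test starts
        # from the same known initial state.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

    def tearDown(self):
        self.db.close()

    def test_insert_and_read_back_order(self):
        # Execute the test procedure and verify the result.
        self.db.execute("INSERT INTO orders (id, total) VALUES (1, 99.5)")
        row = self.db.execute("SELECT total FROM orders WHERE id = 1").fetchone()
        self.assertEqual(row[0], 99.5)


if __name__ == "__main__":
    unittest.main()
```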

Evaluate Execution of test

The execution of testing ends or terminates in one of two conditions:

o Normal: all the test procedures (or scripts) executed as intended. If testing terminates normally, continue with Verify Test Results below.

o Abnormal or premature: the test procedures (or scripts) did not execute completely or as intended. When testing ends abnormally, the test results may be unreliable. The cause of termination needs to be identified, corrected, and the tests re-executed before additional test activities are performed.


If testing terminates abnormally, continue with Recover from Halted Tests, below.

Verify Test Results

Upon the completion of testing, the test results should be reviewed to ensure that the test results are reliable and reported failures, warnings, or unexpected results were not caused by external influences (to the target-of-test), such as improper set-up or data.  

If the reported failures are due to errors identified in the test artifacts, or due to problems with the test environment, the appropriate corrective action should be taken and the testing re-executed.  For additional information, see "Recover From Halted Tests" below.

If the test results indicate the failures are genuinely due to the target-of-test, then the Execute Test Activity is complete and the next activity is to Evaluate Test.

Recover from halted test

There are two major types of halted tests:

o Fatal errors: the system fails (network failures, hardware crashes, etc.).

o Test script command failures: specific to automated testing, this is when a test script cannot execute a command (or line of code).

Both types of abnormal termination to testing may exhibit the same symptoms:

o Unexpected actions, windows, or events occur while the test script is executing.

o The test environment appears unresponsive or in an undesirable state (such as hung or crashed).

To recover from halted tests, do the following:

o Determine the actual cause of the problem
o Correct the problem
o Set up the test environment again
o Re-initialize the test environment
o Re-execute the tests

Activity – Review Code


Workflow details – Integrate Each Subsystem

Activity: Integrate Subsystem
Inputs from: Components (from implementer X), Components (from implementer Y)
Resulting artifacts: Build, Implementation Subsystem

Activity – Integrate Subsystem

Purpose

To integrate the components in an implementation subsystem, then deliver the implementation subsystem for system integration.

Explanation

Subsystem integration proceeds according to the Artifact: Integration Build Plan, in which the order of component and subsystem integration has been planned.

It is recommended that you integrate the implemented classes (components) incrementally, bottom-up in the compilation-dependency hierarchy. At each increment you add one or a few components to the system.

If two or more implementers are working in parallel on the same subsystem, their work is integrated through a subsystem integration workspace, into which the implementers deliver components from their private development workspaces, and from which the integrator constructs builds.

If a team of several individuals works in parallel on the same subsystem, it is important that the team members share their results frequently, not waiting until late in the process to integrate the team's work.

Workflow Detail – Integrate the system

Activity: Integrate System
Inputs from: Implementation Subsystems (new versions)
Resulting artifacts: Build

Activity – Integrate system

Purpose: To integrate the implementation subsystems piecewise into a build

Steps

Accept Subsystems and Produce Intermediate Builds


When this activity begins, implementation subsystems have been delivered to satisfy the requirements of the next (the 'target') build described in the Artifact: Integration Build Plan, recalling that the Integration Build Plan may define the need for several builds in an iteration.

Depending on the complexity and number of subsystems to be integrated, it is often more efficient to produce the target build in a number of steps, adding more subsystems with each step and producing a series of intermediate 'mini' builds - thus, each build planned for an iteration may, in turn, have its own sequence of transient intermediate builds. These are subjected to a minimal integration test (usually a subset of the tests described in the Integration Build Plan for this target build) to ensure that what is added is compatible with what already exists in the system integration workspace. It should be easier to isolate and diagnose problems using this approach.

The integrator accepts delivered subsystems incrementally into the system integration workspace, resolving any merge conflicts in the process. It is recommended that this be done bottom-up with respect to the layered structure, making sure that the versions of the subsystems are consistent, taking imports into consideration. The increment of subsystems is compiled and linked into an intermediate build, which is then provided to the tester to execute a minimal system integration test.

As an example, a build might be produced in three increments. Some subsystems are needed only as stubs, to make it possible to compile and link the other subsystems and to provide the essential minimal run-time behavior.

The final increment of a sequence produces the target build, as planned in the Integration Build Plan. When this has been minimally tested, an initial or provisional baseline is created for this build - invoking the Activity: Create Baselines in the Configuration Management workflow. The build is now made available to the tester for complete system testing. The nature and depth of this testing will be as planned in the Integration Build Plan, with the final build of an iteration being subjected to all the tests defined in the Iteration Test Plan.
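A rough sketch of this incremental approach (the subsystem names and the build and test commands are hypothetical placeholders, not tools defined by the process):

```python
import subprocess

# Illustrative: produce the target build in increments, running a minimal
# integration test after each intermediate build, as described above.

increments = [
    ["base_services"],                # increment 1: lowest layer (plus stubs as needed)
    ["business_objects"],             # increment 2
    ["user_interface", "reporting"],  # increment 3: completes the target build
]

integrated = []
for step, subsystems in enumerate(increments, start=1):
    integrated.extend(subsystems)
    # Compile and link the subsystems accepted so far into an intermediate build.
    subprocess.run(["./build.sh", *integrated], check=True)
    # Subject the intermediate build to a minimal integration test.
    subprocess.run(["./run_tests.sh", "--suite", "minimal-integration"], check=True)
    print(f"Intermediate build {step} OK: {integrated}")

print("Target build produced; hand it over for complete system test and baselining.")
```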


Core Workflow - Test


Introduction

The purposes of testing are:

o To verify the interaction between objects.
o To verify the proper integration of all components of the software.
o To verify that all requirements have been correctly implemented.
o To identify defects and ensure they are addressed prior to the deployment of the software.

Explanation:

In many organizations, software testing accounts for 30 to 50 percent of software development costs.

Yet most people believe that software is not well tested before it is delivered. This contradiction is rooted in two clear facts.

o First, testing software is enormously difficult. The different ways a given program can behave are unquantifiable.

o Second, testing is typically done without a clear methodology and without the required automation or tool support.

A well-conceived methodology and the use of state-of-the-art tools can greatly improve the productivity and effectiveness of software testing.

For "safety-critical" systems where a failure can harm people (such as air-traffic control, missile guidance, or medical delivery systems), high-quality software is essential for the success of the system produced.

For a typical MIS system, this situation is not as painfully obvious, but the impact of a defect can be very expensive.

Well-performed tests, initiated early in the software lifecycle, will significantly lower the cost of completing and maintaining the software.

It will also greatly reduce the risks or liabilities associated with deploying poor quality software, such as poor user productivity, data entry and calculation errors, and unacceptable functional behavior.

Many MIS systems are "mission-critical": companies cannot fulfill their functions and experience massive losses when failures occur, for example, banks or transportation companies. Mission-critical systems must be tested using the same rigorous approaches used for safety-critical systems.


Concepts related to Test Workflow

Concepts - Quality

Introduction

If you ask the question "What is quality?", the common answers you get are:

o I don't know how to describe it, but I'll know it when I see it.
o Meeting requirements.

Quality is not a single dimension, but many. The actual definition (as per the Unified Software Development Process) is the characteristic identified by the following:

o Satisfies or exceeds an agreed-upon set of requirements, and
o Is assessed using agreed-upon measures and criteria, and
o Is produced using an agreed-upon process.

Achieving quality is not simply "meeting requirements" or producing a product that meets user needs, or expectations, etc.

Rather, quality includes identifying the measures and criteria that demonstrate the achievement of quality, and implementing a process to ensure that the product created by that process has achieved the desired degree of quality (and that this can be repeated and managed).

There are four major aspects of quality that need to be understood:

o Product Quality
o Process Quality
o Measuring Quality
o Evaluating Quality

Product Quality

Product quality is the quality of the product being produced by the process. In software development the product is the aggregation of many artifacts, including:

o Deployed, executable code (application, system, etc.), perhaps the most visible of the artifacts, for it is typically this artifact for which the project existed. That is, this is the primary product that provides value to the customer (end-users, shareholders, stakeholders, etc.).

o Deployed non-executable artifacts, including artifacts such as user manuals and course materials.

o Non-deployed executables, such as the implementation set of artifacts, including the test scripts and development tools created to support implementation.

o Non-deployed, non-executable artifacts such as the implementation plans, test plans, and various models.


Process Quality

Process quality refers to the degree to which an acceptable process, including measurements and criteria for quality, has been implemented and adhered to in order to produce the artifacts.

Software development requires a complex web of sequential and parallel steps. As the scale of the project increases, more steps must be included to manage the complexity of the project. All processes consist of product activities and overhead activities. Product activities result in tangible progress toward the end product. Overhead activities have an intangible impact on the end product and are required for the many planning, management, and assessment tasks.

The objectives of measuring and assessing process quality are to:

o Manage profitability and resources
o Manage and resolve risk
o Manage and maintain budgets, schedules, and quality
o Capture data for process improvement

Process quality is measured not only by the degree to which the process was adhered to, but also by the degree of quality achieved in the products produced by the process.

To aid in your evaluation of the process and product quality, the Rational Unified Process has included pages such as:

o Activity: a description of the activity to be performed and the steps required to perform the activity.

o Work Guideline: techniques and practical advice useful for performing the activity.

o Artifact Guidelines and Checkpoints: information on how to develop, evaluate, and use the artifact.

o Templates: models or prototypes of the artifact that provide structure and guidance for content.

Measuring Quality

The measurement of quality, whether product or process, requires the collection and analysis of information, usually stated in terms of measurements and metrics.

Measurements are made primarily to gain control of a project, and therefore to be able to manage it. They are also used to evaluate how close or far we are from the objectives set in the plan in terms of completion, quality, compliance to requirements, etc.

Metrics are used to attain two goals, knowledge and change (or achievement):

o Knowledge goals: these are expressed by the use of verbs like evaluate, predict, and monitor. You want to better understand your development process. For example, you may want to assess product quality, obtain data to predict testing effort, monitor test coverage, or track requirements changes.

o Change or achievement goals: these are expressed by the use of verbs such as increase, reduce, improve, or achieve. You are usually interested in seeing how things change or improve over time, from one iteration to another, or from one project to another.

All metrics require criteria to identify and determine the degree or level at which acceptable quality is attained.


The acceptance criteria may be stated in many ways and may include more than one measure. Common acceptance criteria may include the following measures:

o Defect counts and/or trends, such as the number of defects identified, fixed, or that remain open (not fixed).

o Test coverage, such as the percentage of code or use cases planned or implemented and executed (by a test). Test coverage is usually used in conjunction with the defect criteria identified above.

o Performance, such as the time required for a specified action (use case, operation, or other event) to occur. This criterion is commonly used for performance testing, failover and recovery testing, or other tests in which time criticality is essential.

o Compliance. This criterion indicates the degree to which an artifact or process activity / step must meet an agreed-upon standard or guideline.

o Acceptability or satisfaction. This criterion is usually used with subjective measures, such as usability or aesthetics.

Measuring Product Quality

Stating the requirements in a clear, concise, and testable fashion is only part of achieving product quality. It is also necessary to identify the measures and criteria that will be used to identify the desired level of quality and determine whether it has been achieved.

Measuring the product quality of an executable artifact is achieved using one or more measurement techniques, such as:

o Reviews / walkthroughs
o Inspection
o Execution

Measuring Process Quality

The measurement of process quality is achieved by collecting both knowledge and achievement measures:

o The degree of adherence to the standards, guidelines, and implementation of an accepted process.
o The status / state of the current process implementation relative to the planned implementation.
o The quality of the artifacts produced (using the product quality measures described above).

Measuring process quality is achieved using one or more measurement techniques, such as:

o Progress - such as use cases demonstrated or milestones completed.
o Variance - differences between planned and actual schedules, budgets, staffing requirements, etc.
o Product quality measures and metrics (as described in the Measuring Product Quality section above).

Evaluating Quality


Throughout the product lifecycle, to manage quality, measurements and assessments of process and product quality are performed. The evaluation of quality may occur when a major event occurs, such as at the end of a phase, or when an artifact is produced, such as at a code walkthrough. Described below are the different evaluations that occur during the lifecycle:

o Milestones and Status Assessments
o Inspections, Reviews, and Walkthroughs

Milestones and Status Assessments

Each phase and iteration in the Rational Unified Process results in the release (internal or external) of an executable product, or a subset of the final product under development, at which time assessments are made for the following purposes:

o Demonstrate achievement of the requirements (and criteria)
o Synchronize expectations
o Synchronize related artifacts into a baseline
o Identify risks

Major milestones occur at the end of each of the four Rational Unified Process phases and verify that the objectives of the phase have been achieved. There are four major milestones: the Lifecycle Objectives milestone (end of Inception), the Lifecycle Architecture milestone (end of Elaboration), the Initial Operational Capability milestone (end of Construction), and the Product Release milestone (end of Transition).

Minor milestones occur at the conclusion of each iteration and focus on verifying that the objectives of the iteration have been achieved. Status assessments are periodic efforts to assess ongoing progress throughout an iteration and/or phase.

Inspections, Reviews, and Walkthroughs

Inspections, reviews, and walkthroughs are specific techniques focused on evaluating artifacts, and they are a powerful method of improving the quality and productivity of the development process. They should be conducted in a meeting format, with one worker acting as a facilitator and a second worker recording notes (change requests, issues, questions, etc.). The three kinds of efforts are defined as follows:

o Review: a formal meeting at which an artifact, or set of artifacts, is presented to the user, customer, or other interested party for comments and approval.

o Inspection: a formal evaluation technique in which artifacts are examined in detail by a person or group other than the author to detect errors, violations of development standards, and other problems.

o Walkthrough: a review process in which a developer leads one or more members of the development team through a segment of an artifact that he or she has written, while the other members ask questions and make comments about technique, style, possible errors, violations of development standards, and other problems.

Managing Quality in the Unified Software Development Process

Managing quality is done for the following purposes:

o To identify appropriate indicators (metrics) of acceptable quality.
o To identify appropriate measures to be used in the evaluation and assessment of quality.
o To identify and appropriately address issues affecting quality as early and effectively as possible.

Managing quality is implemented throughout all workflows, phases, and iterations in the Unified Process. In general, managing quality throughout the lifecycle means implementing, measuring, and assessing both process quality and product quality. Highlighted below are some of the efforts expended in each workflow to manage quality:

Managing quality in the Requirements workflow includes the analysis of the requirements artifact set for consistency (between artifact standards and other artifacts), clarity (clearly communicates information to all shareholders, stakeholders, and other workers), and precision (appropriate level of detail and accuracy).

In the Analysis & Design workflow, managing quality includes assessment of the design artifact set, including the consistency of the design model, its translation from the requirements artifacts, and its translation into the implementation artifacts.

In the Implementation workflow, managing quality includes assessing the implementation artifacts and evaluating the source code / executable artifacts against the appropriate requirements, design, and test artifacts.

The Test workflow is highly focused towards the management of quality, as most of the efforts expended in the workflow address the purposes of managing quality identified above.

The Environment workflow, like test, includes many efforts addressing the purposes of managing quality. Here, you can find guidance on how to best configure your process to meet your needs.

Managing quality in the Deployment workflow includes assessing the implementation and deployment artifacts, and evaluating the executable and deployment artifacts against the appropriate requirements, design, and test artifacts needed to deliver the product to the end-customer.

The Project Management workflow includes the overview of many efforts for managing quality, including the review and audits to assess the implementation, adherence, and progress of the development process.

Concepts: Quality Dimensions

When our focus turns to the discussion of testing to identify quality, there is no single perspective of what quality is or how it is measured.

In the Unified Process, we address this issue by stating that quality has the following dimensions:

o Reliability: software robustness and reliability (resistance to failures, such as crashes, memory leaks, etc.), resource usage, and code integrity and structure (technical compliance to language and syntax).

o Function: the ability to execute the specified use cases as intended and required.

o Performance: the timing profiles and operational characteristics of the target-of-test. The timing profiles include the code's execution flow, data access, function calls, and system calls. Operational characteristics for performance include those characteristics related to production load, such as response time, operational reliability (MTTF - Mean Time To Failure), and operational limits such as load capacity or stress.

Concepts – The Life Cycle of Testing

In the software development lifecycle, software is refined through iterations. In this environment, the testing lifecycle must also take an iterative approach, with each build being a target for testing.

Additions and refinements are made to the tests that are executed for each build, accumulating a body of tests which are used for regression testing at later stages. This approach implies reworking the tests throughout the process, just as the software itself is revised. There is no frozen software specification, and there are no frozen tests.

This iterative approach puts a strong focus on regression testing. Most tests of iteration X are used as regression tests in iteration X+1. In iteration X+2, you would use most tests from iterations X and X+1 as regression tests, and the same principle is followed in subsequent iterations. Because the same test is repeated several times, it is well worth the effort to automate the tests; effective test automation becomes necessary to meet your deadlines.

Consider the lifecycle of testing without the rest of the project in the same picture. Viewed non-iteratively, the different testing activities are interconnected in a cycle: plan test, design test, implement test, execute test, and evaluate test. This testing lifecycle has to be integrated with the iterative approach, which means that each iteration will have a test cycle following that pattern.


Execution includes both execution of the new tests and regression testing using previous tests. The testing lifecycle is a part of the software lifecycle; they should start at the same time. If testing is not started early enough, the tests will either be deficient, or a long testing and bug-fixing schedule will have to be appended to the development schedule, which defeats the goals of iterative development.

Furthermore, the test planning and design activities can expose faults or flaws in the application definition. The earlier these are resolved, the lower the impact on the overall schedule.

One of the major tasks in evaluation is to measure how complete the iteration is by verifying which requirements have been implemented.

The ways in which you perform tests will depend on several factors:

o Your budget
o Your company policy
o Your risk tolerance
o Your staff

How much you invest in testing depends on how you evaluate quality and tolerate risk in your particular environment.


Concepts – Key Measures of Test

Introduction

The key measures of a test include coverage and quality. Test coverage is the measurement of testing completeness, and is based on:

o The coverage of testing, expressed by the coverage of test requirements and test cases

o The coverage of executed code.

Coverage Measures

The most commonly used coverage measures are:

o Requirements-based test coverage (verification of use cases), and
o Code-based test coverage (execution of all lines of code).

If requirement-based coverage is applied, test strategies are formulated in terms of how much of the requirements are fulfilled by the system.

If code-based coverage is applied, test strategies are formulated in terms of how much of the source code has been executed by tests. This type of test coverage strategy is very important for safety-critical systems.

Both measures can be derived manually (equations given below), or may be calculated by test automation tools.

Requirements-Based Test Coverage

Requirements-based test coverage is measured several times during the test life cycle and provides the identification of the test coverage at a milestone in the testing life cycle (such as the planned, implemented, executed, and successful test coverage). Test coverage is calculated by the following equation:

Test Coverage = T(p,i,x,s) / RfT

where:

o T is the number of Tests (planned, implemented, executed, or successful), expressed as test procedures or test cases.
o RfT is the total number of Requirements for Test.

Code-Based Test Coverage

Code-based test coverage measures how much code has been executed during the test, compared to how much code is left to execute. Code coverage can be based either on control flows (statement, branch, or path) or on data flows. In control-flow coverage, the aim is to test lines of code, branch conditions, paths through the code, or other elements of the software's flow of control. In data-flow coverage, the aim is to test that data states remain valid through the operation of the software, for example, that a data element is defined before it is used. Code-based test coverage is calculated by the following equation:

Test Coverage = Ie / TIic

where:

o Ie is the number of items executed, expressed as code statements, code branches, code paths, data state decision points, or data element names.
o TIic is the total number of items in the code.
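As a small worked example of the two equations above (all counts are invented for illustration):

```python
# Illustrative calculation of the two coverage measures defined above.
# Every count below is a made-up example value.

def coverage_ratio(items_covered, items_total):
    """Generic coverage equation: covered items divided by total items."""
    return items_covered / items_total

# Requirements-based test coverage: T(p,i,x,s) / RfT
executed_tests = 42          # T(x): tests executed so far
requirements_for_test = 60   # RfT: total number of requirements for test
print(f"Requirements-based coverage: {coverage_ratio(executed_tests, requirements_for_test):.0%}")

# Code-based test coverage: Ie / TIic
branches_executed = 350      # Ie: code branches exercised by the tests
branches_in_code = 500       # TIic: total number of branches in the code
print(f"Code-based (branch) coverage: {coverage_ratio(branches_executed, branches_in_code):.0%}")
```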

Quality Measures

While the evaluation of test coverage provides a measure of testing completeness, an evaluation of the defects discovered during testing provides the best indication of software quality. Defect analysis means analyzing the distribution of defects over the values of one or more of the parameters associated with a defect. Defect analysis provides an indication of the reliability of the software.

For defect analysis, there are four main defect parameters commonly used:

o Status: the current state of the defect (open, being fixed, closed, etc.).
o Priority: the relative importance of this defect having to be addressed and resolved.
o Severity: the relative impact of this defect on the end-user, an organization, third parties, etc.
o Source: where and what is the originating fault that results in this defect, or which component will be fixed to eliminate the defect.

Defect counts can be reported as a function of time, creating a Defect Trend diagram or report, or as a function of one or more defect parameters, such as severity or status, in a Defect Density report. These types of analysis provide a perspective on the trends and distribution of defects, which reveal the software's reliability.

For example, it is expected that defect discovery rates will eventually diminish as the testing and fixing progresses. A threshold can be established below which the software can be deployed. Defect counts can also be reported based on the origin in the implementation model, allowing detection of "weak modules", "hot spots", parts of the software that keep being fixed again and again, indicating some more fundamental design flaw.

Defects included in an analysis of this kind have to be confirmed defects. Not all reported defects report an actual flaw, as some may be enhancement requests, out of the scope of the project, or describe an already reported defect. However, there is value to looking at and analyzing why there are many defects being reported that are either duplicates or not confirmed defects.

Defect Reports

The Unified Process provides defect evaluation in the form of the following classes of reports:

o Defect distribution (density) reports allow defect counts to be shown as a function of one or two defect parameters.

o Defect age reports are a special type of defect distribution report. Defect age reports show how long a defect has been in a particular state, such as Open. In any age category, defects can also be sorted by another attribute, like Owner.

o Defect trend reports show defect counts, by status (new, open, or closed), as a function of time. The trend reports can be cumulative or non-cumulative.

o Test results and progress reports show the results of test procedure execution over a number of iterations and test cycles for the application-under-test.


Many of these reports are valuable in assessing software quality. The usual test criteria include a statement about the allowable numbers of open defects in particular categories, such as severity class. This criterion is easily checked with a defect distribution evaluation. By filtering or sorting on test requirements, this evaluation can be focused on different sets of requirements.

Producing reports of this kind effectively normally requires tool support.

Defect Density Reports

Defect Status Versus Priority

Each defect should be given a priority; usually it is practical to have four priority levels:

1. Resolve immediately
2. High priority
3. Normal queue
4. Low priority

Criteria for a successful test could be expressed in terms of how the distribution of defects over these priority levels should look. For example, a successful test criterion might be that no Priority 1 defects and fewer than five Priority 2 defects are open. A defect distribution diagram is generated to check this; if it shows, say, several open Priority 1 defects, it is clear that the criterion has not been met. Note that such a diagram needs to include a filter to show only open defects, as required by the test criterion.
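A minimal sketch of checking such a criterion against a defect list (the defect records and thresholds are illustrative only):

```python
from collections import Counter

# Illustrative defect records as (priority, status) pairs.
# Priority 1 = resolve immediately, 2 = high, 3 = normal queue, 4 = low.
defects = [
    (1, "open"), (2, "open"), (2, "closed"), (3, "open"),
    (2, "open"), (4, "open"), (1, "closed"),
]

# Count only open defects, as required by the test criterion.
open_by_priority = Counter(priority for priority, status in defects if status == "open")

# Example criterion: no open Priority 1 defects and fewer than five open Priority 2 defects.
criterion_met = open_by_priority[1] == 0 and open_by_priority[2] < 5

print("Open defects by priority:", dict(open_by_priority))
print("Success criterion met:", criterion_met)
```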

Defect Status Versus Severity

Defect severity reports show how many defects there are of each severity class (for example: fatal error, major function not performed, minor annoyance).

Defect Status Versus Location in the Implementation Model

Defect source reports show distribution of defects on elements in the implementation model.


Defect Aging Reports

Defect age analyses provide good feedback on the effectiveness of the testing and the defect removal activities. For example, if the majority of older, unresolved defects are in a pending-validation state, it probably means that not enough resources are applied to the re-testing effort.

Defect Trend Reports

Trend reports identify defect rates and provide a particularly good view of the state of the testing. Defect trends follow a fairly predictable pattern in a testing cycle. Early in the cycle, the defect rates rise quickly. Then they reach a peak and fall at a slower rate over time. 

To find problems, the project schedule can be reviewed in light of this trend. For example, if the defect rates are still rising in the third week of a four-week test cycle, the project is clearly not on schedule.

This simple trend analysis assumes that defects are being fixed promptly and that the fixes are being tested in subsequent builds, so that the rate of closing defects should follow the same profile as the rate of finding defects. When this does not happen, it indicates a problem with the defect-resolution process; the resources for fixing defects, or for re-testing and validating fixes, might be inadequate.
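A small sketch of deriving such a trend from a defect log (the log data and week numbering are invented):

```python
from collections import defaultdict

# Illustrative defect log: (week_found, week_closed or None if still open).
defect_log = [(1, 2), (1, 3), (2, 2), (2, None), (3, 4), (3, None), (4, None)]

new_per_week = defaultdict(int)
closed_per_week = defaultdict(int)
for found, closed in defect_log:
    new_per_week[found] += 1
    if closed is not None:
        closed_per_week[closed] += 1

# Cumulative trend of new, closed, and still-open defects per week.
total_new = total_closed = 0
for week in range(1, 5):
    total_new += new_per_week[week]
    total_closed += closed_per_week[week]
    print(f"week {week}: new={total_new} closed={total_closed} open={total_new - total_closed}")
```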


In a healthy test effort, new defects are discovered and opened quickly at the beginning of the project and decrease over time. The trend for open defects is similar to that for new defects, but lags slightly behind. The trend for closed defects increases over time as open defects are fixed and verified. Together, these trends depict a successful effort.

If your trends deviate dramatically from these, they may indicate a problem and identify when additional resources may need to be applied to specific areas of development or testing. When combined with the measures of test coverage, the defect analyses provide a very good assessment on which to base the test completion criteria.

Performance Measures

Several measures are used for assessing the performance behaviors of the target-of-test; they focus on capturing data related to behaviors such as response time, timing profiles, execution flow, operational reliability, and limits. These measures are assessed primarily in the Evaluate Test activity; however, some performance measures are also used during the Execute Test activity to evaluate test progress and status. The primary performance measures include:

o Dynamic monitoring - real-time capturing and display of the status and state of each test script being executed during the test execution.

o Response Time / Throughput - measurement of the response times or throughput of the target-of-test for specified actors, and / or use cases.

o Percentile Reports - percentile measurement / calculation of the collected data values.

o Comparison Reports - differences or trends between two (or more) sets of data representing different test executions.

o Trace Reports - details of the messages / conversations between the actor (test script) and the target-of-test.

Dynamic Monitoring

Dynamic monitoring provides real-time display / reporting, typically in the form of a histogram or graph. The report is used to monitor or assess performance test execution while it is running, by displaying the current state, status, and progress of the test scripts being executed.


For example, consider a histogram of 80 test scripts executing the same use case: 14 test scripts are in the Idle state, 12 in Query, 34 in SQL Execution, 4 in SQL Connect, and 16 in the Other state. As the test progresses, we would expect to see the number of scripts in each state change. Such a display is typical of a test execution that is proceeding normally and is in the middle of its execution. However, if during test execution the test scripts remain in one state or show no changes, this could indicate a problem with the test execution, or the need to implement or evaluate other performance measures.

Concepts – Types of Tests

There is much more to testing software than testing only the functions, interface, and response-time characteristics of a target-of-test. Additional tests must focus on characteristics / attributes of the target-of-test such as:

o Integrity (resistance to failure)
o Ability to be installed / executed on different platforms
o Ability to handle many requests simultaneously
o ...

To achieve this, many different types of test are implemented and executed, each with a specific test objective and each focused on testing only one characteristic or attribute of the target-of-test.

Quality Dimension: Reliability

o Integrity test: tests that focus on assessing the target-of-test's robustness (resistance to failure) and technical compliance to language, syntax, and resource usage. This test is implemented and executed against different targets-of-test, including units and integrated units.

o Structure test: tests that focus on assessing the target-of-test's adherence to its design and formation. Typically, this test is done for web-enabled applications, ensuring that all links are connected, appropriate content is displayed, and there is no orphaned content.

Quality Dimension: Function

o Configuration test: tests focused on ensuring the target-of-test functions as intended on different hardware and / or software configurations. This test may also be implemented as a system performance test.

o Function test: tests focused on verifying that the target-of-test functions as intended, providing the required service(s), method(s), or use case(s). This test is implemented and executed against different targets-of-test, including units, integrated units, application(s), and systems.

o Installation test: tests focused on ensuring the target-of-test installs as intended on different hardware and / or software configurations and under different conditions (such as insufficient disk space or power interruption). This test is implemented and executed against application(s) and systems.

o Security test: tests focused on ensuring the target-of-test, data, (or systems) is accessible only to the intended actors. This test is implemented and executed against various targets-of-test.

o Volume test: testing focused on verifying the target-of-test's ability to handle large amounts of data, either as input and output or resident within the database. Volume testing includes test strategies such as creating queries that would return the entire contents of the database, or that have so many restrictions that no data is returned, or entering the maximum amount of data in each field.

Quality Dimension: Performance

o Benchmark test: a type of performance test that compares the performance of a new or unknown target-of-test to a known reference workload and system.

o Contention test: tests focused on verifying that the target-of-test can acceptably handle multiple actor demands on the same resource (data records, memory, etc.).

o Load test: a type of performance test used to verify and assess the acceptability of the operational limits of a system under varying workloads while the system-under-test remains constant. Measurements include the characteristics of the workload and response time. When systems incorporate distributed architectures or load balancing, special tests are performed to ensure the distribution and load-balancing methods function appropriately.

o Performance profile: a test in which the target-of-test's timing profile is monitored, including execution flow, data access, and function and system calls, to identify and address performance bottlenecks and inefficient processes.

o Stress test: a type of performance test that focuses on ensuring the system functions as intended when abnormal conditions are encountered. Stresses on the system may include extreme workloads, insufficient memory, unavailable services / hardware, or diminished shared resources.

Concepts – Stages In Test

Testing is usually applied to different types of targets in different stages of the software's delivery cycle. These stages progress from testing small components (unit testing) to testing completed systems (system testing).

Unit Test

Unit test, implemented early in the iteration, focuses on verifying the smallest testable elements of the software. Unit testing is typically applied to components in the implementation model to verify that control flows and data flows are covered and function as expected. These expectations are based on how the component participates in executing a use case, which you find from the sequence diagrams for that use case. The Implementer performs unit test as the unit is developed. The details of unit test are described in the Implementation workflow.

Integration Test

Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case. The target-of-test is a package or a set of packages in the implementation model. Often the packages being combined come from different development teams. Integration testing exposes incompleteness or mistakes in the package's interface specifications.

System Test

System testing is done when the software is functioning as a whole, or when well-defined subsets of its behavior are implemented. The target, in this case, is the whole implementation model for the system.


Acceptance Test

Acceptance testing is the final test action prior to deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by the end-users to perform those functions and tasks the software was built to do.

Concepts – Performance Test

Included in Performance Testing are the following types of tests:

Benchmark testing - Compares the performance of a new or unknown target-of-test to a known reference standard, such as existing software or measurement(s).

Contention testing - Verifies the target-of-test can acceptably handle multiple actor demands on the same resource (data records, memory, etc.).

Performance profiling - Verifies the acceptability of the target-of-test's performance behavior using varying configurations while the operational conditions remain constant.

Load testing - Verifies the acceptability of the target-of-test's performance behavior under varying operational conditions (such as number of users, number of transactions, etc.) while the configuration remains constant.

Stress testing - Verifies the acceptability of the target-of-test's performance behavior when abnormal or extreme conditions are encountered, such as diminished resources or an extremely high number of users.
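
As a rough illustration of load testing, the sketch below simulates a number of concurrent virtual users issuing requests against a hypothetical endpoint and records response times. The URL, user count, and request mix are placeholder assumptions for the example, not values from this document.

    # Minimal load-test sketch (the target URL, user count, and request mix
    # are illustrative placeholders, not project values).
    import time
    import statistics
    import threading
    import urllib.request

    TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
    VIRTUAL_USERS = 25
    REQUESTS_PER_USER = 10

    response_times = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                urllib.request.urlopen(TARGET_URL, timeout=5).read()
            except Exception:
                pass  # a real harness would log the failure as a defect candidate
            elapsed = time.perf_counter() - start
            with lock:
                response_times.append(elapsed)

    threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("requests:", len(response_times))
    print("mean response time: %.3fs" % statistics.mean(response_times))
    print("95th percentile:    %.3fs" % statistics.quantiles(response_times, n=20)[-1])

In a real load test the workload (number of virtual users) is varied between runs while the configuration is held constant, and the measured response times are compared against the performance criteria defined in the test plan.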

Concepts – Structure test

Web-based applications are typically constructed as a series of documents (both HTML text documents and GIF/JPEG graphics) connected by many static links and a few active, or program-controlled, links. These applications may also include "active content", such as forms, JavaScript, plug-in-rendered content, or Java applications. Frequently this active content is used for output only, such as audio or video presentation. However, it may also be used as a navigation aid, helping the user traverse the application (web-site). This free-form nature of web-based applications (via their links), while a great strength, is also a tremendous weakness, as structural integrity can easily be damaged.

Structure testing is implemented and executed to verify that all links (static or active) are properly connected. These tests include:

Verifying that the proper content (text, graphics, etc.) for each link is displayed. Different types of links are used to reference target-content in web-based applications, such as bookmarks, hyperlinks to other target-content (in the same or different web-site), or hot-spots. Each link should be verified to ensure that the correct target-content is presented to the user.

Ensuring there are no broken links. Broken links are those links for which the target-content cannot be found. Links may be broken for many reasons, including moving, removing, or renaming the target-content files. Links may also be broken due to the use of improper syntax, including missing slashes, colons, or letters.

Verifying there is no orphaned content. Orphaned content refers to files for which there is no "inbound" link in the current web-site, that is, there is no way to access or present the content. Care must be taken to investigate orphaned content to determine the cause: is it orphaned because it is truly no longer needed? Is it orphaned due to a broken link? Or is it accessed by a link external to the current web-site? Once determined, the appropriate action(s) should be taken (remove the content file, repair the broken link, or ignore the orphan, respectively).
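
A minimal structure-test sketch is shown below: it crawls a site starting from a single page and reports broken links, with a closing comment on how orphaned content could be detected. The starting URL and crawl scope are illustrative assumptions.

    # Sketch of a structure test: find broken links within one web-site.
    # Assumption: the site is reachable at START_URL and all content lives under it.
    import urllib.request
    import urllib.error
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    START_URL = "http://localhost:8080/index.html"  # hypothetical web-site root

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag in ("a", "img", "link", "script"):
                for name, value in attrs:
                    if name in ("href", "src") and value:
                        self.links.append(value)

    def fetch(url):
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read().decode(errors="replace")

    visited, broken = set(), []
    queue = [START_URL]
    site = urlparse(START_URL).netloc

    while queue:
        url = queue.pop()
        parsed = urlparse(url)
        if url in visited or parsed.scheme not in ("http", "https") or parsed.netloc != site:
            continue
        visited.add(url)
        try:
            body = fetch(url)
        except urllib.error.URLError:
            broken.append(url)           # target-content could not be found
            continue
        parser = LinkExtractor()
        parser.feed(body)
        queue.extend(urljoin(url, link) for link in parser.links)

    print("pages visited:", len(visited))
    print("broken links:", broken)
    # Orphan detection would compare `visited` against the full set of files
    # deployed on the server (e.g. a directory listing), which is not shown here.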

Concepts – Acceptance Test

Acceptance testing is the final test action prior to deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by the end-users to perform those functions and tasks the software was built to do. There are three common strategies for implementing an acceptance test. They are:

Formal acceptance
Informal acceptance or alpha test
Beta test

The strategy you select is often based on the contractual requirements, organizational and corporate standards, and the application domain.

Formal Acceptance Testing

Formal acceptance testing is a highly managed process and is often an extension of the system test. The tests are planned and designed as carefully and in the same detail as system testing. The activities and artifacts are the same as for system testing. Acceptance testing is performed entirely by the end-user organization, or an objective group of people chosen by the end-user organization.

The benefits of this form of testing are:
The functions and features to be tested are known.
The details of the tests are known and can be measured.
The tests can be automated, which permits regression testing.
The progress of the tests can be measured and monitored.
The acceptability criteria are known.

The disadvantages include:
It requires significant resources and planning.
The tests may be a re-implementation of system tests.
It may not uncover subjective defects in the software, since you are only looking for defects you expect to find.

Informal Acceptance Testing

In informal acceptance testing, the test procedures for performing the test are not as rigorously defined as for formal acceptance testing. There are no particular test cases to follow; the individual tester determines what to do. Informal acceptance testing is most frequently performed by the end-user organization.

The benefits of this form of testing are:
The functions and features to be tested are known.
The progress of the tests can be measured and monitored.
The acceptability criteria are known.
You will uncover more subjective defects than with formal acceptance testing.

The disadvantages include:
Resources, planning, and management are still required.
You have no control over which test cases are used.
End users may conform to the way the system works and not see the defects.
End users may focus on comparing the new system to a legacy system, rather than looking for defects.
Resources for acceptance testing are not under the control of the project and may be constricted.

Beta Testing

Beta testing is the least controlled of the three acceptance test strategies. In beta testing, the amount of detail, the data, and the approach taken are entirely up to the individual tester. Each tester is responsible for identifying his or her own criteria for whether to accept the system in its current state or not. Beta testing is implemented by end users, often with little or no management from the development (or other non-end-user) organization. Beta testing is the most subjective of all the acceptance test strategies.

The benefits of this form of testing are:
Testing is implemented by end users.
There is a large volume of potential test resources.
Customer satisfaction increases for those who participate.
You will uncover more subjective defects than with formal or informal acceptance testing.

The disadvantages include:
Not all functions and / or features may be tested.
Test progress is difficult to measure.
End users may conform to the way the system works and not see or report the defects.
End users may focus on comparing the new system to a legacy system, rather than looking for defects.
Resources for acceptance testing are not under the control of the project and may be constricted.
Acceptability criteria are not known.
Increased support resources are needed to manage the beta testers.


Concepts – Test Automation and Tools

Test automation tools are increasingly being brought to market to automate the activities in test. A number of tools exist, and to date no single tool is capable of automating all the activities in test. In fact, most tools are specific to one or a few activities, and some are so focused they only address a part of an activity. When evaluating different tools for test automation, it is necessary to understand what type of tool it is, the limitations of the tool, and which activities the tool addresses and automates. Below are descriptions of the classification of test automation tools.

Function

Test tools may be categorized by the function they perform. Typical function designations for tools include:

Data acquisition tools that acquire data to be used in the test activities. The data may be acquired through conversion, extraction, transformation, or capture of existing data, or through generation from use cases or supplemental specifications.

Static measurement tools that analyze information contained in the design model(s), source code, or other fixed sources. The analysis yields information on the logic flow, data flow, or quality metrics, such as complexity, maintainability, or lines of code.

Dynamic measurement tools that perform an analysis during the execution of the code. The measurements include the run-time operation of the code, such as memory error detection and performance.

Simulators or Drivers that perform activities that are not, or could not be, available for testing purposes, for reasons of timing, expense, or safety.

Test management tools that assist in the planning, design, implementation, execution, evaluation, and management of the test activities or artifacts.

White-box versus Black-box

Test tools are often characterized as either white-box or black-box, based upon the manner in which the tool is used, or the technology / knowledge needed to use the tool.

White-box tools rely upon knowledge of the code, design model(s), or other source material to implement and execute the tests. Black-box tools rely only upon the use cases or functional description of the target-of-test. Whereas white-box tools have knowledge of how the target-of-test processes the request, black-box tools rely upon the input and output conditions to evaluate the test.

Specialization of Test Tools

In addition to the broad classifications of tools presented above, tools may also be classified by specialization.

Record/Playback tools combine data acquisition with dynamic measurement. Test data is acquired during the recording of events (during test implementation). Later, during test execution, the data is used to "playback" the test script which is used to evaluate the execution of the target-of-test.


Quality metric tools are static measurement tools that perform a static analysis of the design model(s) or source code to establish a set of parameters describing the target-of-test's quality. The parameters may indicate reliability, complexity, maintainability, or other measures of quality.

Coverage monitor tools indicate the completeness of testing by identifying how much of the target-of-test was covered, in some dimension, during testing. Typical classes of coverage include requirements-based (use cases), logic branch or node (code-based), data state, and function points.

Test case generators automate the generation of test data. Test case generators use either a formal specification of the target-of-test's data inputs or the design model(s) / source code to produce test data that tests the nominal inputs, error inputs, and limit and boundary cases.

Comparator tools compare test results with reference results and identify differences. Comparators differ in their specificity to particular data formats. For example, comparators may be pixel-based (to compare bitmap images) or object-based (to compare object properties or data).

Data extractors provide inputs for test cases from existing sources. Sources include databases, data streams in a communication system, reports, or design model(s) / source code.
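
To make the test case generator idea above concrete, the sketch below derives nominal, boundary, and error input values from a simple field specification. The field names and ranges are invented for illustration, not taken from any project.

    # Sketch of boundary-value test data generation from a field specification.
    # The field definitions below are hypothetical examples, not project data.
    FIELDS = {
        "quantity": {"min": 1, "max": 999},
        "discount": {"min": 0, "max": 75},
    }

    def generate_cases(spec):
        lo, hi = spec["min"], spec["max"]
        return {
            "nominal":  [(lo + hi) // 2],          # a typical valid value
            "boundary": [lo, lo + 1, hi - 1, hi],  # on and next to the limits
            "error":    [lo - 1, hi + 1],          # just outside the valid range
        }

    for field, spec in FIELDS.items():
        for kind, values in generate_cases(spec).items():
            for value in values:
                print(f"{field:10s} {kind:9s} input={value}")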


Workflow Detail – Plan Test

Activity: Plan Test
Inputs From: Use Case Model, Design Model, Integration Build Plan
Resulting Artifacts: Test Plan

Activity – Plan Test

Purpose:
To collect and organize test-planning information.
To create the test plan.

The steps of the activity are explained below.

Identify requirements for Test

Identifying the requirements for test is the start of the test planning activity. The scope and role of the test effort are identified. Requirements for test are used to determine the overall test effort (for scheduling, test design, etc.) and as the basis for test coverage. Items identified as requirements for test must be verifiable: they must have an observable, measurable outcome. A requirement that is not verifiable is not a requirement for test.

The following is performed to identify requirements for test:

Review the material. The most common sources of requirements for test include:
o Existing requirement lists
o Use cases
o Use-case models
o Use-case realizations
o Supplemental specifications
o Design requirements
o Business cases
o Interviews with end-users
o Review of existing systems

Indicate the requirements for test. Generate a hierarchy of requirements for test. The hierarchy may be based upon an existing hierarchy, or newly generated. The hierarchy is a logical grouping of the requirements for test. Common methods include grouping the items by:
o Use case
o Business case
o Type of test (functional, performance, etc.)
or a combination of these.


The output of this step is a report (the hierarchy) identifying those requirements that will be the target of test.

Assess Risk

Purpose:
To maximize test effectiveness and prioritize test efforts
To establish an acceptable test sequence

To assess risk, perform the following:

Identify and justify a risk factor. The most important requirements for test are those that reflect the highest risk. Risk can be viewed from several perspectives:
o Effect - the impact or consequences of a use case (requirement, etc.) failing
o Cause - identifying an undesirable outcome and determining what use case or requirement(s), should they fail, would result in the undesirable outcome
o Likelihood - the probability of a use case or requirement failing
Each requirement for test should be reviewed and a risk factor identified (such as high, medium, or low).

Identify and justify an operational profile factor.
o Not only are the highest-risk requirements for test tested, but also those that are frequently used (as these often have the highest end-user visibility).
o Identify an operational profile factor for each requirement for test.
o Provide a statement justifying why a specific factor value was identified.
o This is accomplished by reviewing the business case(s) or by conducting interviews with end-users and their managers.

Identify and justify a test priority factor.
o A test priority factor should be identified and justified.
o The test priority factor identifies the relative importance of the test requirement and the order or sequence in which it will be tested.
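
As an illustration of how these factors might be combined, the sketch below assigns numeric weights to risk and operational-profile ratings and sorts requirements for test by the resulting priority. The requirements, ratings, and weighting scheme are all invented for the example.

    # Sketch: derive a test priority factor from risk and operational-profile factors.
    # The requirements and their ratings below are hypothetical examples.
    WEIGHT = {"high": 3, "medium": 2, "low": 1}

    requirements_for_test = [
        # (requirement, risk factor, operational profile factor)
        ("Withdraw cash use case",  "high",   "high"),
        ("Print monthly statement", "medium", "low"),
        ("Change PIN",              "low",    "medium"),
    ]

    def priority(risk, profile):
        # Simple product of the two factors; a real project would justify its own scheme.
        return WEIGHT[risk] * WEIGHT[profile]

    ranked = sorted(requirements_for_test,
                    key=lambda r: priority(r[1], r[2]),
                    reverse=True)

    for name, risk, profile in ranked:
        print(f"priority={priority(risk, profile)}  risk={risk:6s} profile={profile:6s}  {name}")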


Develop Test Strategy

Purpose

Identifies and communicates the test techniques and tools.
Identifies and communicates the evaluation methods for determining product quality and test completion.
Communicates to everyone how you will approach the testing and what measures you will use to determine the completion and success of testing.

The strategy does not have to be detailed, but it should give the reader an indication of how you will test.

Identify and describe the approach to test

The approach to test is a statement (or statements) describing how the testing will be implemented.

For each use case, test cases will be identified and executed, including valid and invalid input data.

Test procedures will be designed and developed for each use case.
Test procedures will be implemented to simulate managing customer accounts over a period of three months. Test procedures will include adding, modifying, and deleting accounts and customers.
Test procedures will be implemented and test scripts executed by 1500 virtual users, each executing functions A, B, and C and each using different input data.

Identify the criteria for test

The criteria for test are objective statements indicating the value(s) used to determine / identify when testing is complete, and the quality of the application-under-test.

The test criteria may be a series of statements or a reference to another document (such as a process guide or test standards).

Test criteria should identify:
o what is being tested (the specific target-of-test)
o how the measurement is being made
o what criteria are being used to evaluate the measurement

Sample test criteria: For each high-priority use case:
o All planned test cases and test procedures have been executed.
o All identified defects have been addressed.
o All planned test cases and test procedures have been re-executed and no new defects identified.
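
A hedged sketch of how such criteria could be checked mechanically is shown below; the record layout and the sample numbers are invented for illustration.

    # Sketch: evaluate the sample test criteria for each high-priority use case.
    # The per-use-case figures below are hypothetical.
    results = {
        "Withdraw cash":  {"planned": 12, "executed": 12, "open_defects": 0, "new_defects_on_rerun": 0},
        "Transfer funds": {"planned": 8,  "executed": 7,  "open_defects": 2, "new_defects_on_rerun": 0},
    }

    def criteria_met(r):
        return (r["executed"] >= r["planned"]        # all planned tests executed
                and r["open_defects"] == 0           # all identified defects addressed
                and r["new_defects_on_rerun"] == 0)  # re-execution found no new defects

    for use_case, r in results.items():
        status = "PASS" if criteria_met(r) else "NOT MET"
        print(f"{use_case:15s} {status}")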


Identify resources

Once what is being tested and how has been identified, the next need is to identify who will do the testing and what is needed to support the test activities. Identifying resource requirements includes determining what resources are needed, including the following:

Human resources (number of persons and skills)
Test environment (includes hardware and software)
Tools
Data

Create Schedule

Purpose:
To identify and communicate test effort, schedule, and milestones

Estimate Test Efforts

The following assumptions should be considered when estimating the test effort:
Productivity and skill / knowledge level of the human resources working on the project (such as their ability to use test tools or to program).
Parameters of the application to be built (such as number of windows, components, data entities and relationships, and the percentage of re-use).
Test coverage (the acceptable depth to which testing will be implemented and executed). It is not the same to state that each use case / requirement was tested if only one test case is implemented and executed per use case / requirement; often many test cases are required to acceptably test a use case / requirement.

Testing effort also needs to include time for regression test. The following table shows how regression test cases can accumulate over several iterations for the different testing stages.

Iterations vs. stages

First iteration:
  System - Test of this iteration's test cases that target the system.
  Integration - Test of this iteration's test cases that target builds.
  Unit - Test of this iteration's test cases that target units.

Following iterations:
  System - Test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing.
  Integration - Test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing.
  Unit - Test of this iteration's test cases, as well as test cases from previous iterations that have been designed for regression testing.

Generate Test Schedule

A test project schedule can be built from the work estimates and resource assignments. In an iterative development environment, a separate test project schedule is needed for each iteration, since all test activities are repeated in every iteration. Early iterations introduce a larger number of new functions and new tests. As the integration process continues, the number of new tests diminishes and a growing number of regression tests need to be executed to validate the accumulated functions. Consequently, the early iterations require more work on test planning and design, while the later iterations are weighted towards test execution and evaluation.

Generate Test Plan

Purpose:

To organize and communicate to others the test-planning information

To generate a test plan, perform the following:

Review / refine existing materials
Prior to generating the test plan, a review of all the existing project information should be done to ensure the test plan contains the most current and accurate information. If necessary, test-related information (requirements for test, test strategies, resources, etc.) should be revised to reflect any changes.

Identify test deliverables
The purpose of the test deliverables section is to identify and define how the test artifacts will be created, maintained, and made available to others. These artifacts include:
o Test Model
o Test Cases
o Test Procedures
o Test Scripts
o Change Requests

Generate the test plan


The last step in the Plan Test activity is to generate the test plan. This is accomplished by assembling all the test information gathered and generated into a single report. The test plan should be distributed to at least the following:
o All test workers
o Developer representative
o Stakeholder representative
o Client representative
o End-user representative


How to Use Test Manager in Activity – Plan Test
In the Plan Test activity, you identified what you are going to test. That is, you identified the test requirements - the use cases, functions, features, and characteristics of the application that you will implement and execute tests against. Entering these test requirements into TestManager will enable you to automatically generate Test Coverage reports and track your progress in subsequent test activities.
Create a project using Rational Administrator
Create a project using an existing Req. Pro project
Create a project by creating a new Req. Pro project
Start and select a project in Test Manager
Insert a requirement
Insert a child requirement
Edit requirement properties
Delete a requirement
Use the Attributes tab page in the requirement's properties to set the priority (High, Medium, Low)


Workflow Detail – Design Test

Purpose:

To identify a set of verifiable test cases for each build.
To identify test procedures that show how the test cases will be realized.

Activity: Design Test
Inputs From: Design Guidelines, Use Cases, Supplementary Specs, Implemented Component
Resulting Artifacts: Test Cases, Test Procedures, Workload Analysis Document

Activity – Design Test

Workload Analysis Document (Performance Testing Only)

Purpose:
To identify and describe the different variables that affect system use and performance
To identify the sub-set of use cases to be used for performance testing

Workload analysis is performed to generate a workload analysis document that can be implemented for performance testing. The primary inputs to a workload analysis document include:
Software development plan
Use Case Model
Design Model
Supplemental Specifications

Workload analysis includes the following:
1. Clarify the objectives of performance testing and the use cases.
2. Identify the use cases to be implemented in the model.
3. Identify the actors and actor characteristics to be simulated / emulated in the performance tests.
4. Identify the workload to be simulated / emulated in the performance tests (in terms of number of actors / actor classes and actor profiles).
5. Determine the performance measures and criteria.
6. Review the use cases to be implemented and identify the execution frequency.
7. Select the most frequently invoked use cases and those that generate the greatest load on the system.
8. Generate test cases for each of the use cases identified in the previous step.
9. Identify the critical measurement points for each test case.
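
A minimal sketch of steps 6-7 above (selecting the use cases that dominate the workload) is shown below. The use-case names, frequencies, and load figures are invented for illustration.

    # Sketch: pick the use cases to include in the performance-test workload model
    # based on execution frequency and relative load. All figures are hypothetical.
    use_cases = [
        # (use case, invocations per hour, relative load per invocation)
        ("Withdraw cash",       1200, 1.0),
        ("Check balance",       3000, 0.3),
        ("Print statement",       40, 4.0),
        ("Administer accounts",    5, 2.5),
    ]

    # Rank by contribution to total system load (frequency x load per invocation).
    ranked = sorted(use_cases, key=lambda u: u[1] * u[2], reverse=True)

    workload_model = []
    for name, freq, load in ranked:
        contribution = freq * load
        workload_model.append({"use_case": name, "per_hour": freq, "load": contribution})
        print(f"{name:20s} {freq:5d}/h  load contribution={contribution:8.1f}")

    # The top entries of `workload_model` would drive the number of virtual actors
    # and actor profiles simulated in the performance tests.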

Identify And Describe Test Cases

To identify and describe the test conditions to be used for testing
To identify the specific data necessary for testing
To identify the expected results of test

For each requirement for test (identified in the Plan Test activity in the previous workflow detail), perform the following.

Analyze Application Workflow
The purpose of this step is to identify and describe the actions and / or steps of the actor when interacting with the system. These test procedure descriptions are then used to identify and describe the test cases necessary to test the application. These early test procedure descriptions should be high-level; that is, the actions should be described as generically as possible, without specific references to actual components or objects.

For each use case or requirement:
Review the use case flow of events, or
Walk through and describe the actions / steps the actor takes when interacting with the system.
The purpose of this step is to establish what test cases are appropriate for the testing of each requirement for test.
Note: If testing of a previous version has already been implemented, there will be existing test cases. These test cases should be reviewed for use and designed for regression testing. Regression test cases should be included in the current iteration and combined with the new test cases that address new behavior.

Identify and describe test cases
The primary inputs for identifying test cases are:
The use cases that, at some point, traverse your target-of-test (system, subsystem, or component).
The design model.
Any technical or supplemental requirements.
Target-of-test application map (as generated by an automated test script generation tool).

Describe the test cases by stating:
The test condition (or object or application state) being tested.
The use case, use-case scenario, or technical or supplemental requirement the test case is derived from.
The expected result in terms of the output state, condition, or data value(s).

Identify test case data

Using the matrix created above, review the test cases and identify the actual values that support the test cases. Data for three purposes will be identified during this step:
Data values used as input
Data values for the expected results
Data needed to support the test case, but that is neither used as input nor output for a specific test case
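
A hedged example of such a test case data matrix, here expressed as simple records, is shown below; the use case, conditions, and values are invented for illustration.

    # Sketch of a test-case data matrix for a hypothetical "Withdraw cash" use case.
    # Each record carries the condition tested, input data, and expected result.
    test_cases = [
        {
            "id": "TC-01",
            "condition": "valid withdrawal within balance",
            "input": {"account": "12345", "amount": 50.00},
            "expected": {"dispensed": 50.00, "new_balance": 950.00},
        },
        {
            "id": "TC-02",
            "condition": "withdrawal exceeds balance",
            "input": {"account": "12345", "amount": 2000.00},
            "expected": {"error": "INSUFFICIENT_FUNDS"},
        },
    ]

    # Supporting data that is neither input nor expected output, but must exist
    # in the test database before execution (for example, the account itself).
    supporting_data = [{"account": "12345", "opening_balance": 1000.00}]

    for tc in test_cases:
        print(tc["id"], "-", tc["condition"])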


Identify and Structure Test Procedures

Purpose:
To analyze use case workflows and test cases to identify test procedures
To identify, in the test model, the relationship(s) between test cases and test procedures, creating the test model

Using Rational TestManager
In the Plan Test activity, you identified the test requirements, that is, what needs to be tested. In test design, you decide how the test requirements will be tested. This leads to formal test procedures that identify test set-up, execution instructions, and evaluation methods. You use this information as the specifications for recording or programming your test scripts. Then, with TestManager, you link this information to a test script and generate test coverage reports that track your test design progress.

1. Plan script
2. Edit script properties

Review application workflows or application map

Review the application workflow(s) and the previously described test procedures to determine if any changes have been made to the use case workflow that affect the identification and structuring of test procedures. If utilizing an automated test script generation tool, review the generated application map (used to generate the test scripts) to ensure the hierarchical list of UI objects representing the controls in the user interface of the target-of-test is correct and relevant to your test and/or the use cases being tested. The reviews are done in a similar fashion as the analysis done previously:

Review the use case flow of events, and
Review the described test procedures, and
Walk through the steps the actor takes when interacting with the system, and / or
Review the application map.

Develop the test model

The purpose of the test model is to communicate what will be tested, how it will be tested, and how the tests will be implemented. For each described test procedure (or application map and generated test scripts), the following is done to create the test model:

Identify the relationship or sequence of the test procedure to other test procedures (or the generated test scripts to each other).

Identify the start condition or state and the end condition or state for the test procedure.
Indicate the test cases to be executed by the test procedure (or generated test scripts).

The following should be considered while developing the test model:
Many test cases are variants of one another, which might mean that they can be satisfied by the same test procedure.
Many test cases may require overlapping behavior to be executed. To be able to reuse the implementation of such behavior, you can choose to structure your test procedures so that one test procedure can be used for several test cases.


Many test procedures may include actions or steps that are common to many test cases or other test procedures. In these instances, it should be determined if a separate structured test procedure (for those common steps) should be created, while the test case specific steps remain in a separate structured test procedure.

When using an automated test script generation tool, review the application map and generated test scripts to ensure the following is reflected in the test model:

o The appropriate/desired controls are included in the application map and test scripts.

o The controls are exercised in the desired order.
o Test cases are identified for those controls requiring test data.
o The windows or dialog boxes in which the controls are displayed.

Structure test procedures

The previously described test procedures are insufficient for the implementation and execution of test. Proper structuring of the test procedures includes revising and modifying the described test procedures to include, at a minimum, the following information:

Set-up: how to create the condition(s) for the test case(s) that is (are) being tested and what data is needed (either as input or within the test database).

Starting condition, state, or action for the structured test procedure.
Instructions for execution: the detailed steps / actions taken by the tester to implement and execute the tests (to the degree of stating the object or component).
Data values entered (or referenced test case).
Expected result (condition or data, or referenced test case) for each action / step.
Evaluation of results: the method and steps used to analyze the actual results obtained, comparing them with the expected results.
Ending condition, state, or action for the structured test procedure.

Note: a described test procedure, when structured, may become several structured test procedures, which must be executed in sequence. This is done to maximize reuse and minimize test procedure maintenance. Test procedures can be manually executed or implemented as test scripts (for automated execution). When a test procedure is automated, the resulting computer-readable file is known as a test script.


Review and Assess Test Coverage

Purpose:
To identify and describe the measures of test that will be used to identify the completeness of testing

Identify test coverage measures

There are two methods of determining test coverage:
Requirements-based coverage
Code-based coverage
Both identify the percentage of the total testable items that will be (or have been) tested, but they are collected or calculated differently.

Requirements-based coverage is based upon using use cases, requirements, use case flows, or test conditions as the measure of total test items, and can be used during test design.

Code-based coverage uses the generated code as the total test item and measures a characteristic of the code that has been executed during testing (such as lines of code executed or the number of branches traversed). This type of coverage measurement can only be implemented after the code has been generated.

Identify the method to be used and state how the measurement will be collected, how the data should be interpreted, and how the metric will be used in the process.
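
A minimal illustration of requirements-based coverage is sketched below; it computes the percentage of requirements for test that have at least one executed test case, using invented identifiers.

    # Sketch: requirements-based test coverage = tested requirements / total requirements.
    # The requirement IDs and test-case mapping below are hypothetical.
    requirements_for_test = ["UC-01", "UC-02", "UC-03", "UC-04"]

    executed_test_cases = {
        "TC-11": "UC-01",
        "TC-12": "UC-01",
        "TC-21": "UC-02",
    }

    covered = {req for req in executed_test_cases.values() if req in requirements_for_test}
    coverage = len(covered) / len(requirements_for_test) * 100

    print(f"covered {len(covered)} of {len(requirements_for_test)} requirements "
          f"({coverage:.0f}% requirements-based coverage)")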

Generate and distribute test coverage reports

The test plan identifies the schedule for generating and distributing test coverage reports. These reports should be distributed to at least the following workers:

All test workers
Developer representative
Stakeholder representative


Workflow detail - Implement Test

Activity: Implement Test
Inputs From: Implemented Component, Test Cases
Resulting Artifacts: Test Scripts, Test Cases (Updated), Test Procedures (Updated)

Activity: Design Test Packages and Classes
Inputs From: Updated Test Cases, Design Model, Updated Test Procedures
Resulting Artifacts: Test Packages and Test Classes

Activity: Implement Test Subsystems and Components
Inputs From: Test Packages and Test Classes, Component Build
Resulting Artifacts: Test Subsystems and Test Components

Activity – Implement Test

To create or generate reusable test scripts
To maintain traceability of the test implementation artifacts back to the associated test cases and use cases or requirements for test

Steps

Record, generate, or program test scripts
Identify test-specific functionality in the design and implementation models
Establish external data sets

Record, Generate, or program test scripts

Purpose:
To create or automatically generate the appropriate test scripts which implement (and execute) the test cases and test procedures as desired

Create, generate, or acquire test scripts
For each structured test procedure in the test model, at least one test script is created or generated. The following considerations should be addressed when creating, generating, or acquiring test scripts:
1. Maximize test script reuse
2. Minimize test script maintenance
3. Use existing scripts when feasible
4. Use test tools to create test scripts instead of programming them (when feasible)
5. Refer to application GUI objects and actions in the manner that is most stable (such as by object name or using mouse clicks)

The following steps are performed to create, generate, or acquire test scripts:
1. Review existing test scripts for potential use
2. Set up the test environment (including all hardware, software, tools, data, and application build)


3. Initialize the environment (to ensure the environment is in the proper state or condition for the test)
4. Create or acquire the test scripts:
5. Record / capture: for each structured test procedure, execute the test procedure to create a new test script by following the steps / actions identified in the structured test procedure and using the appropriate recording techniques (to maximize reuse and minimize maintenance)
6. Modify existing scripts: edit the existing script manually, or delete the non-required instructions and re-record the new instructions using the recording approach described above
7. Program: for each structured test procedure, generate the instructions using the appropriate programming techniques
8. Continue to create, generate, or acquire test scripts until the desired / required test scripts have been created
9. Modify the test scripts as necessary (as defined in the test model)
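
As an illustration of step 7 (programming a test script from a structured test procedure), a minimal script is sketched below using Python's unittest module. The application function, class names, and data are assumptions for the example, not part of this process description.

    # Sketch of a programmed test script for a structured test procedure.
    # Assumption: the target-of-test exposes a withdraw() operation; a stand-in
    # implementation is included so the sketch is self-contained.
    import unittest

    class InsufficientFunds(Exception):
        pass

    def withdraw(balance, amount):        # stand-in for the real target-of-test
        if amount > balance:
            raise InsufficientFunds()
        return balance - amount

    class WithdrawCashProcedure(unittest.TestCase):
        def setUp(self):
            # Set-up: establish the starting condition defined by the test procedure.
            self.opening_balance = 1000.00

        def test_valid_withdrawal(self):
            # Instructions for execution plus expected result from the test case.
            self.assertEqual(withdraw(self.opening_balance, 50.00), 950.00)

        def test_withdrawal_exceeds_balance(self):
            # Error-path test case derived from the same use case.
            with self.assertRaises(InsufficientFunds):
                withdraw(self.opening_balance, 2000.00)

    if __name__ == "__main__":
        unittest.main()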

Test / debug test scripts

Upon the completion of creating, generating, or acquiring test scripts, they should be tested / debugged to ensure the test scripts implement the tests appropriately and execute properly. This step should be performed using the same version of the software build used to create / acquire the test scripts. The following steps are performed to test / debug test scripts:
1. Set up the test environment (if necessary)
2. Re-initialize the environment
3. Execute the test scripts
4. Evaluate results
5. Determine the appropriate next action:
a. Results as expected / desired: no action necessary
b. Unexpected results: determine the cause of the problem and resolve it

Review and Evaluate Test Coverage

Upon the completion of creating, generating, or acquiring test scripts, a test coverage report should be generated to verify that the test scripts have achieved the desired test coverage.

Identify test-specific functionality in the design and implementation models

Purpose:
To specify the requirements for software functions needed to support the implementation or execution of testing

Identify the test-specific functionality that should be included in the design model and in the implementation model. The most common use of test-specific functionality is during integration test, where there is the need to provide stubs or drivers for components or systems that are not yet included or implemented. There are two styles:

Stubs and drivers that are simply "dummies" with no functionality other than being able to enter a specific value (or values) or return a pre-defined value (or values).

Stubs and drivers that are more intelligent and can "simulate" more complex behavior. Use the second style prudently, because it takes more resources to implement. A balance is needed between the value added (by creating a complex stub / driver) and the effort necessary to implement and test the stub / driver.
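
A minimal sketch of the two styles is shown below; the interface being stubbed (a credit-check service) is invented for the example.

    # Sketch: a "dummy" stub versus a "simulating" stub for a hypothetical
    # credit-check service that the component under test depends on.

    class DummyCreditCheckStub:
        """Style 1: always returns a pre-defined value, no logic at all."""
        def check(self, customer_id, amount):
            return "APPROVED"

    class SimulatingCreditCheckStub:
        """Style 2: simulates simple behavior so more paths can be exercised."""
        def __init__(self, credit_limits):
            self.credit_limits = credit_limits      # e.g. {"C-1": 500.0}
        def check(self, customer_id, amount):
            limit = self.credit_limits.get(customer_id, 0.0)
            return "APPROVED" if amount <= limit else "DECLINED"

    # The component under test would receive one of these stubs in place of the
    # real service during integration testing:
    stub = SimulatingCreditCheckStub({"C-1": 500.0})
    print(stub.check("C-1", 100.0))   # APPROVED
    print(stub.check("C-1", 900.0))   # DECLINED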

Establish External Data Sets

Purpose:
To create and maintain data, stored externally to the test scripts, that is used by the test scripts during test execution

External data sets provide value to test in the following ways:
Data is external to the test script, eliminating hard-coded references in the test script
External data can be modified easily with little or no test script impact
Additional test cases can easily be added to the test data with little or no test script modifications
External data can be shared with many test scripts
External data sets can contain data values used to control the test scripts (conditional branching logic)
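
A minimal data-driven sketch follows: the test data lives in an external data set and the script simply iterates over it. The column layout, values, and the withdraw() stand-in are assumptions for the example; in practice the data would sit in a separate file maintained outside the script.

    # Sketch: external data set driving a test script. The CSV layout is hypothetical.
    import csv
    import io

    # In practice this would be a file such as "withdraw_cases.csv"; an in-memory
    # string keeps the sketch self-contained.
    EXTERNAL_DATA = io.StringIO(
        "balance,amount,expected\n"
        "1000,50,950\n"
        "1000,1000,0\n"
        "100,250,ERROR\n"
    )

    def withdraw(balance, amount):            # stand-in for the target-of-test
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    for row in csv.DictReader(EXTERNAL_DATA):
        balance, amount = float(row["balance"]), float(row["amount"])
        try:
            actual = withdraw(balance, amount)
            outcome = "PASS" if str(int(actual)) == row["expected"] else "FAIL"
        except ValueError:
            outcome = "PASS" if row["expected"] == "ERROR" else "FAIL"
        print(f"balance={balance:7.2f} amount={amount:7.2f} -> {outcome}")

New test cases are added by appending rows to the external data, with little or no change to the script itself.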

Activity – Design Test Packages and Classes
Purpose: To design test-specific functionality

Steps

Identify Test-Specific Packages and Classes
Design Interface to Automated Test Tool
Design Test Procedure Behavior

Identify Test-Specific Packages and Classes

Purpose: To identify and design the classes and packages that will provide the needed test specific functionality.

Based on input from the test designer, identify and specify test-specific classes and packages in the design model. A driver or stub of a design class has the same methods as the original class, but there is no behavior defined for the methods other than to provide input (to the target for test) or return a pre-defined value (to the target for test). A driver or stub of a design package contains simulated classes for the classes that form the public interface of the original package.

Design Interface to Automated Test Tool


Purpose: To identify the interface necessary for the integration of an automated test tool with test-specific functionality.

Identify what behavior is needed to make your test automation tool communicate with your target for test in an efficient way. Identify and describe the appropriate design classes and packages.

Design Test Procedure Behavior

Purpose: To automate test procedures for which there is no automated test tool available.

To automate test procedures for which there is no automation tool, identify the appropriate design classes and packages. Use the test cases and the use cases they derive from as input.

Activity – Implement Test Components and Subsystems
Purpose: To implement test-specific functionality

Steps

Implement and Unit Test Drivers / Stubs
Implement and Unit Test Interface to Automated Test Tool(s)
Implement and Unit Test Test-Procedure Behavior

Workflow Detail – Execute Test In Integration Test Stage

Activity: Execute Test
Inputs From: Test Scripts, Builds
Resulting Artifacts: Test Results, Defects

Activity Execute Test

Purpose: To execute tests and capture test results.

Steps

Execute Test Procedures
Evaluate Execution of Test
Verify Test Results
Recover From Halted Tests

Execute Test Procedures

Set-up the test environment to ensure that all the needed components (hardware, software, tools, data, etc.) have been implemented and are in the test environment.

Initialize the test environment to ensure all components are in the correct initial state for the start of testing.

Execute the test procedures.


o Note: executing the test procedures will vary depending upon whether testing is automated or manual.

Automated testing: The test scripts created during the Implement Test activity are executed.

Manual execution: The structured test procedures developed during the Design Test activity are used to manually execute test.

Evaluate Execution of Test

Purpose:
To determine whether testing executed to completion or halted
To determine if corrective action is required

The execution of testing ends or terminates in one of two conditions:
Normal: all the test procedures (or scripts) execute as intended and to completion. If testing terminates normally, continue with Verify Test Results.
Abnormal or premature: the test procedures (or scripts) did not execute completely or as intended. When testing ends abnormally, the test results may be unreliable. The cause of the abnormal / premature termination needs to be identified, corrected, and the tests re-executed before any additional test activities are performed. If testing terminates abnormally, continue with Recover From Halted Tests.

Verify Test Results

Purpose:
To determine if the test results are reliable
To identify appropriate corrective action if the test results indicate flaws in the test effort or artifacts

Upon the completion of testing, the test results should be reviewed to ensure that they are reliable and that reported failures, warnings, or unexpected results were not caused by influences external to the target-of-test, such as improper set-up or data. The most common failures reported when test procedures and test scripts execute completely, together with their corrective actions, are given below:

Test verification failures - this occurs when the actual result and the expected result do not match. Verify that the verification method(s) used focus only on the essential items and / or properties and modify if necessary.

Unexpected GUI windows - this occurs for several reasons. The most common is when a GUI window other than the expected one is active or the number of displayed GUI windows is greater than expected. Ensure that the test environment has been set-up and initialized as intended for proper test execution.

Missing GUI windows - this failure is noted when a GUI window is expected to be available (but not necessarily active) and is not. Ensure that the test environment has been set-up and initialized as intended for proper test execution. Verify that the actual missing windows are / were removed from the target-of-test.

If the reported failures are due to errors identified in the test artifacts, or due to problems with the test environment, the appropriate corrective action should be taken and the testing re-executed.  For additional information, see "Recover From Halted Tests" below.


If the test results indicate the failures are genuinely due to the target-of-test, then the Execute Test Activity is complete.

Recover from Halted Tests

Purpose:
To determine the appropriate corrective action to recover from a halted test
To correct the problem, recover, and re-execute the tests

There are two major types of halted tests:
Fatal errors - the system fails (network failures, hardware crashes, etc.)
Test script command failures - specific to automated testing, this is when a test script cannot execute a command (or line of code)

Both types of abnormal termination of testing may exhibit the same symptoms:
Many unexpected actions, windows, or events occur while the test script is executing
The test environment appears unresponsive or in an undesirable state (such as hung or crashed)

To recover from halted tests, do the following:
Determine the actual cause of the problem
Correct the problem
Re-set-up the test environment
Re-initialize the test environment
Re-execute the tests

Workflow Detail – Execute Test in System Test Stage

Purpose:
The purpose of the System Test stage is to ensure that the complete system functions as intended. The system integrator compiles and links the system in increments. Each increment needs to go through testing of the functionality that has been added, as well as all tests the previous builds went through (regression tests). Within an iteration, you will execute system testing several times until the whole system (as defined by the goal of the iteration) functions as intended and meets the test's success or completion criteria. The output artifacts for this activity are the test results.

Test execution should be done in a controlled environment. This includes:
A test system that is isolated from non-test influences
The ability to set up a known initial state for the test system(s) and return to this state upon the completion of testing (to re-execute tests)