Page 1: Software Testing

SOFTWARE TESTING
By Mousmi Pawar

Page 2: Software Testing

TESTING…

Software testing, when done correctly, can increase overall software quality of conformance by verifying that the product conforms to its requirements.

Testing begins with the software engineer in the early stages; later, testing specialists may become involved in the process.

After source code has been generated, the software must be tested to uncover and correct as many errors as possible before delivering it to the customer.

It is important to test software because otherwise the customer, after running it many times, would eventually uncover the errors.

Page 3: Software Testing

INTRODUCTION TO QUALITY ASSURANCE

Quality assurance is an essential activity for any business that produces products to be used by others.

Software quality assurance is a "planned and systematic pattern of actions" that are required to ensure high quality in software.

Software quality assurance is composed of a variety of tasks associated with two different constituencies—the software engineers who do technical work and an SQA group that has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.

Page 4: Software Testing

SIX SIGMA..

Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects.

The Six Sigma approach is:
Define: determine the project goals and the requirements of customers.
Measure: measure the current performance of the process.
Analyze: analyze and determine the root causes of the defects.
Improve: improve the process to eliminate defects.
Control: control the performance of the process.

Page 5: Software Testing

SOFTWARE TESTING FUNDAMENTALS

During software engineering activities, an engineer attempts to build software, whereas during testing activities the intent is to demolish the software.

Hence testing is psychologically destructive rather than constructive.

A good and successful test is one that uncovers an undiscovered error.

All tests that are carried out should be traceable to customer requirements; at the same time, exhaustive testing is not possible.

Page 6: Software Testing

TESTING OBJECTIVES

Testing is a process of executing a program with the intent of finding an error.

A good test case is one that has a high probability of finding an as yet undiscovered error.

A successful test is one that uncovers an as yet undiscovered error.

Page 7: Software Testing

COMMON TERMS:

Error: the difference between the actual output of the software and the correct output; a mistake made by a programmer. Alternatively, anything that stops the execution of the application.

Fault: a condition that causes a system to fail in performing its required function.

Bug: an application error that does not affect the whole application in terms of running; it does not stop the execution of the application but produces wrong output.

Failure: occurs when a fault is executed; the inability of a system or component to perform its required functions within specified performance requirements.

Page 8: Software Testing

TESTING PRINCIPLES

“All tests should be traceable to customer requirements.” For a customer, the most severe defects are the ones that cause the program to fail to meet its requirements.

“Tests should be planned long before testing begins.” All tests can be planned and designed before any code has been generated.

“Testing should begin ‘in the small’ and progress toward testing in the large.” The first tests planned and executed generally focus on individual components.

Page 9: Software Testing

TESTING PRINCIPLES

“As testing progresses focus shifts on integrated clusters of components and then the entire system.”

“Exhaustive testing is not possible.” Even for a small program, exhaustive testing could take years.

“To be more effective, testing should be conducted by an independent third party.” The software engineer who constructs the software is not always the best person to test it, so an independent third party is deemed more effective.

Page 10: Software Testing

SOFTWARE TESTING

There are two major types of software testing: black box testing and white box testing (also called glass box testing).

Black box testing focuses on the inputs, outputs, and principal functions of a software module.

Glass box testing looks into the internal structure of the code: in structural (glass box) testing, statements, paths, and branches are checked for correct execution.

Page 11: Software Testing

LEVELS OF SOFTWARE TESTING

Software Testing is carried out at different levels through the entire software development life cycle.

Page 12: Software Testing

In unit testing, test cases verify the implementation of the design of each software element.

Integration testing traces the detailed design and architecture of the system. Hardware and software elements are combined and tested until the entire system has been integrated.

Functional testing exercises the code with the common input values for which the expected results are available.

System testing checks the system against the system requirements.

Acceptance testing determines whether the test results satisfy the acceptance criteria of the project stakeholders.

Page 13: Software Testing

UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software design—the software component or module.

Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.

The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing.

The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.

Page 14: Software Testing

Unit Test Considerations

The module interface is tested to ensure that information properly flows into and out of the program unit under test.

The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.

Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.

Finally, all error-handling paths are tested.

Errors commonly found during unit testing include misunderstood or incorrect arithmetic precedence, mixed-mode operations, incorrect initialization, and precision inaccuracy.

Page 15: Software Testing

Unit Test Procedures

Once the source-level code has been developed, reviewed, and verified for correspondence to the component-level design, unit test case design begins.

Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.

In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component (to be tested), and prints relevant results.

Stubs serve to replace modules that are subordinate (called by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.

Drivers and stubs represent overhead. That is, both are software that must be written (formal design is not commonly applied) but that is not delivered with the final software product.
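
As an illustration (not from the original slides), the following minimal Python sketch shows a hypothetical component compute_discount, a stub standing in for a subordinate pricing module, and a driver that feeds test case data to the component and prints the results. All names are assumptions for illustration only.

# Minimal sketch of a unit-test driver and stub (hypothetical names).

def pricing_service_stub(customer_id):
    # Stub: replaces the real subordinate pricing module. It uses the same
    # interface, does minimal data manipulation, prints verification of
    # entry, and returns control to the module under test.
    print(f"stub called for customer {customer_id}")
    return 100.0  # fixed base price, good enough for the unit test

def compute_discount(customer_id, rate, get_base_price=pricing_service_stub):
    # Component under test: applies a discount rate to a base price.
    base = get_base_price(customer_id)
    return base * (1.0 - rate)

def driver():
    # Driver: a "main program" that accepts test case data, passes it to
    # the component, and prints relevant results.
    test_cases = [("C1", 0.0, 100.0), ("C2", 0.1, 90.0), ("C3", 1.0, 0.0)]
    for customer, rate, expected in test_cases:
        actual = compute_discount(customer, rate)
        print(customer, rate, actual, "OK" if abs(actual - expected) < 1e-9 else "FAIL")

if __name__ == "__main__":
    driver()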

Page 16: Software Testing
Page 17: Software Testing

Integration Testing

Units that otherwise seem to work fine individually start causing problems when integrated together.

Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems.

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.

Page 18: Software Testing

Non-incremental integration or the "big bang" approach: all the components are combined at the same time and the program is tested as a whole. This approach yields many errors that are difficult to isolate and trace because the whole program is now very large, and correcting such errors can seem to be a never-ending process.

Incremental integration: the program is constructed and tested in small increments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied.

Two incremental strategies are Top-down Integration and Bottom-up Integration.

Page 19: Software Testing

Top-down Integration

Top-down integration testing is an incremental approach to construction of the program structure.

Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).

Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

Page 20: Software Testing

Depth-first integration & Breadth-first integration

Depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics.

For example, selecting the left-hand path, components M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central and right-hand control paths are built.

Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. Components M2, M3, and M4 would be integrated first; the next control level, M5, M6, and so on, follows.

Page 21: Software Testing

The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built.

Page 22: Software Testing

Bottom-up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules, i.e., components at the lowest levels in the program structure.

Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

Page 23: Software Testing

Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate to Ma.

Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.

Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.

Page 24: Software Testing

Regression Testing

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked.

These changes may cause problems with functions that previously worked flawlessly.

In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.

Page 25: Software Testing

The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
A representative sample of tests that will exercise all software functions.
Additional tests that focus on software functions that are likely to be affected by the change.
Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large.

Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.

It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
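
As a rough illustration (not part of the original slides), the Python sketch below maps each test case to the components it exercises and selects the subset to re-run when a component changes. All test and component names are hypothetical.

# Minimal sketch: selecting a regression subset (hypothetical names and data).

# Which components each test case exercises.
TEST_COVERAGE = {
    "test_login":       {"auth", "session"},
    "test_checkout":    {"cart", "payment"},
    "test_add_to_cart": {"cart"},
    "test_report":      {"reporting"},
}

def select_regression_tests(changed_components):
    # Return the tests that touch any changed component.
    changed = set(changed_components)
    return sorted(
        name for name, covered in TEST_COVERAGE.items()
        if covered & changed
    )

print(select_regression_tests(["cart"]))
# ['test_add_to_cart', 'test_checkout']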

Page 26: Software Testing

Smoke Testing

Smoke testing is an integration testing approach that is commonly used when “shrinkwrapped” software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis.

The smoke testing approach encompasses the following activities:

1. Software components that have been translated into code are integrated into a “build.” A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover “show stopper” errors that have the highest likelihood of throwing the software project behind schedule.

3. The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.

Page 27: Software Testing

Smoke testing provides a number of benefits when it is applied on complex, time-critical software engineering projects:

Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.

The quality of the end-product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better product quality will result.

Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are likely to be associated with “new software increments”—that is, the software that has just been added to the build(s) is a probable cause of a newly discovered error.

Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated to work. This improves team morale and gives managers a good indication that progress is being made.

Page 28: Software Testing

Validation Testing / Acceptance Testing

After integration testing, one may start with validation testing.

Validation succeeds when the software functions in a manner that can be reasonably expected by the customer.

Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements.

This is to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, documentation is correct, and other requirements are met.

Page 29: Software Testing

It is virtually impossible for a software developer to foresee how the customer will really use a program.

When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements.

Conducted by the end-user rather than software engineers, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests.

Page 30: Software Testing

Alpha and Beta Testing

The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.

The beta test is conducted at one or more customer sites by the end-users of the software. Unlike alpha testing, the developer is generally not present.

Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer.

The customer records all problems that are encountered during beta testing and reports these to the developer at regular intervals.

As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.

Page 31: Software Testing

SYSTEM TESTING

Software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests is conducted.

System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system: Recovery Testing, Security Testing, Stress Testing, and Performance Testing.

Page 32: Software Testing

Recovery Testing

Many computer based systems must recover from faults and resume processing within a pre-specified time.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.

If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness.

Page 33: Software Testing

Security Testing

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.

During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.

Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry.

Page 34: Software Testing

Stress Testing

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:
(1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate,
(2) input data rates may be increased by an order of magnitude to determine how input functions will respond,
(3) test cases that require maximum memory or other resources are executed,
(4) test cases that may cause thrashing in a virtual operating system are designed,
(5) test cases that may cause excessive hunting for disk-resident data are created.

Essentially, the tester attempts to break the program.

A variation of stress testing is a technique called sensitivity testing.

Page 35: Software Testing

Performance Testing

For real-time and embedded systems, software that provides required function but does not conform to performance requirements is unacceptable

Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process.

Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.

Page 36: Software Testing

WHITE BOX TESTING

Focus: thoroughness (coverage). Every statement in the component is executed at least once.

White box testing is also known as structural testing or code-based testing.

The major objective of white box testing is to focus on the internal program structure and discover all internal program errors.

Page 37: Software Testing

ADVANTAGES OF WHITE BOX TESTING

Helps in optimizing the code.

Helps in removing extra lines of code.

All possible code paths can be tested, including error handling, resource dependencies, and additional internal code logic/flow.

Enables the tester to find programming errors quickly.

Encourages good-quality coding work and coding standards.

Page 38: Software Testing

DISADVANTAGES OF WHITE BOX TESTING

Knowledge of the code and internal structure is a prerequisite; a skilled tester is needed to carry out this type of testing, which increases the cost.

It is impossible to look into every bit of code to find hidden errors.

It is a very expensive technique.

Requires knowledge of the target system, testing tools, and coding language.

Requires specialized tools such as source code analyzers, debuggers, and fault injectors.

Page 39: Software Testing

TRADITIONAL WHITE BOX TESTING

White box testing is further divided into two types:

BASIS PATH TESTING: flow graph, cyclomatic complexity, graph matrix.

CONTROL STRUCTURE TESTING: condition testing, data flow testing, loop testing.

Page 40: Software Testing

BASIS PATH TESTING

It is a white box testing technique that enables the designer to derive a logical complexity measure of a procedural design.

Test cases derived from basis path testing guarantee that every statement in the program is executed at least once.

Page 41: Software Testing

FLOW GRAPH

A flow graph depicts the logical control flow using the following notations

Every structural construct has a corresponding flow graph symbol.

Page 42: Software Testing

FLOW GRAPH

Each circle, called a flow graph node, represents one or more procedural statements.

The arrows, called links or edges, represent the flow of control.

Areas bounded by edges are called regions. When counting regions, the area outside the graph is also counted as a region.

Each node containing a condition is called a predicate node and has two or more edges leaving it.

Page 43: Software Testing
Page 44: Software Testing
Page 45: Software Testing

CYCLOMATIC COMPLEXITY

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.

Cyclomatic complexity defines the number of independent paths in the basis set of a program.

This number gives us an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.

An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Page 46: Software Testing

For example, a set of independent paths for the flow graph illustrated in the figure is:

Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11

Note that each new path introduces a new edge.

Paths 1, 2, 3, and 4 constitute a “basis set” for the flow graph

A basis set is a set of paths through which every statement in the program is visited at least once. The basis set is not unique.

Page 47: Software Testing

Computing the cyclomatic complexity tells us how many paths are required in the basis set.

Complexity is computed in one of three ways:
The number of regions of the flow graph corresponds to the cyclomatic complexity.
Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.

For the example flow graph, the cyclomatic complexity can be computed using each of the algorithms just noted (a small sketch follows below):
The flow graph has four regions.
V(G) = 11 edges - 9 nodes + 2 = 4.
V(G) = 3 predicate nodes + 1 = 4.
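
As a hedged illustration (the tiny if/else graph below is a hypothetical example, not the flow graph from the slides), the two formulas can be checked in a few lines of Python:

# Minimal sketch: computing cyclomatic complexity from an adjacency list.
flow_graph = {
    1: [2, 3],   # node 1 is a predicate node (two outgoing edges)
    2: [4],
    3: [4],
    4: [],
}

def cyclomatic_complexity(graph):
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2            # V(G) = E - N + 2

def predicate_count(graph):
    return sum(1 for successors in graph.values() if len(successors) >= 2)

print(cyclomatic_complexity(flow_graph))   # 2
print(predicate_count(flow_graph) + 1)     # 2, the same value via V(G) = P + 1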

Page 48: Software Testing

GRAPH MATRICES

To develop a software tool that assists in basis path testing, a data structure, called a graph matrix, can be quite useful.

A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes.

The graph matrix is nothing more than a tabular representation of a flow graph.

Page 49: Software Testing
Page 50: Software Testing

Each row with two or more entries represents a predicate node.

Therefore, performing the arithmetic shown to the right of the connection matrix provides us with still another method for determining cyclomatic complexity.
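
As a hedged sketch (reusing the hypothetical if/else graph from the previous example, and assuming the usual connection-matrix arithmetic of summing, for each row, the number of entries minus one and then adding one), the computation looks like this:

# Minimal sketch: cyclomatic complexity from a graph (connection) matrix.
NODES = [1, 2, 3, 4]
flow_graph = {1: [2, 3], 2: [4], 3: [4], 4: []}

# Build the square connection matrix: entry [i][j] is 1 if there is an edge i -> j.
matrix = [[1 if j in flow_graph[i] else 0 for j in NODES] for i in NODES]

# For each row, count the connections and subtract 1 (rows with no
# connections contribute 0); the sum plus 1 gives the cyclomatic complexity.
row_contributions = [max(sum(row) - 1, 0) for row in matrix]
print(sum(row_contributions) + 1)   # 2, matching V(G) from the flow graph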

Page 51: Software Testing

CONTROL STRUCTURE TESTING

Although basis path testing is simple and highly effective, it is not sufficient in itself.

Other variations on control structure testing improve the quality of white-box testing.

Control structure testing focuses on the order in which the instructions in a program are executed.

One of the major objectives of control structure testing is the selection of test cases to meet various criteria for covering the code.

Test cases are derived to exercise the order of execution of the code.

Page 52: Software Testing

Coverage-based testing works by choosing test cases according to well-defined coverage criteria.

The more common coverage criteria are the following (see the sketch after this list):

Statement Coverage or Node Coverage: every statement of the program should be exercised at least once.

Branch Coverage or Decision Coverage: every possible alternative in a branch of the program should be exercised at least once.

Condition Coverage: each condition in a branch is made to evaluate to true and false at least once.

Decision/Condition Coverage: each condition in a branch is made to evaluate to both true and false, and each branch is made to evaluate to both true and false.
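
As a small illustration (the function and its inputs are hypothetical, not from the slides), the test inputs below show the difference between branch coverage and condition coverage for a single decision made up of two conditions:

# Hypothetical example: one decision composed of two conditions.
def grant_discount(is_member, total):
    if is_member and total > 100:   # the decision under test
        return True
    return False

# Branch (decision) coverage: the decision as a whole is both True and False.
branch_cases = [(True, 150),    # decision True
                (False, 50)]    # decision False

# Condition coverage: each individual condition takes both truth values
# somewhere in the suite.
#   (True, 50):   is_member is True,  total > 100 is False
#   (False, 150): is_member is False, total > 100 would be True
condition_cases = [(True, 50), (False, 150)]

# Note that condition_cases alone never drives the whole decision to True,
# which is why decision/condition coverage combines both requirements.
for is_member, total in branch_cases + condition_cases:
    print((is_member, total), grant_discount(is_member, total))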

Page 53: Software Testing

Condition Testing

In this kind of testing, test cases are designed so that the logical conditions contained in a program can be exercised.

A simple condition is a single Boolean variable or a relational expression.

The condition testing method focuses on testing each and every condition existing in a program that provides a path to execute.

Advantages:
Measurement of the test coverage of a condition is simple.
The test coverage of conditions in a program provides guidance for the generation of additional tests for the program.

Page 54: Software Testing

Data Flow Testing

Testing in which test cases are designed based on variable usage within the code.

Data-flow testing looks at the lifecycle of a particular variable.

The two major types of usage node are (a tiny illustration follows below):
P-use: predicate use; the variable is used when making a decision (e.g., if b > 6).
C-use: computation use; the variable is used in a computation (for example, b = 3 + d, with respect to the variable d).
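
A tiny hypothetical snippet (not from the slides) showing where a single variable d has a c-use and a p-use:

# Hypothetical snippet annotated with the data-flow roles of the variable d.
def adjust(d):
    b = 3 + d        # c-use of d: d appears in a computation
    if d > 6:        # p-use of d: d appears in a decision
        b = b * 2
    return b

# A data-flow test suite would exercise the definition-use pairs of d,
# e.g. d = 7 (takes the branch) and d = 2 (skips it).
print(adjust(7), adjust(2))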

Page 55: Software Testing

Loop Testing

Four different classes of loops exist: simple loops, nested loops, concatenated loops, and unstructured loops.

Page 56: Software Testing

Black Box Testing

Black-box testing, also called behavioral testing, focuses on the functional requirements of the software.

That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.

Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.

Page 57: Software Testing

Black Box Testing

Black-box testing attempts to find errors in the following categories: incorrect or missing functions, interface errors, errors in data structures or external data base access, behavior or performance errors, and initialization and termination errors.

Stages of Black Box Testing: Equivalence Partitioning, Boundary Value Analysis, Robustness Testing.

Page 58: Software Testing

Advantages of Black Box Testing

More effective on larger units of code than glass box testing.

The tester needs no knowledge of the implementation, including specific programming languages.

Tester and programmer are independent of each other.

Tests are done from a user's point of view.

Helps to expose any ambiguities or inconsistencies in the specifications.

Test cases can be designed as soon as the specifications are complete.

Page 59: Software Testing

Disadvantages of Black Box Testing

Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.

Without clear and concise specifications, test cases are hard to design.

There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.

Page 60: Software Testing

Equivalence partitioning

Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

Test case design is based on an evaluation of equivalence classes for an input condition

An equivalence class represents a set of valid or invalid states for input conditions

From each equivalence class, test cases are selected so that the largest number of attributes of the equivalence class are exercised at once.

Page 61: Software Testing

If an input condition specifies a range, one valid and two invalid equivalence classes are defined. Input range: 1-10. Equivalence classes: {1..10}, {x < 1}, {x > 10}.

If an input condition requires a specific value, one valid and two invalid equivalence classes are defined. Input value: 250. Equivalence classes: {250}, {x < 250}, {x > 250}.

If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. Input set: {-2.5, 7.3, 8.4}. Equivalence classes: {-2.5, 7.3, 8.4}, {any other x}.

If an input condition is a Boolean value, one valid and one invalid class are defined. Input: {true condition}. Equivalence classes: {true condition}, {false condition}.

(Examples from Pressman; a small sketch follows below.)
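
As a hypothetical sketch of the first rule (the function name and the range 1-10 are assumptions matching the example above), one representative value is drawn from each equivalence class:

# Minimal sketch: one representative test value per equivalence class
# for an input range of 1..10.
def accepts(value):
    # Function under test: accepts values in the range 1..10.
    return 1 <= value <= 10

equivalence_classes = {
    "valid: 1..10":    5,    # representative of the valid class
    "invalid: x < 1":  0,    # representative of the class below the range
    "invalid: x > 10": 11,   # representative of the class above the range
}

for name, representative in equivalence_classes.items():
    print(name, representative, accepts(representative))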

Page 62: Software Testing

Boundary Value Analysis

A greater number of errors occur at the boundaries of the input domain rather than in the "center"

Boundary value analysis leads to a selection of test cases that exercise bounding values.

Boundary value analysis is a test case design method that complements equivalence partitioning: it selects test cases at the edges of a class and derives test cases from both the input domain and the output domain.

Page 63: Software Testing

Guidelines for Boundary Value Analysis

1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b as well as values just above and just below a and b (as sketched below).

2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and just below the minimum and maximum are also tested.

3. Apply guidelines 1 and 2 to output conditions; produce output that reflects the minimum and the maximum values expected, and also test the values just below and just above.

4. If internal program data structures have prescribed boundaries (e.g., an array), design a test case to exercise the data structure at its minimum and maximum boundaries.
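
A minimal hypothetical sketch of guideline 1, assuming an input range bounded by a = 1 and b = 10 (the same range as the earlier equivalence-partitioning example):

# Boundary value analysis for an input range 1..10 (hypothetical function).
def accepts(value):
    return 1 <= value <= 10

a, b = 1, 10
boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]   # just below, on, just above

for value in boundary_values:
    print(value, accepts(value))
# Expected: 0 -> False, 1 -> True, 2 -> True, 9 -> True, 10 -> True, 11 -> False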

Page 64: Software Testing

Robustness Testing

Robustness is the capability of software to maintain acceptable behavior. This acceptability standard is known as the robustness requirement.

Robustness comprises the appropriate handling of the following:
Invalid input given by the user or by any other interfacing application.
Hardware component failure.
Software component failure.
External system failure.

Page 65: Software Testing

Cause-Effect Graph

Cause-effect graphing helps in identifying combinations of input conditions in an organized manner, so that the number of test cases does not become unmanageably large.

This technique begins with finding out the ‘causes’ as well as the ‘effects’ of the system under test.

The inputs (causes) and outputs (effects) are represented as nodes of a cause-effect graph.

In such a graph, a number of intermediate nodes link causes and effects, forming a logical expression.

Page 66: Software Testing

A Simple Cause-Effect Graphing Example

I have a requirement that says: “If A OR B, then C.” The following rules hold for this requirement:

If A is true and B is true, then C is true. If A is true and B is false, then C is true. If A is false and B is true, then C is true. If A is false and B is false, then C is false.

Page 67: Software Testing

The cause-effect graph is then converted into a decision table or “truth table” representing the logical relationships between the causes and effects.
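
A minimal Python sketch of the resulting decision table for the “If A OR B, then C” example above (purely illustrative):

# Decision table ("truth table") for the requirement: if A OR B, then C.
from itertools import product

def expected_c(a, b):
    return a or b

print("A      B      C")
for a, b in product([True, False], repeat=2):
    print(f"{a!s:<6} {b!s:<6} {expected_c(a, b)!s}")
# Each row of the table becomes one test case for the implementation.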

Page 68: Software Testing

Static Testing

Static testing is carried out without executing the program.

At different levels, static testing may expose faulty programming constructs, for example:
Variables used prior to initialization, or declared but never used.
Variables assigned twice but never used between the assignments.
Undeclared variables.
Interface faults, such as parameter type and number mismatches.
Uncalled functions and procedures.

Page 69: Software Testing

Stages of Static Testing

1. Control Flow Analysis: checking is done on loops with multiple entry or exit points.

2. Data Use Analysis: checking is done for uninitialized variables, variables assigned more than once, and variables that are declared but never used.

3. Interface Analysis: checking the consistency of procedure declarations and their use.

4. Information Analysis: checking the dependencies of output variables; this also provides information for code inspection or review.

5. Path Analysis: identifies paths through the program and sets out the statements executed in each path.

Page 70: Software Testing

Principles of Static Testing

Review each and every coding standard violation. A coding standard can be any generally accepted good practice or a self-imposed company standard.

Review the complexity of the code.

Inspect and remove unreachable code, unreferenced procedures, and unused variables.

Report every type of data flow anomaly:
A variable is used before initialization (a serious anomaly).
A value is assigned to a variable which then goes out of scope without being used.
A value is assigned to a variable and then reassigned without being used in between.
Parameter mismatches.
Suspicious casts.

Page 71: Software Testing

OBJECT-ORIENTED TESTING STRATEGIES

The classical strategy for testing computer software begins with “testing in the small” and works outward toward “testing in the large.”

We begin with unit testing, then progress toward integration testing, and culminate with functional and system testing.

In conventional applications, unit testing focuses on the smallest compliable program unit—the subprogram (e.g., module, subroutine, procedure, component). Once each of these units has been tested individually, it is integrated into a program structure while a series of regression tests are run to uncover errors due to interfacing between the modules and side effects caused by the addition of new units.

Finally, the system as a whole is tested to ensure that errors in requirements are uncovered.

Page 72: Software Testing

Unit Testing in the OO Context

In the OO context, the smallest testable unit is the class or an instance of the class. A class encapsulates attributes (variables) and behavior (methods) into one unit.

A class may contain many methods, and the same method may be part of many classes, so testing a particular method in isolation is not possible; it has to be tested as part of the class.

For example, an operation X is inherited from a superclass by many subclasses, and each subclass makes its own changes to X. Hence we need to test X in the context of each subclass rather than testing it only for the superclass.

Class testing for OO software is the equivalent of unit testing for conventional software.

Conventional unit testing tends to focus on the algorithm, whereas OO unit testing lays emphasis on a module and the data flow across that module.

Page 73: Software Testing

Integration Testing in the OO Context

Because object-oriented software does not have a hierarchical control structure, conventional top-down and bottom-up integration strategies have little meaning.

There are two different strategies for integration testing of OO systems.

The first, thread-based testing, integrates the set of classes required to respond to one input or event for the system. Each thread is integrated and tested individually. Regression testing is applied to ensure that no side effects occur.

The second integration approach, use-based testing, begins construction of the system by testing those classes (called independent classes) that use very few server classes. Next, the layer of classes, called dependent classes, that use the independent classes is tested. This sequence of testing layers of dependent classes continues until the entire system is constructed. Unlike conventional integration, the use of drivers and stubs as replacement operations is avoided.

Cluster testing is one step in the integration testing of OO software. Here, a cluster of collaborating classes is exercised by designing test cases that attempt to uncover errors in the collaborations.

Page 74: Software Testing

Validation Testing in an OO Context

The validation of OO software focuses on user-visible actions and user-recognizable output from the system.

For validation tests, the use-cases drawn up by the tester are of great help.

The use-case provides a scenario that has a high likelihood of uncovering errors in user interaction requirements.

Conventional black-box testing methods can be used to drive validation tests. Test cases may also be derived from the object-behavior model and from the event flow diagram created as part of OOA.

Page 75: Software Testing

TEST CASE DESIGN FOR OO SOFTWARE

The following are the points to be considered for OO test case design.

Each test case should be uniquely identified and explicitly associated with the class to be tested.

The purpose of the test should be stated.

A list of testing steps should be developed for each test and should contain:
A list of specified states for the object that is to be tested.
A list of messages and operations that will be exercised as a consequence of the test.
A list of exceptions that may occur as the object is tested.
A list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test).
Supplementary information that will aid in understanding or implementing the test.

Page 76: Software Testing

The Test Case Design Implications of OO Concepts

The OO class is the target for test case design. Because attributes and operations are encapsulated, testing operations outside of the class is generally unproductive.

Encapsulation makes it difficult to obtain state information of a class, but some built in operations may be provided to report the value for class attributes.

Inheritance may also lead to further challenges in testing as new context of usage requires retesting.

Page 77: Software Testing

Applicability of Conventional Test Case Design Methods

White-box testing methods can be applied to the operations defined for a class.

Basis path, loop testing, or data flow techniques can help to ensure that every statement in an operation has been tested.

The concise structure of many class operations causes some to argue that the effort applied to white-box testing might be better redirected to tests at the class level.

Black-box testing methods are just as appropriate for OO systems as for conventional systems; use-cases can provide useful input for the design of black-box tests.

Page 78: Software Testing

Testing methods

o Fault-Based Testingo The tester looks for plausible faults (i.e., aspects of the

implementation of the system that may result in defects). To determine whether these faults exist, test cases are designed to exercise the design or code.

o Because the product or system must conform to customer requirements, the preliminary planning required to perform fault based testing begins with the analysis model.

Class Testing and the Class Hierarchy Inheritance does not obviate the need for thorough

testing of all derived classes. In fact, it can actually complicate the testing process.

Scenario-Based Test Design Scenario-based testing concentrates on what the user

does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as tests.

Page 79: Software Testing

TESTING METHODS APPLICABLE AT THE CLASS LEVEL

Random testing identify operations applicable to a class define constraints on their use identify a minimum test sequence

an operation sequence that defines the minimum life history of the class (object)

generate a variety of random (but valid) test sequencesexercise other (more complex) class instance life histories
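
A hypothetical sketch (the account operations are assumptions borrowed from the account example on the later partition-testing slides): the minimum life history open-setup-deposit-withdraw-close is kept intact while randomly chosen valid operations are inserted in between.

# Minimal sketch: random (but valid) operation sequences for an account class.
import random

MIDDLE_OPERATIONS = ["deposit", "balance", "summarize", "creditLimit"]

def random_test_sequence(extra_ops=3, rng=random):
    # Keep the minimum life history intact and insert randomly chosen
    # valid operations between the first deposit and the final withdraw.
    middle = [rng.choice(MIDDLE_OPERATIONS) for _ in range(extra_ops)]
    return ["open", "setup", "deposit"] + middle + ["withdraw", "close"]

for _ in range(3):
    print(random_test_sequence())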

Page 80: Software Testing

TESTING METHODS APPLICABLE AT THE CLASS LEVEL

Partition Testing reduces the number of test cases required to test a class in much the same way as equivalence partitioning does for conventional software.

Three approaches: state-based partitioning, attribute-based partitioning, and category-based partitioning.

Page 81: Software Testing

State-based partitioning categorizes and tests operations based on their ability to change the state of a class.

Considering the account class, state operations include deposit and withdraw, whereas non-state operations include balance, summarize, and creditLimit.

Tests are designed so that operations that change state and those that do not change state are exercised separately. Therefore:

Test case p1: open•setup•deposit•deposit•withdraw•withdraw•close
Test case p2: open•setup•deposit•summarize•creditLimit•withdraw•close

Test case p1 changes state, while test case p2 exercises operations that do not change state (other than those in the minimum test sequence). A small sketch of the two sequences follows below.
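
A minimal Python sketch of the two sequences (the Account class here is a hypothetical stand-in for the account class mentioned on the slide; its operations and amounts are assumptions):

# Hypothetical Account class used to run the state-based partition sequences.
class Account:
    def __init__(self):
        self.balance = 0.0
        self.is_open = False

    def open(self):        self.is_open = True
    def setup(self):       self.credit_limit = 500.0
    def deposit(self):     self.balance += 100.0            # state operation
    def withdraw(self):    self.balance -= 50.0             # state operation
    def summarize(self):   return f"balance={self.balance}" # non-state operation
    def creditLimit(self): return self.credit_limit         # non-state operation
    def close(self):       self.is_open = False

def run(sequence):
    account = Account()
    for operation in sequence:
        getattr(account, operation)()
    return account.balance

p1 = ["open", "setup", "deposit", "deposit", "withdraw", "withdraw", "close"]
p2 = ["open", "setup", "deposit", "summarize", "creditLimit", "withdraw", "close"]
print(run(p1), run(p2))   # p1 exercises state changes; p2 mostly non-state queries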

Page 82: Software Testing

Attribute-based partitioning

Categorizes and tests operations based on the attributes that they use.

For the account class, the attributes balance and creditLimit can be used to define partitions. Operations are divided into three partitions:
(1) operations that use creditLimit,
(2) operations that modify creditLimit, and
(3) operations that do not use or modify creditLimit.

Test sequences are then designed for each partition.

Page 83: Software Testing

Category-based partitioning

Categorizes and tests operations based on the generic function each performs.

For example, operations in the account class can be categorized into initialization operations (open, setup), computational operations (deposit, withdraw), queries (balance, summarize, creditLimit), and termination operations (close).

Page 84: Software Testing

TESTING METHODS APPLICABLE AT THE CLASS LEVEL: Inter-class Testing

Inter-class testing comprises Multiple Class Testing and Tests Derived from Behavior Models.

Multiple Class Testing uses the following sequence of steps to generate multiple-class random test cases:

For each client class, use the list of class operations to generate a series of n random test sequences. The operations will send messages to other server classes.

For each message that is generated, determine the collaborator class and the corresponding operation in the server object.

For each operation in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.

For each of the messages, determine the next level of operations that are invoked and incorporate these into the test sequence.

Page 85: Software Testing

TESTS DERIVED FROM BEHAVIOR MODELS

The state transition diagram (STD) represents the dynamic behavior of a class.

The STD for a class can be used to help derive a sequence of tests that will exercise the dynamic behavior of the class.

The tests to be designed should achieve full state coverage; that is, the operation sequences should cause the class to transition through all allowable states.

More test cases could be derived to make sure that all behaviors of the class have been adequately exercised.

Page 86: Software Testing

Overview of Website Testing

The basic testing concepts remain the same for web applications and web testing.

However, web-based systems and applications reside on a network and interoperate with many operating systems, browsers, interface devices (such as mobile phones and computers), hardware platforms, and communication protocols.

This is the reason why there is a high probability of errors, and these errors are a challenge for web engineers.

Page 87: Software Testing

Quality…

Content of the website or web application is evaluated at both the syntactic and the semantic level.

At the syntactic level, assessment of text-based documents involves checking spelling, punctuation, and grammar.

At the semantic level, the correctness of the presented information, its consistency across content objects and associated objects, and the absence of ambiguity or redundancy are evaluated.

The website's usability is tested to make sure that each and every category of user can easily use and work with the interface.

Page 88: Software Testing

Quality…

Navigation on the website is tested to make sure that every navigation mechanism is working. Navigation errors can include dead links, improper links, and erroneous links.

Performance testing of the website is carried out under various operating conditions, configurations, and loads in order to make sure that the system is responsive to user interaction.

Web application compatibility is tested by executing the application on various kinds of hosts with different configurations on both the client and the server side.

Security testing is carried out by assessing potential vulnerabilities and attempting to exploit each one.

Page 89: Software Testing

Errors found in Web environment

Some web application errors appear only at the interface level, for example only in a specific browser or on a specific mobile phone.

Certain errors result from incorrect design or from inaccurate HTML or other programming-language code.

Because a web application follows a client/server architecture, errors are very difficult to locate across the client, the server, or the network, i.e., the three-layer architecture.

It might be difficult or impossible to reproduce an error outside the environment in which it was originally encountered.

Page 90: Software Testing

Planning Software Testing

The testing process for a project consists of three high-level tasks— test planning, test case design, and test execution.

In general, in a project, testing commences with a test plan and terminates with successful execution of acceptance testing.

Page 91: Software Testing

Test plan

Test plan is a general document for the entire project that defines the scope, approach to be taken, and the schedule of testing, as well as identifies the test items for testing and the personnel responsible for the different activities of testing.

The test planning can be done well before the actual testing commences and can be done in parallel with the coding and design activities.

The inputs for forming the test plan are: (1) the project plan, (2) the requirements document, and (3) the architecture or design document.

Page 92: Software Testing

Test Plan Specification

Test unit specification: A test unit is a set of one or more modules that form the software under test (SUT). The basic idea behind forming test units is to make sure that testing is performed incrementally, with each increment including only a few aspects that need to be tested.

Features to be tested: These include all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design documents. These may include functionality, performance, design constraints, and attributes.

Approach for testing: The approach for testing specifies the overall approach to be followed in the current project. The techniques that will be used to judge the testing effort should also be specified. This is sometimes called the testing criterion or the criterion for evaluating the set of test cases used in testing.

Page 93: Software Testing

Test Plan Specification

Test deliverables: Testing deliverables should be specified in the test plan before the actual testing begins. Deliverables could include a list of the test cases that were used, detailed results of testing including the list of defects found, a test summary report, and data about the code coverage.

Schedule and task allocation: The schedule should be consistent with the overall project schedule. The detailed plan may list all the testing tasks and allocate them to the test resources responsible for performing them.

Page 94: Software Testing

Test Case Execution

With the specification of test cases, the next step in the testing process is to execute them.

This step is also not straightforward.

The test case specifications only specify the set of test cases for the unit to be tested. However, executing the test cases may require the construction of driver modules or stubs.

It may also require modules to set up the environment as stated in the test plan and test case specifications. Only after all these are ready can the test cases be executed.

Page 95: Software Testing

Test Case Execution and Analysis; Defect Logging and Tracking

During test case execution, defects are found. These defects are then fixed and testing is done again to verify the fix.

To facilitate reporting and tracking of defects found during testing (and other quality control activities), defects found are often logged.

Defect logging is particularly important in a large software project which may have hundreds or thousands of defects that are found by different people at different stages of the project.

Often the person who fixes a defect is not the person who finds or reports the defect.

Defect logging and tracking is considered one of the best practices for managing a project, and is followed by most software organizations.

Page 96: Software Testing

Life Cycle of a Defect

1. When a defect is found, it is logged in a defect control system, along with sufficient information about the defect.
2. The defect is then in the state “submitted.”
3. The job of fixing the defect is assigned to some person, who is generally the author of the document or code in which the defect is found.
4. The defect then enters the “fixed” state.
5. A defect that is fixed is still not considered fully done; the successful fixing of the defect must be verified.
6. Once the fix is verified, the defect can be marked as “closed” (see the sketch below).
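
A minimal Python sketch of this defect life cycle as a simple state machine (the state names follow the slide; the reopen transition and all other details are hypothetical additions for illustration):

# Minimal sketch of the defect life cycle described above.
ALLOWED_TRANSITIONS = {
    "submitted": {"fixed"},            # a fix is produced by the assignee
    "fixed": {"closed", "submitted"},  # verified and closed, or reopened (illustrative)
    "closed": set(),
}

class Defect:
    def __init__(self, description, assignee):
        self.description = description
        self.assignee = assignee
        self.state = "submitted"       # logged in the defect control system

    def move_to(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

bug = Defect("crash on empty input", assignee="author of the module")
bug.move_to("fixed")    # defect fixed by the assignee
bug.move_to("closed")   # fix verified, defect closed
print(bug.state)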

Page 97: Software Testing

References

1. Software Testing: Concepts & Practices, Narosa.

2. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 5th Edition, McGraw-Hill International.

Page 98: Software Testing

Thank You!