

Dept of CSE Software Engineering

Prepared By: N Hari Babu, HOD & Associate Professor, NRI Inst of Tech

UNIT – VI Syllabus: Testing Strategies: A strategic approach to software testing, test strategies for conventional software, Black-Box and White-Box testing, Validation testing, System testing,

the art of Debugging.

Product metrics: Software Quality, Metrics for Analysis Model, Metrics for Design Model,

Metrics for source code, Metrics for testing, Metrics for maintenance.

1. What is software testing? Discuss about Strategic approach to

software testing?

Testing is an essential activity in a software process. Planning for software testing

involves establishing an overall testing strategy for a project.

The testing strategy includes organizing testing at three levels—unit, integration, and

high-order. It also involves procuring tools to automate testing and identifying the people

who will perform testing. In addition, planning is required for debugging—the process of

diagnosing and fixing the problems detected during testing.

A software testing strategy includes:

• Planning for Software Testing

• An Overview of the Testing Strategy

• Unit Testing

• Integration Testing

• High-order Testing

• Roles-and Organization for Testing

• Debugging

Strategic Approach to Testing - 1

• Testing begins at the component level and works outward toward the integration

of the entire computer-based system.

• Different testing techniques are appropriate at different points in time.

• The developer of the software conducts testing and may be assisted by

independent test groups for large projects.

• The role of the independent tester is to remove the conflict of interest inherent

when the builder is testing his or her own product.

Strategic Approach to Testing - 2

• Testing and debugging are different activities.

• Debugging must be accommodated in any testing strategy.

• Need to consider verification issues

• Need to Consider validation issues


Strategic Testing Issues - 1

•Specify product requirements in a quantifiable manner before testing starts.

•Specify testing objectives explicitly.

•Identify the user classes of the software and develop a profile for each.

•Develop a test plan that emphasizes rapid cycle testing.

Strategic Testing Issues - 2

•Build robust software that is designed to test itself (e.g. use anti-bugging).

•Use effective formal reviews as a filter prior to testing.

•Conduct formal technical reviews to assess the test strategy and test cases.
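"Anti-bugging" above means building software that checks its own assumptions at run time. A minimal sketch in Python (the function and its checks are illustrative, not from the source):

```python
def average(values):
    """Return the arithmetic mean, with anti-bugging checks built in."""
    # Anti-bugging: validate assumptions instead of silently computing garbage.
    assert isinstance(values, (list, tuple)), "values must be a sequence"
    assert len(values) > 0, "cannot average an empty sequence"
    result = sum(values) / len(values)
    # Sanity check: the mean must lie between the minimum and maximum.
    assert min(values) <= result <= max(values)
    return result

print(average([2, 4, 6]))  # 4.0
```

When a check fails, the program stops at the point of the error rather than propagating bad data, which makes later testing and debugging far cheaper.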

2. Explain the various Stages of Testing in software?

•Module or unit testing.

•Integration testing,

•Function testing.

•Performance testing.

•Acceptance testing.

•Installation testing.

3. What is a Unit Testing? Write the brief notes about different testing?

Unit testing can involve the following activities:

•Program reviews.

•Formal verification.

•Testing the program itself - black-box and white-box testing.

Black Box or White Box?

•Maximum number of logic paths - determines whether white-box testing is feasible.

•Nature of input data.

•Amount of computation involved.

•Complexity of algorithms.

Unit Testing Details

•Interfaces tested for proper information flow.

•Local data are examined to ensure that integrity is maintained.

•Boundary conditions are tested.

•Basis path testing should be used.

•All error handling paths should be tested.

•Drivers and/or stubs need to be developed to test incomplete software.
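A driver exercises the unit under test; a stub stands in for components it depends on that are not yet written. A sketch with hypothetical names (the pricing module and its values are invented for illustration):

```python
# Hypothetical unit under test: a discount calculator that normally
# calls a pricing component that is not yet implemented.
def price_lookup_stub(item_id):
    # Stub: stands in for the unfinished pricing component,
    # returning canned data so the unit can run in isolation.
    return {"A1": 100.0, "B2": 250.0}[item_id]

def discounted_price(item_id, rate, lookup=price_lookup_stub):
    """Unit under test: applies a discount rate to a looked-up price."""
    assert 0.0 <= rate <= 1.0  # anti-bugging check on the input domain
    return lookup(item_id) * (1.0 - rate)

def driver():
    # Driver: a small control program that feeds test inputs to the
    # unit and checks boundary conditions (0% and 100% discounts).
    assert discounted_price("A1", 0.0) == 100.0
    assert discounted_price("A1", 1.0) == 0.0
    assert discounted_price("B2", 0.5) == 125.0
    print("unit tests passed")

driver()
```

Both driver and stub are throwaway code: they exist only so the unit can be tested before the rest of the system is complete.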


Generating Test Data

•Ideally want to test every permutation of valid and invalid inputs

•Equivalence partitioning is often required to reduce an effectively infinite set of test cases to a manageable one:

–Every possible input belongs to one of the equivalence classes.

–No input belongs to more than one class.

–Each chosen point is representative of its entire class.
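The three properties above can be made concrete with a small example (the eligibility rule and its age range are hypothetical):

```python
# Hypothetical function accepting ages 18-65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence classes partition the input domain:
#   invalid-low  : age < 18
#   valid        : 18 <= age <= 65
#   invalid-high : age > 65
# Every input belongs to exactly one class; one representative per
# class replaces exhaustive testing of every possible value.
representatives = {"invalid-low": 10, "valid": 40, "invalid-high": 80}

assert not is_eligible(representatives["invalid-low"])
assert is_eligible(representatives["valid"])
assert not is_eligible(representatives["invalid-high"])

# Boundary values sit at the edges of the valid class and are
# tested in addition to the representatives.
assert is_eligible(18) and is_eligible(65)
assert not is_eligible(17) and not is_eligible(66)
```

Three representatives plus four boundary values cover the whole integer input domain.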

Regression Testing

•Check for defects propagated to other modules by changes made to the existing program:

–Representative sample of existing test cases is used to exercise all software

functions.

–Additional test cases focusing on software functions likely to be affected by the change.

–Test cases that focus on the changed software components.

Integration Testing

•Bottom-up testing (test harness).

•Top-down testing (stubs).

•Modified top-down testing - test levels independently.

•Big Bang.

•Sandwich testing.

Top-Down Integration Testing

•Main program used as a test driver and stubs are substitutes for components

directly subordinate to it.

•Subordinate stubs are replaced one at a time with real components (following the

depth-first or breadth-first approach).

•Tests are conducted as each component is integrated.

•On completion of each set of tests, another stub is replaced with a real component.

•Regression testing may be used to ensure that new errors are not introduced.
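The stub-replacement cycle above can be sketched in a few lines; the component names are invented for illustration:

```python
# Hypothetical system: main() depends on two subordinate components.
def report_stub():
    # Stub for the not-yet-integrated reporting component.
    return "stub report"

def summarize(data):
    # Real component, already integrated.
    return f"{len(data)} records"

def main(data, report=report_stub):
    # The main program acts as the test driver; subordinates are
    # stubs until their real implementations are swapped in.
    return summarize(data) + " | " + report()

# Step 1: conduct tests with the stub in place.
assert main([1, 2, 3]) == "3 records | stub report"

# Step 2: replace the stub with the real component and re-test;
# regression testing verifies no new errors were introduced.
def report_real():
    return "full report"

assert main([1, 2, 3], report=report_real) == "3 records | full report"
```

Each replacement is followed by a full re-run of the existing tests, so an error introduced by integration is caught immediately.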

Bottom-Up Integration Testing

•Low level components are combined in clusters that perform a specific software

function.

•A driver (control program) is written to coordinate test case input and output.

•The cluster is tested.

•Drivers are removed and clusters are combined moving upward in the program

structure.


Thread Testing: testing the set of actions associated with a particular module function.

Validation Testing

•Ensure that each function or performance characteristic conforms to its specification.

•Deviations (deficiencies) must be negotiated with the customer to establish a means for

resolving the errors.

•Configuration review or audit is used to ensure that all elements of the software configuration have been properly developed, cataloged, and documented to allow support during the maintenance phase.

Acceptance Testing

•Making sure the software works correctly for the intended user in his or her normal work environment.

•Alpha test - a version of the complete software is tested by the customer under the supervision of the developer, at the developer’s site.

•Beta test - a version of the complete software is tested by the customer at his or her own site, without the developer being present.

Acceptance Testing Approaches

•Benchmark test.

•Pilot testing.

•Parallel testing.

System Testing

•Recovery testing:-checks system’s ability to recover from failures

•Security testing:-verifies that system protection mechanism prevents improper

penetration or data alteration

•Stress testing:-program is checked to see how well it deals with abnormal resource

demands

•Performance testing:-tests the run-time performance of software


4. What are the steps involved in Performance Testing?

•Stress test.

•Volume test.

•Configuration test (hardware & software).

•Compatibility.

•Regression tests.

•Security tests.

•Timing tests.

•Environmental tests.

•Quality tests.

•Recovery tests.

•Maintenance tests.

•Documentation tests.

•Human factors tests.

5. Software Testing Life Cycle

•Establish test objectives.

•Design criteria (review criteria).

–Correct.

–Feasible.

–Coverage.

–Demonstrate functionality.

•Writing test cases.

•Testing test cases.

•Execute test cases.

•Evaluate test results.

6. What are the Testing Tools in software testing?

•Simulators.

•Monitors.

•Analyzers.

•Test data generators.

7. Write the short notes about Debugging and its approaches?

A bug is simply an error in the software.

•Debugging (removal of a defect) occurs as a consequence of successful testing.

•Some people are better at debugging than others.

•Is the cause of the bug reproduced in another part of the program?

•What “next bug” might be introduced by the fix that is being proposed?

•What could have been done to prevent this bug in the first place?


Debugging Approaches

•Brute force: -memory dumps and run-time traces are examined for clues to error causes

•Backtracking: -source code is examined by looking backwards from symptom to

potential causes of errors

•Cause elimination: -uses binary partitioning to reduce the number of potential locations where errors can exist
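Binary partitioning halves the set of suspect locations on every test, so an error hiding among thousands of candidates is isolated in a handful of steps. A sketch (the "change sets" and the defect position are hypothetical; tools such as git bisect automate the same idea over commits):

```python
# Cause elimination by binary partitioning: repeatedly halve the
# range of suspect changes until the one that introduced the
# defect is isolated.
def find_first_bad(is_bad, lo, hi):
    """Return the index of the first change for which is_bad is True,
    assuming all earlier changes are good and all later ones are bad."""
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid          # defect introduced at mid or earlier
        else:
            lo = mid + 1      # defect introduced after mid
    return lo

# Simulated defect introduced at change 11 of changes 0..15;
# only 4 tests are needed instead of up to 16.
assert find_first_bad(lambda i: i >= 11, 0, 15) == 11
```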

8. What is a software metric? Explain its role and need in today’s software world.

A software metric is a measure of some property of a piece of software or its

specifications. Since quantitative measurements are essential in all sciences, there is a

continuous effort by computer science practitioners and theoreticians to bring similar

approaches to software development. The goal is obtaining objective, reproducible and

quantifiable measurements, which may have numerous valuable applications in schedule

and budget planning, cost estimation, quality assurance testing, software debugging,

software performance optimization, and optimal personnel task assignments.

Software process and product metrics are quantitative measures that enable software

people to gain insight into the efficacy of the software process and the projects that are

conducted using the process as a framework. Basic quality and productivity data are

collected. These data are then analyzed, compared against past averages, and assessed to

determine whether quality and productivity improvements have occurred. Metrics are

also used to pinpoint problem areas so that remedies can be developed and the software

process can be improved.

Limitations

As software development is a complex process, with high variance on both

methodologies and objectives, it is difficult to define or measure software qualities and

quantities and to determine a valid and concurrent measurement metric, especially when

making such a prediction prior to the detail design. Another source of difficulty and

debate is in determining which metrics matter, and what they mean. The practical utility of software measurements has thus been limited to narrow domains, which include:

• Schedule

• Size/Complexity

• Cost

• Quality

A common goal of measurement may target one or more of the above aspects, or the balance between them, as an indicator of a team’s motivation or project performance.


9. Write the short notes on metrics for object oriented design?

Much about OO design is subjective - a good programmer “knows” what makes good code. But as designs scale in size and complexity, a more objective view can benefit both expert and novice:

• Size

• Complexity

• Coupling

• Sufficiency

• Completeness

• Cohesion

• Primitiveness

• Similarity

• Volatility

Size: defined in terms of four views:

• Population: static count of OO entities such as classes or operations

• Volume: identical to the population measure but taken dynamically at a given instant in time

• Length: measure of a chain of interconnected design elements

• Functionality: indirect indication of the value delivered to the customer

Complexity: viewed in terms of structural characteristics by examining how classes are

related to one another

Coupling: the physical connections between elements (e.g. the number of messages passed between objects)

Sufficiency: the degree to which a design component fully reflects all properties of the

application object it is modeling

Completeness: like sufficiency, but the abstraction is considered from multiple points of

view, rather than simply the current application

Cohesion: the degree to which the OO properties are part of the problem or design

domain

Primitiveness: applied to both operations and classes, the degree to which an operation is

atomic (similar to simplicity)

Similarity: the degree to which multiple classes are similar in terms of structure,

function, behavior, or purpose

Volatility: a measure of the likelihood that a change in design will occur


10. Explain the Metrics for Design Model, Metrics for source code,

Metrics for testing, Metrics for maintenance.

Measurement principles:

Formulation - derivation of software metrics appropriate for the software being

considered

Collection - accumulating data required to derive the formulated metrics

Analysis - computation of metrics and application of mathematical tools

Interpretation - evaluation of metrics in an effort to gain insight into the quality of

the system

Feedback - recommendations derived from the interpretation of metrics

Attributes of Effective Software Metrics

Simple and computable

Empirically and intuitively persuasive

Consistent and objective

Consistent in units and dimensions

Programming language independent

Effective mechanism for quality feedback

Function Based Metrics

The Function Point (FP) metric can be used as a means for predicting the size of a

system (derived from the analysis model).

• number of user inputs

• number of user outputs

• number of user inquiries

• number of files

• number of external interfaces

Metrics for Requirements Quality

Requirements quality metrics - completeness, correctness, understandability,

verifiability, consistency, achievability, traceability, modifiability, precision, and

reusability - design metric for each. See Davis.

E.g., let nr = nf + nnf , where

nr = number of requirements

nf = number of functional requirements

nnf = number of nonfunctional requirements

Specificity (lack of ambiguity):

Q1 = nui / nr

where nui = number of requirements for which all reviewers had identical interpretations.

For completeness:

Q2 = nu / (ni × ns)

nu = number of unique function requirements


ni = number of inputs specified

ns = number of states specified
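A worked example of the two ratios above, with hypothetical review counts:

```python
# Hypothetical review of a requirements specification.
nr = 12     # total requirements (functional + nonfunctional)
nui = 9     # requirements all reviewers interpreted identically
q1 = nui / nr              # specificity (lack of ambiguity)

nu = 18     # unique function requirements
ni = 4      # inputs specified
ns = 6      # states specified
q2 = nu / (ni * ns)        # completeness

# Both ratios lie in [0, 1]; values closer to 1 indicate a more
# unambiguous (q1) or more complete (q2) specification.
assert 0 <= q1 <= 1
```

Here q1 = 0.75 (three requirements were interpreted differently by at least one reviewer) and q2 = 0.75 (18 of the 24 input/state combinations are covered by a function requirement).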

High-Level Design Metrics

Structural Complexity

S(i) = [fout(i)]^2

fout(i) = fan-out of module i

Data Complexity

D(i) = v(i) / [fout(i) + 1]

v(i) = number of input and output variables to and from module i

System Complexity

C(i) = S(i) + D(i)
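The three design-complexity measures above can be computed directly from a module's fan-out and variable counts; a sketch for a hypothetical module:

```python
# Structural, data, and system complexity for a hypothetical
# module with fan-out 3 and 8 input/output variables.
def structural_complexity(fan_out):
    return fan_out ** 2                     # S(i) = [fout(i)]^2

def data_complexity(v, fan_out):
    return v / (fan_out + 1)                # D(i) = v(i) / [fout(i) + 1]

def system_complexity(v, fan_out):
    return structural_complexity(fan_out) + data_complexity(v, fan_out)

s = structural_complexity(3)   # 9
d = data_complexity(8, 3)      # 2.0
c = system_complexity(8, 3)    # 11.0
```

Modules with high C(i) are candidates for redesign, since structural complexity grows quadratically with fan-out.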

Morphology Metrics

size = n + a

n = number of modules

a = number of arcs (lines of control)

Arc-to-node ratio, r = a/n

depth = longest path from the root to a leaf

width = maximum number of nodes at any level
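These morphology measures can be computed from a program structure chart; a sketch over a small hypothetical module hierarchy:

```python
# Hypothetical program structure: each module maps to its
# subordinate modules (arcs = lines of control).
tree = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f"],
        "d": [], "e": [], "f": []}

n = len(tree)                                   # number of modules
a = sum(len(kids) for kids in tree.values())    # number of arcs
size = n + a                                    # size = n + a
r = a / n                                       # arc-to-node ratio

def depth(node):
    # Longest path (in arcs) from this node down to a leaf.
    kids = tree[node]
    return 0 if not kids else 1 + max(depth(k) for k in kids)

def width():
    # Maximum number of modules at any level, starting at root "a".
    level, widest = ["a"], 1
    while level:
        widest = max(widest, len(level))
        level = [k for node in level for k in tree[node]]
    return widest

# size = 11, r = 5/6, depth("a") = 2, width() = 3
```

A high arc-to-node ratio indicates heavy connectivity (coupling) relative to the number of modules.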


Metrics for Source Code

•n1 = the number of distinct operators

•n2 = the number of distinct operands

•N1 = the total number of operator occurrences

•N2 = the total number of operand occurrences

Length: N = n1 log2 n1 + n2 log2 n2

Volume: V = N log2(n1 + n2)
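These are Halstead's source-code measures; given the four counts, length and volume follow directly. A sketch with hypothetical counts for a small program:

```python
import math

# Hypothetical counts taken from a small program.
n1, n2 = 10, 16      # distinct operators, distinct operands
N1, N2 = 40, 55      # total operator and operand occurrences

# Estimated length from the distinct counts alone.
est_length = n1 * math.log2(n1) + n2 * math.log2(n2)

# Observed length and volume from the actual occurrence counts.
actual_length = N1 + N2                       # N = N1 + N2 = 95
volume = actual_length * math.log2(n1 + n2)   # V = N log2(n1 + n2)
```

Comparing the estimated length against the observed length is one way to sanity-check the counts; the volume is a language-independent size measure in "bits".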

Metrics for Testing

•Analysis, design, and code metrics guide the design and execution of test cases.



Metrics for Testing Completeness

•Breadth of testing - total number of requirements that have been tested.

•Depth of testing - percentage of independent basis paths covered by testing versus the total number of basis paths in the program.

•Fault profiles are used to prioritize and categorize errors uncovered.

Metrics for Maintenance

Software Maturity Index (SMI):

•MT = number of modules in the current release

•Fc = number of modules in the current release that have been changed

•Fa = number of modules in the current release that have been added

•Fd = number of modules from the preceding release that were deleted in the current release

SMI = [MT - (Fc + Fa + Fd)] / MT
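As SMI approaches 1.0, the product begins to stabilize, since fewer modules are being changed, added, or deleted per release. A quick computation with hypothetical release figures:

```python
# Software Maturity Index for a hypothetical release with 120
# modules, of which 8 were changed, 4 added, and 2 deleted.
def smi(mt, fc, fa, fd):
    return (mt - (fc + fa + fd)) / mt

index = smi(120, 8, 4, 2)   # 106/120, about 0.883
```

A release with no changes at all would score exactly 1.0, so tracking SMI across releases shows whether maintenance effort is settling down.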