
T. E. Potok - University of Tennessee

Software Engineering

Dr. Thomas E. Potok, Adjunct Professor UT

Research Staff Member ORNL


Testing


Software Engineering CS 594


Testing Objectives

To ensure that the final product works according to the requirements, and that the product works correctly in a wide variety of situations


Validation

Ensuring that the product meets the initial requirements or contract

Demonstration may be required to validate that the software performs as expected:
– Performance requirements
– Functional requirements
– Usability requirements


Methods

– Black box
– White box
– Random
– User centered
– Regression


Black Box

Treat the item to be tested as a black box

Concerned only with inputs and outputs

Easy test to perform
– Provide an input set, compare the results against the known correct answers

May not exercise key branches of the code
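The black-box approach can be sketched as follows. The function under test and the test cases here are hypothetical stand-ins; the point is that the test knows nothing about the internals, only input sets and their known correct answers.

```python
# Black-box sketch: exercise a routine purely through its inputs and
# outputs, with no knowledge of its internal structure.
def sort_under_test(items):
    # Stand-in for the real implementation being tested.
    return sorted(items)

# Input sets paired with known correct answers.
cases = [
    ([3, 1, 2], [1, 2, 3]),
    ([],        []),
    ([5],       [5]),
    ([2, 2, 1], [1, 2, 2]),
]

for given, expected in cases:
    actual = sort_under_test(given)
    assert actual == expected, f"{given!r}: got {actual!r}, expected {expected!r}"
print("all black-box cases passed")
```

Note that none of the cases above say anything about which branches inside the implementation were executed, which is exactly the weakness the slide points out.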


White Box

Test all of the possible paths within a section of code

Requires strong knowledge of the code, and great care in generating test input

The number of possible paths through a complex section of code can be very large

A very thorough test, but it may be impractical
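A small illustration of why white-box testing requires knowing the code: the hypothetical function below has two branch points, giving four paths, and each input is chosen deliberately to drive one of them.

```python
# White-box sketch: two branch points give four possible paths;
# each test input below is chosen to cover one path.
def ticket_rate(age, member):
    if age < 18:          # branch point 1
        rate = 0.5
    else:
        rate = 1.0
    if member:            # branch point 2
        rate *= 0.9
    return rate

# One input per path (age branch x membership branch).
assert ticket_rate(10, False) == 0.5    # young, non-member
assert ticket_rate(10, True)  == 0.45   # young, member
assert ticket_rate(30, False) == 1.0    # adult, non-member
assert ticket_rate(30, True)  == 0.9    # adult, member
print("all 4 paths covered")
```

With more branches, and especially loops, the path count grows multiplicatively, which is why full path coverage of complex code quickly becomes impractical.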


Random Testing

Randomly generate input, then validate that the output is correct

Easy test to perform

May not fully exercise the code
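A minimal random-testing sketch: inputs are generated at random, and each output is validated against properties the correct answer must satisfy. The function and input range are illustrative.

```python
import random

def absolute(x):
    return x if x >= 0 else -x   # code under test

random.seed(42)  # fixed seed so any failure is reproducible
for _ in range(1000):
    x = random.randint(-10**6, 10**6)
    y = absolute(x)
    # Oracle: result is non-negative and has the same magnitude.
    assert y >= 0 and (y == x or y == -x), f"failed for {x}"
print("1000 random cases passed")
```

A thousand random integers still may never hit a branch that triggers only on a specific value, which is the "may not fully exercise the code" caveat.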


User Centered

Determine what features and functions most users will use

Test the software based on expected usage patterns

Will find most bugs that users will encounter

Will miss bugs found by more sophisticated users
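One way to act on an expected usage pattern is to draw test actions in proportion to how often typical users invoke each feature. The feature names and percentages below are entirely hypothetical.

```python
import random

# Hypothetical usage profile: fraction of user actions per feature.
usage_profile = {
    "open_file": 0.50,   # half of all user actions
    "save_file": 0.30,
    "print":     0.15,
    "export":    0.05,   # rarely used; gets few tests
}

random.seed(0)
features = list(usage_profile)
weights = [usage_profile[f] for f in features]

# Draw 100 test actions following the expected usage pattern.
actions = random.choices(features, weights=weights, k=100)

# Common features dominate the run, as in typical user sessions;
# rarely used features (the "sophisticated user" paths) are undertested.
assert actions.count("open_file") > actions.count("export")
```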


Regression Testing

Store up test cases from previous releases

Run these tests for every new release:
– New changes will not “break” older functions
– The tests are well understood, and provide a good test of the overall system
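The banked-cases idea can be sketched as below. The function and the stored (input, expected output) pairs are illustrative; in practice the suite would live in files accumulated release after release.

```python
# Current release's implementation of a hypothetical pricing rule.
def discount(total):
    return total * 0.9 if total >= 100 else total

# Cases banked from previous releases: (input, expected output).
regression_suite = [
    (50, 50),       # saved from release 1.0
    (100, 90.0),    # saved from release 1.1
    (200, 180.0),   # saved from release 1.1
]

# Replay every stored case against the new build.
failures = [(x, discount(x), want)
            for x, want in regression_suite
            if discount(x) != want]
assert not failures, f"regression failures: {failures}"
print(f"{len(regression_suite)} regression cases passed")
```

If a change to `discount` alters any historical answer, the replay flags it before release rather than after.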


How Much Testing Is Required

Theoretically, there are a fixed number of errors in a section of software

Further, this number can be estimated given the size and complexity of the code

Testing this code can then be based on finding some percentage of the latent bugs in the system
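The estimation idea reduces to simple arithmetic. The defect density and target percentage below are made-up illustrative figures, not numbers from the slides.

```python
# Illustrative latent-defect estimate from code size.
# 15 defects/KLOC and the 95% target are assumed example values.
defects_per_kloc = 15
code_size_kloc = 40
target_fraction = 0.95   # stop after finding 95% of estimated defects

estimated_defects = defects_per_kloc * code_size_kloc
stop_after = round(estimated_defects * target_fraction)

print(f"estimated latent defects: {estimated_defects}")
print(f"testing continues until about {stop_after} are found")
```

In this sketch a 40 KLOC system is estimated to hold 600 latent defects, and testing would continue until roughly 570 had been found and fixed.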


Error Classification

It is even possible to classify the types of errors that will be found

If this is known, then the type of testing that is needed can be determined as well

For example, if performance errors are expected, then performance testing can be applied


Testing Types

Many different types of testing:
– Unit
– Functional
– System
– Usability
– Performance


Unit Testing

Verifying that a single module works correctly.

White box testing can be very effective, particularly for small modules

Testing is generally informal, often performed by the author of the code
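An informal unit test of a single module might look like the sketch below, here using Python's standard `unittest` framework on a hypothetical one-function module.

```python
import unittest

# The single "module" under test: one small function, verified in
# isolation, typically by its own author.
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_single_word(self):
        self.assertEqual(word_count("hello"), 1)

    def test_extra_spaces(self):
        # White-box choice: split() collapses runs of whitespace.
        self.assertEqual(word_count("  a  b  "), 2)

if __name__ == "__main__":
    # exit=False keeps the process alive after the tests run.
    unittest.main(argv=["word_count_test"], exit=False)
```

Because the author knows the module's internals, the cases can target specific behavior (such as the extra-spaces path), which is white-box testing at its most effective scale.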


Function Testing

This involves testing several modules that make up part of the functionality of the system

White box testing can be used, but is often impractical

Black box or random testing is often used

The testing is normally not done by the authors of the code


System Testing

This involves testing the entire system

White box is usually impractical

Black box and random testing can be used

User centered testing can be quite effective

Automated testing procedures can be used as well
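An automated system test drives the whole product through its outermost interface and treats everything inside as opaque. The `run_app` function below is a toy stand-in for a real system; the scenario exercises several features in sequence.

```python
# Toy "whole system": a key-value store driven by command strings.
# A real system test would drive the actual application entry point.
def run_app(commands):
    store, output = {}, []
    for cmd in commands:
        parts = cmd.split()
        if parts[0] == "set":
            store[parts[1]] = parts[2]
        elif parts[0] == "get":
            output.append(store.get(parts[1], "<missing>"))
    return output

# One end-to-end scenario: several operations in sequence,
# checked only by the system's externally visible output.
result = run_app(["set lang python", "get lang", "get other"])
assert result == ["python", "<missing>"]
print("system scenario passed")
```

Scenarios like this are easy to script and replay automatically, which is what makes automated procedures practical at the whole-system level.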


Usability Tests

Difficult test to perform

Find a collection of typical users

Videotape the users performing typical features of a product

Fix and adjust the system, then retest


Performance Test

Record various performance characteristics of the system:
– Response time
– CPU, memory, and disk utilization

Based on various operations performed

Under a variety of expected computer and software loads
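The response-time part of such a test can be sketched with a timer around a repeated operation. The operation and the load size here are illustrative; CPU, memory, and disk figures would come from OS-level tooling rather than this loop.

```python
import time

def operation():
    # Stand-in for a real request the system would serve.
    return sum(i * i for i in range(10_000))

samples = []
for _ in range(50):                 # simulated load: 50 requests
    start = time.perf_counter()
    operation()
    samples.append(time.perf_counter() - start)

avg_ms = 1000 * sum(samples) / len(samples)
worst_ms = 1000 * max(samples)
print(f"average response: {avg_ms:.3f} ms, worst: {worst_ms:.3f} ms")
```

Repeating the run under different background loads and input mixes gives the "variety of expected computer and software loads" the slide calls for.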


Quality

What defines quality?
– Error free?
– Easy to use?
– Wide functionality?

Can quality be too high?

When do you stop using a software package because of its quality?


Testing Goals

Provide the minimum amount of testing that ensures maximum quality for the user

Testing little-used features, or over-testing other features, is a waste of time

Removing every bug in a system takes a great deal of time and money, even though a user may never experience these errors


Mean Time to Failure

A typical measurement of quality is MTTF

A light bulb has an MTTF of many hours, and a known failure rate

Software may have a similar failure rate, but it is much harder to determine

In some cases these values are critical
– Telecommunications, military, or space travel
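For software, MTTF is typically estimated from observed intervals between failures. The hours below are illustrative data, not measurements.

```python
# Hours of operation observed between successive failures (example data).
uptime_hours_between_failures = [120.0, 95.0, 160.0, 105.0]

# MTTF is the mean of the observed failure-free intervals.
mttf = sum(uptime_hours_between_failures) / len(uptime_hours_between_failures)
failure_rate = 1 / mttf      # failures per hour of operation

print(f"MTTF: {mttf:.1f} hours, failure rate: {failure_rate:.4f}/hour")
```

The hard part for software, as the slide notes, is collecting enough representative failure data to make such intervals meaningful, unlike a light bulb whose failure statistics are well characterized.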