Chapter 3 SOFTWARE TESTING PROCESS. Testing and Implementation.
1. SOFTWARE TESTING PROCESS
2. Testing Process
- Design test cases -> prepare test data -> run the program with the test data -> compare results to the test cases -> produce test reports.
3. Testing Process Goal
4. Validation Testing
- To demonstrate to the developer and the system customer that the software meets its requirements;
- A successful test shows that the system operates as intended.
5. Defect Testing
- To discover faults or defects in the software where its behavior is incorrect or not in conformance with its specification;
- A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system.
6. Effectiveness vs Efficiency
- Effectiveness: the relative ability of a testing strategy to find bugs in the software.
- Efficiency: the relative cost of finding a bug in the software under test.
7. What is a successful test?
- In validation testing: the status of a completed test case whose actual results are the same as the expected results.
- In defect testing: the status of a completed test case whose actual results differ from the expected ones.
- In both senses a successful test is one we want to happen: it either builds confidence or exposes a defect.
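The two notions of success above can be illustrated with a minimal sketch; the function, its name, and the inputs are hypothetical, chosen only to show both views of a "successful" test.

```python
# Hypothetical function under test: a simple discount calculator.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

# Validation view: a successful test confirms actual == expected,
# showing the system operates as intended.
expected = 90.0
actual = apply_discount(100, 10)
assert actual == expected

# Defect view: a successful test makes the system misbehave. A boundary
# input such as a negative percentage reveals a likely defect here: the
# function silently *raises* the price instead of rejecting the input.
suspicious = apply_discount(100, -10)
print("boundary input produced:", suspicious)
```

Both runs are "successful" tests, but for different goals: the first demonstrates conformance, the second exposes missing input validation.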
8. Software Testing Features
- The different levels of the system that testing addresses.
- Some of the approaches to building and applying tests.
- How we manage the testing process to maximize the effectiveness and efficiency of the process for a given product.
9. Testing Scope
- Unit testing (testing in the small): exercising the smallest executable units of the system.
- Integration testing (testing the build): finding problems in the interaction between components.
- System testing (testing in the large): putting the entire system to test.
10. Testing Categorization
11. Testing in the small
- Exercising the smallest individually executable code units (isolation).
- It is a defect testing process.
- Assure correct functional behavior of units.
- Usually performed by programmers.
12. (..cont) Testing in the small
- Individual functions or methods within an object;
- Object classes with several attributes and methods;
- Composite components with defined interfaces used to access their functionality.
13. (..cont) Testing in the small
- Complete test coverage of a class involves:
- Testing all operations associated with an object;
- Setting and interrogating all object attributes;
- Exercising the object in all possible states.
- Inheritance makes it more difficult to design object class tests as the information to be tested is not localized.
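The coverage goals above can be sketched as a small unit test; the `Account` class, its methods, and the test names are hypothetical, standing in for any class under test.

```python
import unittest

# Hypothetical class under test, with one attribute and two operations.
class Account:
    def __init__(self):
        self.balance = 0                    # attribute to set and interrogate

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class AccountTest(unittest.TestCase):
    def test_all_operations(self):
        # Exercise every operation and interrogate the attribute.
        a = Account()
        a.deposit(50)
        a.withdraw(20)
        self.assertEqual(a.balance, 30)

    def test_empty_state(self):
        # Exercise the object in its empty state, where withdraw must fail.
        a = Account()
        with self.assertRaises(ValueError):
            a.withdraw(1)

unittest.main(argv=["account_test"], exit=False)
```

Each test method maps to one coverage goal: all operations tested, attributes set and read back, and each distinct state (empty vs. funded) exercised.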
14. An Example of Object Class Testing
- Need to define test cases for reportWeather, calibrate, test, startup, and shutdown.
- Using a state model, identify sequences of state transitions to be tested and the event sequences that cause these transitions.
- Example: waiting -> calibrating -> testing -> transmitting -> waiting.
WeatherStation: identifier; reportWeather(), calibrate(instruments), test(), startup(instruments), shutdown(instruments).
15. Testing the build
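A minimal sketch of that state-model test, assuming a toy state machine inside the class (the method bodies and instrument arguments are assumptions; only the interface names and the transition sequence come from the slide):

```python
# WeatherStation with a toy state machine so the example transition
# sequence waiting -> calibrating -> testing -> transmitting -> waiting
# can be driven and checked by a test.
class WeatherStation:
    def __init__(self, identifier):
        self.identifier = identifier
        self.state = "shutdown"

    def startup(self, instruments):
        self.state = "waiting"

    def calibrate(self, instruments):
        assert self.state == "waiting"
        self.state = "calibrating"

    def test(self):
        assert self.state == "calibrating"
        self.state = "testing"

    def reportWeather(self):
        assert self.state == "testing"
        self.state = "transmitting"
        self.state = "waiting"              # transmission completes

    def shutdown(self, instruments):
        self.state = "shutdown"

# The test case is the event sequence that causes the transitions.
ws = WeatherStation("WS-21")
ws.startup(["thermometer"])
ws.calibrate(["thermometer"])
ws.test()
ws.reportWeather()
assert ws.state == "waiting"                # back in the initial state
```

The assertions inside each method make any out-of-sequence call fail immediately, which is exactly what a state-model test is meant to detect.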
- Exercising two or more units or components.
- Detect interface errors;
- Assure the functionality of the combined units.
- Performed by programmers or a testing group.
- Strategy for combining units?
- Compatibility with third-party components?
- Correctness of third-party components?
17. Integration Testing
- Involves building a system from its components and testing it for problems that arise from component interactions.
- To simplify error localization, systems should be incrementally integrated.
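Incremental integration can be sketched by integrating one real component against a stub, so a failure is localized to that single interface; the `Parser` and `DatabaseStub` names and behavior are hypothetical.

```python
# Integrate the real Parser against a stub Database first. If this
# fails, the fault must lie in the parser/database interface, not in
# the (not-yet-integrated) real database.
class DatabaseStub:
    def __init__(self):
        self.saved = []

    def save(self, record):
        self.saved.append(record)      # record the call instead of persisting

class Parser:
    def __init__(self, db):
        self.db = db

    def load(self, line):
        name, value = line.split("=")             # interface under test
        self.db.save((name.strip(), value.strip()))

db = DatabaseStub()
Parser(db).load("timeout = 30")
assert db.saved == [("timeout", "30")]
```

Once this pairing passes, the real database replaces the stub and the next component is added, one increment at a time.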
18. Incrementally Integrated
19. Interface Testing
- To detect faults due to interface errors or invalid assumptions about interfaces.
- Particularly important for object-oriented development as objects are defined by their interfaces.
20. Interface types
- Parameter interfaces
- Data passed from one procedure to another.
- Shared memory interfaces
- A block of memory is shared between procedures or functions.
- Procedural interfaces
- A sub-system encapsulates a set of procedures to be called by other sub-systems.
- Message passing interfaces
- Sub-systems request services from other sub-systems.
21. Interface Error
- Interface misuse
- A calling component calls another component and makes an error in its use of the interface (e.g., parameters in the wrong order).
- Interface misunderstanding
- A calling component embeds incorrect assumptions about the behavior of the called component.
22. (..cont) Interface Error
- Timing errors
- The called and the calling component operate at different speeds, so out-of-date information is accessed.
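The interface misuse error above (parameters in the wrong order) can be sketched as follows; `schedule` and its parameters are hypothetical, and the type check stands in for any interface-level validation.

```python
# An interface that validates its parameters catches misuse at the
# boundary instead of failing silently deeper in the system.
def schedule(task_name, priority):
    if not isinstance(task_name, str) or not isinstance(priority, int):
        raise TypeError("expected schedule(task_name: str, priority: int)")
    return (task_name, priority)

# Correct use of the interface.
assert schedule("backup", 3) == ("backup", 3)

# Interface misuse: the caller swaps the parameters.
try:
    schedule(3, "backup")
except TypeError as err:
    print("interface error caught:", err)
```

Without the check, the swapped call would succeed and the error would surface only later, far from its cause, which is what makes interface errors hard to localize.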
23. Testing in the large: System
- Exercising the functionality, performance, reliability, and security of the entire system.
- Find errors in the overall system behavior;
- Establish confidence in system functionality;
- Validate non-functional system requirements.
- Usually performed by a separate testing group.
24. Testing in the large: Acceptance
- Operating the system in the user environment with standard user input scenarios.
- Evaluate whether the system meets the customer's criteria;
- Determine whether the customer will accept the system.
- Usually performed by end users.
25. Testing in the large: Operation
- Testing modified versions of a previously validated system.
- Assuring that changes to the system have not introduced new errors.
- Often automated; performed by developers or by a regression test group.
- Capture/replay (CR) tools record test sessions so they can be re-run against modified versions.
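Regression testing with a capture/replay flavor can be sketched like this; the tokenizer functions and the captured inputs are hypothetical, standing in for a validated version and its modified successor.

```python
# Capture: run a set of inputs through the previously validated version
# and record the outputs as the expected results.
def tokenize_v1(text):
    return text.lower().split()

captured = {s: tokenize_v1(s) for s in ["Hello World", "A  b C"]}

# Replay: run the same inputs through the modified version and compare.
def tokenize_v2(text):
    return [w for w in text.lower().split(" ") if w]

for inp, expected in captured.items():
    assert tokenize_v2(inp) == expected, f"regression on input {inp!r}"
print("no regressions detected")
```

Any mismatch means the change introduced a new error, which is exactly the question regression testing is meant to answer.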
26. Test Generation Method
- Fault-based: the objective is to find faults in the software (e.g., unit testing).
- Model-based: uses a data or behavior model of the software (e.g., a finite state machine).
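Model-based generation from a finite state machine can be sketched as follows; the machine reuses the weather-station states from the earlier example, and the generation strategy (one test per transition, reached by a shortest path) is one simple assumption among many possible coverage criteria.

```python
from collections import deque

# Behavior model: state -> {event: next_state}.
fsm = {
    "waiting":      {"calibrate": "calibrating"},
    "calibrating":  {"test": "testing"},
    "testing":      {"report": "transmitting"},
    "transmitting": {"done": "waiting"},
}

def events_to(target, start="waiting"):
    """Breadth-first search for the shortest event sequence reaching target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, events = queue.popleft()
        if state == target:
            return events
        for event, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, events + [event]))
    return None

# One generated test case per transition: reach the source state,
# then fire the event that exercises the transition.
tests = [events_to(src) + [event] for src in fsm for event in fsm[src]]
for t in tests:
    print(t)
```

Each printed event sequence is a test case; together they cover every transition in the model, a common adequacy criterion for FSM-based testing.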
27. Test Management
- Effective documentation and control of the whole test process.
- Documentation of tests and control of the test codebase.
- Independence of test activities.
- Termination: deciding when to stop.
- Managing effective reuse.
28. (..cont) Test Management
- Test case generation can involve massive amounts of data for