Unit 1: Introduction to Software Testing


1. Unit 1: Introduction to Software Testing (Reading: JKR Ch. 4, W Ch. 1-2)

2. What is software testing?

3. What is the goal of software testing?

4. Test Terminology

  • test case: A set of inputs, execution conditions, and a pass/fail criterion.
    • Example: height input is 5, width input is Hello; the test passes if the program produces an error message.
  • test or test execution: The activity of executing test cases and evaluating their results.
  • test suite: A set of test cases.
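The height/width example above can be encoded directly as an executable test case: concrete inputs plus a pass/fail criterion. The unit under test here, validateDimensions, is a hypothetical sketch, not from the lecture:

```java
// Illustrative sketch: validateDimensions is an assumed unit under test
// that accepts numeric dimensions and rejects anything else.
class DimensionTest {
    static String validateDimensions(String height, String width) {
        try {
            Double.parseDouble(height);
            Double.parseDouble(width);
            return "ok";
        } catch (NumberFormatException e) {
            return "error: dimensions must be numbers";
        }
    }

    public static void main(String[] args) {
        // Test case inputs: height = "5", width = "Hello".
        // Pass criterion: the program produces an error message.
        String result = validateDimensions("5", "Hello");
        System.out.println(result.startsWith("error")); // prints "true"
    }
}
```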

5. More Test Terminology

  • defect: A mistake in the source code (static)
    • Synonyms: fault, bug
  • infection: An incorrect program state while the program is running (dynamic)
  • failure: An observable incorrect program behavior (output)
  • debugging: The process of finding and correcting a fault in a program given a failure

6. From Defects to Failures

  • Programmer creates a defect
    • Doesn't necessarily mean the programmer is at fault
  • The defect causes an infection
  • The infection propagates
  • The infection causes a failure
  • Goal of testing is to find failures
    • With help of tools, could also find infections
    • Debugging is necessary to find the defect

7. Software Testing is Hard. Why?

8. Challenge of Testing

    class Roots {
        // Solve ax^2 + bx + c = 0
        public Roots(double a, double b, double c) { }

        // Result: values for x
        double root_one, root_two;
    }
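To see why even this small unit is challenging to test, consider one hypothetical completion of the skeleton (the solving logic and exception choices are assumptions, not from the lecture). Each special case needs its own test: a == 0, a negative discriminant, repeated roots, and ordinary two-root inputs.

```java
// Hypothetical completion of the Roots skeleton. The special cases
// below (a == 0, negative discriminant) each require a test case.
class Roots {
    // Result: values for x
    double root_one, root_two;

    // Solve ax^2 + bx + c = 0
    public Roots(double a, double b, double c) {
        if (a == 0) {
            throw new IllegalArgumentException("not a quadratic equation");
        }
        double discriminant = b * b - 4 * a * c;
        if (discriminant < 0) {
            throw new IllegalArgumentException("no real roots");
        }
        double sqrtD = Math.sqrt(discriminant);
        root_one = (-b + sqrtD) / (2 * a);
        root_two = (-b - sqrtD) / (2 * a);
    }
}
```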

9. Oracle Problem

  • Given a particular input, how do we know the expected output is correct?
  • How to address the oracle problem?
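One common way to address the oracle problem is to check a property the output must satisfy rather than predict the exact output. The sketch below (names and tolerance are illustrative assumptions) verifies a computed square root by substituting it back into the defining equation:

```java
// Property-based oracle sketch: instead of hard-coding the expected
// value of sqrt(2), check that result * result reproduces the input.
class PropertyOracle {
    static boolean isValidSqrt(double input, double result) {
        // The oracle: squaring the result should give back the input
        // within a small tolerance (assumed here to be 1e-9).
        return Math.abs(result * result - input) < 1e-9;
    }

    public static void main(String[] args) {
        double result = Math.sqrt(2.0);
        System.out.println(isValidSqrt(2.0, result)); // prints "true"
    }
}
```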

10. Other Techniques of Finding Bugs?

11. Program Analysis

  • Create software tools that detect defects by analyzing the source code automatically
  • Most tools look for specific properties:
    • Improper memory usage, race conditions, security violations
  • Benefits:
    • Can symbolically handle all inputs in a single analysis
    • Can potentially prove that a program is correct with respect to a property
    • Can be applied at any development stage
  • Downsides:
    • Proving a property is often computationally infeasible, so inaccuracy is introduced to address this issue
    • Not all behaviors can be effectively captured using automated analysis

12. Inspection

  • Inspection: manually analyzing the software
  • Inspection can be applied to essentially any document:
    • requirements
    • architectural and detailed design documents
    • test plans and test cases
    • program source code
  • Secondary benefits:
    • spreading good practices
    • instilling shared standards of quality
  • Downsides:
    • takes a considerable amount of time
    • reinspecting a changed component can be expensive

13. Verification and Validation (V&V)

  • Validation: does the software system meet the user's real needs?
    • Are we building the right software?
  • Verification: does the software system meet the requirements specifications?
    • Are we building the software right?

14. Verification and Validation (V&V)

[Diagram: the system is validated against the actual requirements and verified against the SW specs. Validation includes usability testing and user feedback; verification includes testing, inspections, and static analysis.]

15. Testing in the SW Development Process

  • Verification is one of the main phases of the waterfall model:
  • The earlier a defect is detected, the cheaper it is to fix.
  • Quality needs to be integrated into the entire process, not just added at the end.

16. Testing in the SW Development Process

  • Testing is executed late in the process since it requires code to run.
  • However, testing activities can begin much earlier in the development process.
  • Testing can begin as soon as code is available:
    • Unit testing
    • Integration testing
    • System testing
    • Regression testing
  • Testing is not finished when the product is released.

17. Early Testing Activities

  • Early test generation
    • Tests generated independently from code
    • Generation of test cases may highlight inconsistencies and incompleteness of the corresponding specifications
  • Planning the testing process
    • How much testing is needed?
    • What order do we test the software?
    • Risk management
  • Designing for test
    • Scaffolding permits partial code to be tested
    • Oracles check the results of tests

18. Unit Testing

  • Unit testing: Testing of a unit in the program.
    • Unit: the smallest part of the program assigned to a single developer.
      • Often a single function or class.
    • Focuses on the functionality of that unit.
    • Often done by the developer
    • Automated using unit testing tools
      • Visual Studio
      • JUnit
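A unit test in the spirit of JUnit can be sketched framework-free so it stands alone. The unit under test (countWords) and all names below are illustrative assumptions, not from the lecture:

```java
// Framework-free unit test sketch: each check pairs an input with a
// pass/fail criterion, the way a JUnit test method would.
class WordCounterTest {
    // The assumed unit under test: count whitespace-separated words.
    static int countWords(String s) {
        String trimmed = s.trim();
        if (trimmed.isEmpty()) return 0;
        return trimmed.split("\\s+").length;
    }

    public static void main(String[] args) {
        assertEquals(2, countWords("hello world"));
        assertEquals(0, countWords("   "));
        assertEquals(1, countWords("one"));
        System.out.println("all tests passed");
    }

    // Minimal stand-in for JUnit's assertEquals.
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```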

19. Integration Testing

  • Integration testing: Testing the interactions of the different units.
    • Assumes unit testing is complete or will catch errors within the unit.
    • Focuses on the interactions between units.
    • Depends on how the software is constructed.
      • Build-order: which units are built first.
      • Controllability: controlling the inputs necessary to carry out the test.
      • Observability: viewing the outputs necessary to determine if the test passed or not.
    • Especially important in object-oriented development.
    • Often done by a testing team.
    • Can be tested manually or can be automated.
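Controllability and observability are often achieved with test stubs. In this illustrative sketch (all names are assumptions, not from the lecture), a Clock stub lets the test fully control an input that is normally uncontrollable, and the returned string makes the result observable:

```java
// Stub-based integration test sketch: stampReport depends on a Clock
// interface, so a test can substitute a fixed clock.
class IntegrationSketch {
    interface Clock { long now(); }

    // Unit under test: prefixes a report body with a timestamp.
    static String stampReport(String body, Clock clock) {
        return "[" + clock.now() + "] " + body;
    }

    public static void main(String[] args) {
        // Controllability: the stub pins "time" to a known value...
        Clock fixed = () -> 1000L;
        // ...observability: the output can be checked exactly.
        String report = stampReport("daily totals", fixed);
        System.out.println(report); // prints "[1000] daily totals"
    }
}
```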

20. System Testing

  • System testing: Testing the entire system.
    • Requires the entire system to be present.
    • Used to test common usage scenarios.
      • Validation: Are we building the right software?
      • Verification: Are we building the software right?
    • Used to test non-functional requirements.
      • Performance
      • Compatibility
      • Usability
      • Security
    • Best tested manually.
      • Testing approaches are vastly different for non-functional requirements.

21. Regression Testing

  • Regression testing: A suite of tests used to make sure that the program has not regressed.
    • Used to make sure that new features / modifications do not break existing working functionality.
    • Consists of tests that capture all of the functionality of the program.
      • Especially important are tests that have found failures in the past!
    • Tests are automated.
      • Typically run before changes are committed.

22. Testing still needed late

  • Can provide a mechanism for detecting bugs after the product has been released in the field
  • Testing is needed for each
    • successive version of the software
    • time a bug has been fixed
  • Post-mortem testing analysis
    • Improving the testing process
    • Improving the design and development process

23. How much testing is needed?

24. Product Quality Goals

  • internal qualities: not directly visible to the end user
    • maintainability
    • reusability
  • external qualities: directly visible to the end user
    • usefulness qualities: usability, performance, portability, interoperability
    • dependability qualities: correctness, reliability, safety, robustness

25. Dependability Qualities

  • Correctness: A program is correct if it is consistent with its specification
    • seldom practical for non-trivial systems
  • Reliability: Likelihood of correct function for some unit of behavior
    • relative to a specification and usage profile
    • statistical approximation to correctness (100% reliable = correct)
  • Safety: Preventing hazards (certain undesirable behaviors)
  • Robustness: Acceptable (degraded) behavior under extreme conditions

26. Example: Dependability Qualities

  • Correctness, reliability: Let traffic pass according to correct pattern and central scheduling
  • Safety: Never signal conflicting greens
  • Robustness: Provide degraded function if needed
    • Blinking red / yellow is better than no lights
    • No lights is better than conflicting greens

27. Tradeoffs

  • From a managerial point-of-view, there are tradeoffs:
    • higher quality → more time
    • higher quality → more costs
  • For mass-market products, will typically sacrifice some level of quality to reduce time-to-market and costs.
  • Some software requires an extremely high level of quality
    • medical devices
    • avionics

28. Test Adequacy

  • Test adequacy measures how effective our test suite is.
    • Ideally, if the system passes an adequate suite of test cases, then it is correct.
  • How do we measure the effectiveness of a test suite?

29. Adequacy Criteria as Design Rules

  • Many design disciplines employ design rules:
    • Traces (on a chip, on a circuit board) must be at least ___ wide and separated by at least ___
    • The roof must have a pitch of at least ____ to shed snow
    • Interstate highways must not have a grade greater than 6% without special review and approval
  • Design rules do not guarantee good designs
    • Good design depends on talented, creative, disciplined designers.
    • Design rules help them avoid or spot flaws.
    • Test design is no different.

30. Practical (in)Adequacy Criteria

  • Criteria that identify inadequacies in test suites:
    • If the specification describes different treatment in two cases, but the test suite does not check that the two cases are in fact treated differently, we may conclude that the test suite is inadequate to guard against faults in the program logic.
    • If no test in the test suite executes a particular program statement, the test suite is inadequate to guard against faults in that statement.
  • If a test suite fails to satisfy some criterion, the obligation that has not been satisfied may provide some useful information about improving the test suite.

31. Adequacy Criteria

  • Test obligation: A partial test case specification, requiring some property deemed important to thorough testing.
  • Adequacy criterion: A predicate that is true (satisfied) of a (program, test suite) pair. Requires both of these to be true:
    • All tests must be passed
    • All test obligations must be satisfied by the test suite.
  • If a test suite satisfies all of the adequacy criteria, is the test suite effective?

32. Where do adequacy criteria come from?

  • Functional (black box): from software specifications
    • Example: Make sure all the different situations detailed in the specification are tested.
  • Structural (white or glass box): from code
    • Example: Traverse each program loop one or more times.
  • Model-based: from model of system
    • Example: Exercise all transitions in communication protocol model
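A structural criterion like statement coverage can be sketched by hand with simple coverage bookkeeping (the max3 function and the flag are illustrative assumptions): a suite that only calls max3(3, 2, 1) never executes the statement that selects c, so the criterion flags the suite as inadequate.

```java
// Hand-rolled statement-coverage sketch: a flag records whether the
// "c is largest" statement was ever executed by the test suite.
class CoverageSketch {
    static boolean branchCCovered = false;

    static int max3(int a, int b, int c) {
        int max = a;
        if (b > max) max = b;
        if (c > max) { branchCCovered = true; max = c; }
        return max;
    }

    public static void main(String[] args) {
        max3(3, 2, 1);                       // inadequate suite: statement missed
        System.out.println(branchCCovered);  // prints "false"
        max3(1, 2, 3);                       // added test satisfies the obligation
        System.out.println(branchCCovered);  // prints "true"
    }
}
```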

33. Uses of Adequacy Criteria

  • Test selection approaches
    • Guidance in devising a thorough test suite
      • Example: A functional-based criterion may suggest test cases covering representative combinations of values
      • Example: A structural-based criterion may require each statement to be executed once
  • Revealing missing tests
    • Post hoc analysis: What might I have missed with this test suite?
  • Often in combination
    • First, design test suite from specifications using functional testing techniques.
    • Then, use structural criterion to highlight missed logic

34. Discussion: Prevention vs. Detection

  • One way of reducing the number of bugs in a program is to prevent them from being introduced in the first place. Should more effort go into prevention as opposed to detection?

35. Discussion: Manual vs. Automated

  • Should tests be automated or carried out manually?

