Software Testing. Recap: Software testing – Why do we do testing? – When is it done? – Who does it? – Software testing process / phases in software testing



  • Slide 1
  • Software Testing
  • Slide 2
  • Recap Software testing: Why do we do testing? When is it done? Who does it? Software testing process / phases in software testing. Levels of testing
  • Slide 3
  • Contents Testing methodologies Debugging process
  • Slide 4
  • TESTING METHODOLOGIES AND TYPES
  • Slide 5
  • Testing Methodologies / Types Black box testing White box testing Incremental/Thread testing
  • Slide 6
  • Black box testing: No knowledge of internal design or code is required; tests are based on requirements and functionality, not on internal design or code. Covers all combined parts of a system. Tests are data driven. It uncovers: incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and initialization and termination errors. (A small black-box sketch follows.)
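    A minimal black-box sketch in Python, assuming a hypothetical requirement ("discount(total) returns 10% off orders of 100 or more, otherwise the total unchanged"); the function name and threshold are invented for illustration, and the tests rely only on the stated requirement, never on the implementation:

        # Black-box tests: written purely from the requirement above.
        import unittest

        def discount(total):
            # Implementation under test; its internals are irrelevant to the tester.
            return total * 0.9 if total >= 100 else total

        class DiscountBlackBoxTests(unittest.TestCase):
            def test_small_order_unchanged(self):
                self.assertEqual(discount(50), 50)      # below threshold: no discount
            def test_boundary_order_discounted(self):
                self.assertEqual(discount(100), 90)     # at threshold: 10% off
            def test_large_order_discounted(self):
                self.assertEqual(discount(200), 180)    # above threshold: 10% off

        if __name__ == "__main__":
            unittest.main()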
  • Slide 7
  • Types of black box testing Functional testing System testing End-to-end testing Sanity testing Regression testing Acceptance testing Load testing Stress testing Install/uninstall testing Recovery testing Compatibility testing Exploratory testing Comparison testing Alpha testing Beta testing Mutation testing
  • Slide 8
  • Functional testing: Black box testing geared to the functional requirements of an application; done by testers. System testing: Black box testing based on the overall requirements specification; covers all combined parts of the system. End-to-end testing: Similar to system testing; involves testing a complete application environment in a situation that mimics real-world use. Mutation testing: Determines whether a set of test data or test cases is useful by deliberately introducing various bugs, then re-testing with the original test data/cases to see whether the bugs are detected. (A small sketch of the mutation idea follows.)
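    A hand-rolled mutation-testing sketch in Python; the function and the single "mutant" are invented for illustration, and real projects would use a mutation-testing tool that generates many mutants automatically:

        def is_even(n):
            return n % 2 == 0          # original (correct) implementation

        def is_even_mutant(n):
            return n % 2 == 1          # deliberately introduced bug (the "mutant")

        test_cases = [(2, True), (3, False), (0, True)]   # the existing test data

        def run_tests(fn):
            return all(fn(x) == expected for x, expected in test_cases)

        assert run_tests(is_even)             # the tests pass on the original code
        assert not run_tests(is_even_mutant)  # a useful test set "kills" the mutant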
  • Slide 9
  • Sanity testing: Initial effort to determine whether a new software version is performing well enough to accept it for a major testing effort. Regression testing: Re-testing after fixes or modifications of the software or its environment. Acceptance testing: Final testing based on the specifications of the end user or customer. Load testing: Testing an application under heavy loads, e.g. testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
  • Slide 10
  • Stress testing: Testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex database queries, etc. The term is often used interchangeably with load and performance testing. Performance testing: Testing how well an application complies with its performance requirements. Install/uninstall testing: Testing of the full, partial or upgrade install/uninstall process. Recovery testing: Testing how well a system recovers from crashes, HW failures or other problems. Compatibility testing: Testing how well software performs in a particular HW/SW/OS/NW environment.
  • Slide 11
  • Exploratory / ad-hoc testing: Informal SW testing that is not based on formal test plans or test cases; testers learn the SW in its totality as they test it. Comparison testing: Comparing SW strengths and weaknesses against competing products. Alpha testing: Testing done when development is nearing completion; minor design changes may still be made as a result of such testing. Beta testing: Testing done when development and testing are essentially complete and final bugs and problems need to be found before release.
  • Slide 12
  • White box testing / structural testing: Based on knowledge of the internal logic of an application's code and on coverage of code statements, branches, paths and conditions. Tests are logic driven. It ensures that all independent paths within a module are exercised at least once, all logical decisions are exercised on their true and false sides, all loops are executed at their boundaries and within their operational bounds, and internal data structures are exercised to ensure their validity.
  • Slide 13
  • Loop testing (white box testing): This white box technique focuses on the validity of loop constructs. Four different classes of loops can be defined: simple loops, nested loops, concatenated loops and unstructured loops.
  • Slide 14
  • Guidelines for Loop Testing – Guidelines for Simple Loops: Try to design a test in which the loop body isn't executed at all. Try to design a test in which the loop body is executed exactly once. Try to design a test in which the loop body is executed exactly twice. Design a test in which the loop body is executed some "typical" number of times. If there is an upper bound, n, on the number of times the loop body can be executed, then the following cases should also be applied: design a test in which the loop body is executed exactly n-1 times; design a test in which the loop body is executed exactly n times; try to design a test causing the loop body to be executed exactly n+1 times. (A small sketch follows.)
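    A simple-loop sketch in Python; the function sum_first and the bound MAX_ITEMS are invented for illustration, with MAX_ITEMS playing the role of the upper bound n:

        MAX_ITEMS = 5                     # assumed upper bound n on loop iterations

        def sum_first(values):
            total = 0
            for v in values[:MAX_ITEMS]:  # body runs min(len(values), MAX_ITEMS) times
                total += v
            return total

        # One test per guideline: 0, 1, 2, a "typical" count, n-1, n and n+1 iterations.
        assert sum_first([]) == 0             # body not executed at all
        assert sum_first([7]) == 7            # executed exactly once
        assert sum_first([1, 2]) == 3         # executed exactly twice
        assert sum_first([1, 2, 3]) == 6      # a typical number of times
        assert sum_first([1] * 4) == 4        # n-1 iterations
        assert sum_first([1] * 5) == 5        # n iterations
        assert sum_first([1] * 6) == 5        # n+1 items supplied; the loop is still capped at n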
  • Slide 15
  • Guidelines for Loop Testing – Guidelines for Nested Loops: Ensure that each loop is tested at its boundaries while using a number of tests that is only linear in the number of loops. Conduct the "simple loop tests" for the innermost loop (which is a simple loop), while keeping the number of iterations of the outer loops at their minimal nonzero values. Work outward, conducting tests (as described above) for each loop, while holding the number of iterations of outer loops at the minimal nonzero values possible (that is, the minimal values that can be used when the inner loop body is to be executed the desired number of times), and holding the number of iterations of inner loops at "typical" values. (A small sketch follows.)
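    A nested-loop sketch in Python; the function and its arguments are invented for illustration. The inner loop gets the simple-loop cases while the outer loop is held at its minimal nonzero value, then the outer loop is varied while the inner loop holds a typical value:

        def grid_sum(rows, cols):
            total = 0
            for r in range(rows):        # outer loop
                for c in range(cols):    # inner loop
                    total += 1
            return total

        # Outer loop pinned at one iteration; vary the inner loop.
        assert grid_sum(1, 0) == 0       # inner body not executed
        assert grid_sum(1, 1) == 1       # inner body executed once
        assert grid_sum(1, 2) == 2       # inner body executed twice
        assert grid_sum(1, 3) == 3       # inner body executed a typical number of times

        # Work outward: vary the outer loop while the inner loop holds a typical value.
        assert grid_sum(0, 3) == 0
        assert grid_sum(2, 3) == 6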
  • Slide 16
  • Guidelines for Loop Testing – Guidelines for Concatenated Loops: If the loops are "independent," so that the number of iterations used for one loop doesn't depend on the number of iterations used for any other(s), then it is sufficient to apply the guidelines for simple loops to each of the loops in the sequence. On the other hand, if the number of iterations used for one loop does depend on the number of iterations used for another, then the following guidelines should be used instead: conduct "simple loop tests" for the bottom (or "last") loop in the sequence, holding the number of iterations of the higher (or "previous") loops at minimal values; then work up toward the top loop, considering each loop in turn and applying "simple loop tests" to it, keeping the number of iterations of upper loops at minimal values and the number of iterations of lower loops at typical values. (A small sketch follows.)
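    A concatenated-loop sketch in Python with a dependency between the loops; the function is invented for illustration. Because the second loop's iteration count depends on what the first loop produced, the simple-loop cases are applied to the last loop while the first is held at a minimal value:

        def positives_then_square(values):
            positives = []
            for v in values:             # first loop: filter
                if v > 0:
                    positives.append(v)
            squares = []
            for p in positives:          # second loop: count depends on the first loop
                squares.append(p * p)
            return squares

        assert positives_then_square([]) == []              # second loop body not executed
        assert positives_then_square([2]) == [4]            # executed once
        assert positives_then_square([2, -1, 3]) == [4, 9]  # a typical number of times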
  • Slide 17
  • Other white box testing techniques: Statement coverage – execute all statements at least once. Decision coverage – execute each decision direction (true and false) at least once. Condition coverage – each condition within a decision takes all possible outcomes at least once. Decision/condition coverage – combines the two: each decision and each condition within it takes all possible outcomes at least once. Multiple condition coverage – execute all possible combinations of condition outcomes in each decision, and invoke each point of entry at least once.
  • Slide 18
  • Statement Coverage Examples
        A + B
        If (A = 3) Then
            B = X + Y
        End-If

        While (A > 0) Do
            Read (X)
            A = A - 1
        End-While-Do
  • Slide 19
  • Decision Coverage - Example
        If (A < 20) Then
            B = X + Y
    Condition Coverage - Example
        A = X
        If (A > 3) or (A < B) Then
            B = X + Y
        End-If-Then

        While (A > 0) and (Not EOF) Do
            Read (X)
            A = A - 1
        End-While-Do
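    A Python rendering of the decision/condition example above; the variable names and test values are invented for illustration:

        def update(a, b, x, y):
            if a > 3 or a < b:      # one decision made of two conditions
                b = x + y
            return b

        # Statement coverage: one input that makes the decision true reaches every statement.
        assert update(5, 0, 1, 2) == 3

        # Decision coverage: the decision evaluates both true and false.
        assert update(5, 0, 1, 2) == 3      # decision true
        assert update(0, 0, 1, 2) == 0      # decision false

        # Condition coverage: each condition (a > 3, a < b) takes both outcomes.
        assert update(5, 0, 1, 2) == 3      # a > 3 true,  a < b false
        assert update(0, 1, 1, 2) == 3      # a > 3 false, a < b true
        assert update(0, 0, 1, 2) == 0      # a > 3 false, a < b false

        # Multiple condition coverage would also need the (true, true) combination,
        # e.g. update(5, 9, 1, 2).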
  • Slide 20
  • Incremental / Thread Testing. Incremental testing: A disciplined method of testing the interfaces between unit-tested programs as well as between system components; involves adding unit-tested program modules or components one by one and testing each resulting combination. Thread testing: Testers focus on testing individual logical execution paths in the context of the entire system; each thread has a status associated with it for a particular build.
  • Slide 21
  • Types of incremental testing. Top-down: Testing starts from the top of the module hierarchy and works down to the bottom; modules are added in descending hierarchical order. Bottom-up: Testing starts from the bottom of the hierarchy and works up to the top; modules are added in ascending hierarchical order. (A small top-down sketch follows.)
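    A top-down incremental sketch in Python; the module names are invented for illustration. The lower-level module is not yet integrated, so a stub stands in for it (bottom-up testing would instead exercise a real low-level module through a driver):

        def tax_stub(amount):
            return 0.0                       # stub: fixed, predictable reply for the missing module

        def checkout_total(items, tax_fn):   # top-level module under test
            subtotal = sum(items)
            return subtotal + tax_fn(subtotal)

        # Test the top-level module against the stub before the real tax module exists.
        assert checkout_total([10, 20], tax_stub) == 30.0

        # When the real module is integrated, the same test is repeated with it.
        def real_tax(amount):
            return round(amount * 0.1, 2)

        assert checkout_total([10, 20], real_tax) == 33.0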
  • Slide 22
  • Testing levels and techniques applied

    Testing Level        | White Box | Black Box | Incremental/Thread
    Unit Testing         |     X     |           |
    Integration Testing  |     X     |     X     |         X
    System Testing       |           |     X     |
    Acceptance Testing   |           |     X     |
  • Slide 23
  • When is Testing Complete? There is no definitive answer to this question. Every time a user executes the software, the program is being tested. Sadly, testing usually stops when a project is running out of time, money, or both. One approach is to divide the test results into various severity levels, and then consider testing to be complete when certain levels of errors no longer occur or have been repaired or eliminated.
  • Slide 24
  • Ensuring a Successful Software Test Strategy: Specify product requirements in a quantifiable manner long before testing commences. State testing objectives explicitly in measurable terms. Understand the users of the software (through use cases) and develop a profile for each user category. Develop a testing plan that emphasizes rapid-cycle testing to get quick feedback, to control quality levels and to adjust the test strategy. Build robust software that is designed to test itself and can diagnose certain kinds of errors. Use effective formal technical reviews as a filter prior to testing to reduce the amount of testing required. Conduct formal technical reviews to assess the test strategy and the test cases themselves. Develop a continuous-improvement approach for the testing process through the gathering of metrics.
  • Slide 25
  • Summary: Testing methods / types – black box testing, white box testing, incremental / thread testing. Testing levels vs. testing methods. Testing strategy.