Software Testing - University of Southern California


  • Software Testing
    Celia Chen, Spring 2016

  • Agenda

    Software Development Today

    The Purpose of Testing

    Manual vs Automatic Testing

    White-box vs Black-box

    Acceptance Testing

    Regression Testing

    Performance Testing

  • This is how today's lecture is going to feel...

  • Software Development Today

    Decision Maker

    Programmer Tester

  • Scenario 1

    Decision Maker

    Programmer Tester

    OK, calm down. We'll slip the schedule. Try again.

    I'm done. It doesn't #$%& compile!

  • Scenario 2

    Decision Maker

    Programmer Tester

    Now remember, we're all in this together. Try again.

    I'm done. It doesn't install!

  • Scenario 3

    Decision Maker

    Programmer Tester

    Let's have a meeting to straighten out the spec.

    No, half of your tests are wrong!

    It does the wrong thing in half the tests.

    I'm done.

  • Scenario 4

    Decision Maker

    Programmer Tester

    Try again, but please hurry up!

    I'm done. It still fails some tests we agreed on.

  • The Ariane 5 Rocket Disaster The extensive reviews and tests carried out during the Ariane 5 Development Programme did not include adequate analysis and testing of the inertial reference system or of the complete flight control system, which could have detected the potential failure.

  • Why do we test? Two main reasons:

    - Find bugs (more precisely, find the important bugs)

    - Elucidate the specification

  • Version 1: Check that Ms. Brown's # of children is one more

    Version 2: Also check Mr. Brown's # of children

    Version 3: Check that no one else's child counts changed

    Test Case - Add a child to Mary Brown's record
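    As a sketch, the three test-case versions above might look like the following; the Database class and its API are invented here purely for illustration.

    ```python
    class Database:
        """Toy in-memory stand-in for the records system under test."""
        def __init__(self, counts):
            self.counts = dict(counts)  # parent name -> number of children

        def add_child(self, name):
            self.counts[name] = self.counts.get(name, 0) + 1

    def test_add_child_to_mary_brown():
        db = Database({"Ms. Brown": 2, "Mr. Brown": 2, "Ms. Smith": 1})
        before = dict(db.counts)
        db.add_child("Ms. Brown")

        # Version 1: Ms. Brown's count is one more
        assert db.counts["Ms. Brown"] == before["Ms. Brown"] + 1
        # Version 2: what about Mr. Brown? The spec must decide
        # (assumed here: his record is unchanged)
        assert db.counts["Mr. Brown"] == before["Mr. Brown"]
        # Version 3: no one else's count changed
        for name in before:
            if name != "Ms. Brown":
                assert db.counts[name] == before[name]

    test_add_child_to_mary_brown()
    ```

    Note how each version forces a decision about what the specification actually says: version 2 is a spec question, not just a test.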

  • Specifications: Good testers clarify the specification

    This is creative, hard work

    There is no hope that tools will automate this; this part will stay hard work

  • Development Phases and Testing

  • Testing Activity

  • Testing Activity

  • Types of Testing

  • Manual Testing: Test cases are lists of instructions (test scripts). Someone manually executes the script, doing each action step by step:

    Click on login; enter username and password; click OK

    ...and manually records the results

    Low-tech, simple to implement

  • Manual Testing: Manual testing is very widespread

    Maybe not dominant, but very, very common

    Why? Because some tests can't be automated (e.g., usability testing)

    ...and some tests shouldn't be automated (not worth the cost)

  • Automated Testing: Idea:

    Record a manual test; play it back on demand

    Why? Find bugs more quickly; conserve resources:

    No need to write tests; if the software changes, no need to maintain tests; no need for testers?!

    This doesn't work as well as expected: some tests can't/shouldn't be automated, it is hard to do, and it is impossible for the whole system

  • Fragility: Test recording is usually very fragile

    It breaks if the environment changes anything, e.g., the location or background color of a textbox

    More generally, automation tools cannot generalize: they literally record exactly what happened, so if anything changes, the test breaks

    A hidden strength of manual testing: because people are doing the tests, the ability to adapt tests to slightly modified situations is built-in


  • Breaking Tests: When code evolves, tests break

    E.g., change the name of a dialog box: any test that depends on the name of that box breaks

    Maintaining tests is a lot of work: broken tests must be fixed, and this is expensive. Cost is proportional to the number of tests, which implies that more tests is not necessarily better

  • How can we do better? Recorded tests are too low level

    E.g., every test contains the name of the dialog box

    Need to abstract tests: replace the dialog box string by a variable name X, where X is maintained in one place

    So when the dialog box name changes, only X needs to be updated and all the tests work again
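    A minimal sketch of this abstraction idea: UI identifiers live in one module-level constant, so a renamed dialog box requires updating exactly one line. The driver API below is invented for illustration, not a real automation tool.

    ```python
    LOGIN_DIALOG_TITLE = "Sign In"  # the "variable X": maintained in one place

    class FakeDriver:
        """Stand-in for a real UI automation driver (assumed API)."""
        def __init__(self, dialog_titles):
            self.dialog_titles = set(dialog_titles)

        def find_dialog(self, title):
            if title not in self.dialog_titles:
                raise LookupError(f"no dialog titled {title!r}")
            return title

    def run_login_test(driver):
        # Tests refer to the constant, never to the literal string.
        return driver.find_dialog(LOGIN_DIALOG_TITLE) == LOGIN_DIALOG_TITLE

    print(run_login_test(FakeDriver({"Sign In"})))  # True
    ```

    If the product renames the dialog, only `LOGIN_DIALOG_TITLE` changes and every test that uses it keeps working.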

  • Why is this test case not automatable?

  • How to decide what to automate?

    Are the test cases repeatable?
    Are the test case steps easy to follow?
    Are the expected results consistent?
    Are the expected results easily verified?
    Is it hard to develop?
    Does it save a lot of time over doing it manually?
    Are the test cases used frequently?
    ...

  • Two General Approaches to Testing

  • Black-Box Testing: Black-box criteria do not consider the implemented control structure; they focus on the domain of the input data

    In the presence of formal specifications, it can be automated; in general, it is a human-intensive activity

    Different black-box criteria:

    Category partition method; state-based techniques; combinatorial approach; catalog-based techniques; ...

  • White-Box Testing: Selection of the test suite is based on structural elements in the code. Assumption: executing the faulty element is a necessary condition for revealing a fault

    Example white-box testing criteria:

    Control flow (statement, branch, basis path, path); condition (simple, multiple); loop; dataflow (all-uses, all-du-paths)
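    A tiny illustration of the difference between two control-flow criteria named above (statement vs. branch coverage):

    ```python
    def abs_value(n):
        result = n
        if n < 0:
            result = -n
        return result

    # Statement coverage: the single input n = -3 executes every statement.
    # Branch coverage additionally requires the false branch of the 'if',
    # so a second input such as n = 3 is needed.
    assert abs_value(-3) == 3   # takes the true branch
    assert abs_value(3) == 3    # takes the false branch
    ```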

  • Black-box vs. White-box: Example 1. Specification: a function that inputs an integer param and returns half the value of param if param is even, and param otherwise.

    Function foo works correctly only for even integers. The fault may be missed by white-box testing; the fault would be easily revealed by black-box testing
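    One way the faulty function might look (an assumed implementation, since the slide does not show the code). The body has a single path, so white-box coverage is satisfied by any one input; a black-box tester who partitions the input domain into even and odd integers finds the bug immediately.

    ```python
    def half_if_even(param):
        return param // 2   # bug: halves odd inputs too; spec says return them

    assert half_if_even(4) == 2    # even partition: correct
    assert half_if_even(5) != 5    # odd partition: violates the spec
    ```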

  • Black-box vs. White-box: Example 2. Specification: a function that inputs an integer and prints it

    Function foo contains a typo. From the black-box perspective, integers < 1024 and integers > 1024 are equivalent, but they are treated differently in the code. The fault may be missed by black-box testing; the fault would be easily revealed by white-box testing
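    One way the typo might look (again an assumed implementation). Both black-box equivalence classes behave correctly, so only a white-box test that exercises the extra branch reveals the fault.

    ```python
    def echo_int(x):
        if x == 1024:   # the typo: a branch the spec never mentions
            x = -1
        return x

    assert echo_int(5) == 5          # partition < 1024: looks fine
    assert echo_int(5000) == 5000    # partition > 1024: looks fine
    assert echo_int(1024) != 1024    # branch coverage exposes the bug
    ```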

  • Acceptance Test

    Goals: To confirm that the system meets the agreed-upon requirements; to identify and resolve any conflicts; to determine the readiness of the system for live operations


    A [named user role] can [select/operate] [feature/function] so that [output] is [visible/complete/etc.] and leads to a Yes/No decision for user and developer
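    The template above can be instantiated as an executable yes/no check. The App object and its login behavior below are invented for illustration; a real acceptance test would drive the actual system.

    ```python
    class App:
        def __init__(self):
            self.current_page = "login"

        def login(self, user, password):
            if password == "secret":     # stand-in for real authentication
                self.current_page = "dashboard"

    def test_registered_user_can_log_in():
        # "A registered user can log in so that the dashboard is visible"
        app = App()
        app.login("alice", "secret")
        assert app.current_page == "dashboard"   # the yes/no decision

    test_registered_user_can_log_in()
    ```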

  • Acceptance Test vs. Functional Test

    Functional testing: This is a verification activity.

    Did we build a correctly working product? Does the software meet the business requirements?

    Acceptance testing: This is a validation activity.

    Did we build the right thing? Is this what the customer really needs?

  • Regression Test

    Problem addressed: After software is changed, what is the best way to test it?

    Regression Testing:

    What features to test? Selection of test cases Prioritization of test cases Augmentation of test suites for new features
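    A sketch of the "selection of test cases" step: rerun only the tests whose covered modules intersect the changed modules. The coverage map here is illustrative data, not the output of a real coverage tool.

    ```python
    coverage = {
        "test_login":  {"auth", "ui"},
        "test_report": {"reports"},
        "test_export": {"reports", "io"},
    }

    def select_tests(changed_modules):
        changed = set(changed_modules)
        return sorted(t for t, mods in coverage.items() if mods & changed)

    print(select_tests({"reports"}))   # ['test_export', 'test_report']
    ```

    Prioritization and suite augmentation build on the same idea: ordering the selected tests by likely fault-revealing power, and adding new tests for features the existing suite does not cover.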

  • Re-testing vs. Regression Testing

    Re-testing: testing the functionality or bug again to ensure the code is fixed. If it is not fixed, the defect needs to be re-opened; if fixed, the defect is closed.

    Regression testing: testing your software application when it undergoes a code change, to ensure that the new code has not affected other parts of the software.

  • Performance Test - Quality Assurance - Focus of PT:

    - Speed - Determines whether the application responds quickly

    - Scalability - Determines the maximum user load the software application can handle.

    - Stability - Determines if the application is stable under varying loads

    - Reliability - Determines if the application is reliable under varying situations

  • The software shall have no more than X bugs/1K LOC.

    How do we measure bugs at delivery time?

  • Debugging Process - based on a Monte Carlo technique for statistical analysis of random events.

    1. Before testing, a known number of bugs (seeded bugs) are secretly inserted.
    2. Estimate the number of bugs in the system.
    3. Remove all (both known and new) bugs.

    # of detected seeded bugs / # of seeded bugs = # of detected bugs / # of bugs in the system

    => # of bugs in the system = # of seeded bugs x # of detected bugs / # of detected seeded bugs

  • Example: secretly seed 10 bugs; an independent test team detects 120 bugs (6 of them seeded)

    # of bugs in the system = 10 x 120 / 6 = 200

    # of bugs in the system after removal = 200 - 120 - 4 = 76 (the 120 detected bugs are removed, and so are the 4 seeded bugs the team missed, since we know where they are)

    But: deadly bugs vs. insignificant ones; not all bugs are equally detectable. Suggestion [Musa87]: "No more than X bugs/1K LOC may be detected during testing"; "No more than X bugs/1K LOC may remain after delivery, as calculated by the Monte Carlo seeding technique"
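    The seeding arithmetic from the example, written out as code:

    ```python
    def estimated_total(seeded, detected, detected_seeded):
        # detected_seeded / seeded = detected / total
        #   =>  total = seeded * detected / detected_seeded
        return seeded * detected // detected_seeded

    def remaining_after_removal(seeded, detected, detected_seeded):
        # All detected bugs plus the still-hidden seeded bugs are removed.
        undetected_seeded = seeded - detected_seeded
        return estimated_total(seeded, detected, detected_seeded) - detected - undetected_seeded

    assert estimated_total(10, 120, 6) == 200        # slide's estimate
    assert remaining_after_removal(10, 120, 6) == 76 # slide's remainder
    ```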