
TestIstanbul Conferences 2013

TestIstanbul 2013 Conference
“Future Of Testing: New Techniques and Methodologies”

A Systematic Approach for Increasing the ROI of Software Test Automation

Vahid Garousi, Associate Professor
Senior Software Consultant

TestIstanbul Conference 2013

Background

• Many test teams adopt software test automation
• But unfortunately, the automation experience often ends with negative outcomes
• e.g., it becomes costly to maintain automated test suites in parallel with changes in the production code
• Teams are excited, then disappointed…!
• The Return on Investment (ROI) from test automation suffers
• Reason: following improper test automation strategies
• Result: in such cases, project managers decide to abandon the automated test suites in their entirety and never use test automation again.

TestIstanbul Conference 2013

Objective

• Goal: to systematically answer the following questions:
– When to automate (test cases)?
– What to automate (test cases, methods, classes, etc.)?
– So that the ROI of test automation is increased

TestIstanbul Conference 2013

Outline of the Presentation

• A brief overview of automation across the software test process
• Successful vs. unsuccessful test automation
• When does test automation have the highest ROI?
• Choosing when and what to automate
• Experience from one of our projects in measuring ROI and successful test automation
• End-to-end activities of manual/automated testing and the costs involved
• Q/A


Overview of automation across the software test process

[Diagram: the software test process. Test-case Design (criteria-based/systematic, or human knowledge-based/exploratory) produces test suites (sets of test cases); Test Scripting produces scripted test suites, either manual or automated (e.g., JUnit); Test Execution “exercises” (tests) the System Under Test (SUT); Test Evaluation yields pass/fail verdicts; Test-Result Reporting produces test results and bug (defect) reports. Legend: activities vs. data/entities; each activity is marked A/M, i.e., it can be automated, manual, or a mix.]

• Test automation does NOT mean 100% automation in all testing tasks!

• Let’s review the different tasks in SW testing:


Automation in Test-case Design

• Test-case Design (criteria-based):
• Design test input values to satisfy coverage criteria (e.g., line coverage, or requirements coverage/traceability)
• Usually done manually, but can be automated (combinatorial test tools)

• Test-case Design (human knowledge-based):
• Design test input values based on domain knowledge of the program and human knowledge of testing
• Also called exploratory testing
• Almost always done manually; we can hardly develop a program to capture domain knowledge (example: power engineers)
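As a sketch of criteria-based design, combinatorial generation over equivalence classes can be automated; this Python example is illustrative only (the inputs and class values are invented, not from the project):

```python
# Sketch of criteria-based test design: generate one test case per
# combination of equivalence classes. Names and values are illustrative.
from itertools import product

# Hypothetical equivalence classes for three inputs of a function under test.
equivalence_classes = {
    "temperature": [-10, 25, 120],   # below range, nominal, above range
    "pressure":    [0, 50, 999],     # min, nominal, max
    "mode":        ["auto", "manual"],
}

def design_test_cases(classes):
    """Return one test case (a dict of input values) per combination."""
    names = list(classes)
    return [dict(zip(names, combo)) for combo in product(*classes.values())]

cases = design_test_cases(equivalence_classes)
print(len(cases))  # 3 * 3 * 2 = 18 combinations
```

Note how quickly all-combinations coverage grows with the number of inputs; this is exactly the blow-up discussed later in the deck.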



Automation in Test Scripting

• Test Scripting:
• Writing the test cases, for either manual or automated test execution
• Can be done manually or automated
• Do you remember (or still have!) thick documents of manual test scripts (hard or soft copy)?
• Can we have tools to auto-generate automated test scripts? (Yes, there are some!)



Automation in Test Execution

• Test Execution:
• The common interpretation of “automated testing”!
• Run tests on the software and record the results
• Can be done manually or automated



Automation in Test Evaluation and Result Reporting

• Test Evaluation (oracle):
• Evaluate the results of testing (pass/fail), a.k.a. the test verdict, and report results to developers
• Again, can be done manually or automated

• Test-Result Reporting:
• Again, can be done manually or automated
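A minimal sketch of an automated oracle plus result reporting (Python; the test names and tolerance are illustrative assumptions):

```python
# Sketch of an automated test oracle: compare actual outputs against expected
# values, produce a pass/fail verdict per test, and summarize for reporting.

def oracle(actual, expected, tolerance=0.0):
    """Return the verdict 'pass' or 'fail' for one test execution."""
    if isinstance(expected, float):
        return "pass" if abs(actual - expected) <= tolerance else "fail"
    return "pass" if actual == expected else "fail"

def report(results):
    """Summarize verdicts into a defect-report-style summary."""
    fails = [name for name, verdict in results.items() if verdict == "fail"]
    return {"total": len(results), "failed": fails}

verdicts = {
    "tc1": oracle(4, 2 + 2),
    "tc2": oracle(0.30000000000000004, 0.3, tolerance=1e-9),  # float comparison
    "tc3": oracle("on", "off"),
}
print(report(verdicts))  # {'total': 3, 'failed': ['tc3']}
```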



The big picture!

[The full test-process diagram again, with all activities and artifacts in one view.]

• There are often inter-dependencies among these decisions: each choice leads to the next.


Successful Test Automation w.r.t. ROI

• Classical chart on ROI: an up-front cost of automation (development of the automated test suite, etc.) is followed by a payoff point (the “sweet spot”!), after which automation is cheaper than manual testing.

• Only if the decision to automate (and how much of it) has been made properly will we see this cost saving!
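The payoff point in the chart can be sketched as a break-even calculation, assuming a fixed up-front cost plus constant per-run costs (all figures illustrative):

```python
# Sketch of the payoff-point ("sweet spot") computation behind the ROI chart.
# All cost figures are illustrative, in person-hours.

def payoff_point(upfront, cost_per_auto_run, cost_per_manual_run):
    """Smallest number of test runs after which automation is cheaper than
    manual testing, or None if the per-run saving is not positive."""
    saving_per_run = cost_per_manual_run - cost_per_auto_run
    if saving_per_run <= 0:
        return None  # automation never pays off
    # upfront + n * auto < n * manual  =>  n > upfront / saving_per_run
    return int(upfront // saving_per_run) + 1

print(payoff_point(upfront=120, cost_per_auto_run=1, cost_per_manual_run=9))  # 16
```

At run 16 the totals are 120 + 16 = 136 hours automated vs. 16 × 9 = 144 hours manual, so the automated curve has crossed below the manual one.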


The “Ugly Side” of Test Automation

• Rushing into automation just for the sake of it…

• Up-front cost of automation (development of the automated test suite, etc.), but no payoff point??

• The cost of maintaining the test suite keeps growing, since an IMPROPER test automation strategy was followed

• Project manager: “Let’s throw out the automated test suites and never use test automation again!”


ROI Experiences in Test Automation: Comparison

• “The Good, the Bad and the Ugly”


When does test automation have the highest ROI? (i.e., when is it “worth it” in terms of business value)

• Carefully select which of the following activities should be automated (and to what degree; it is not a Boolean choice but a fuzzy one!), for which parts of the system, and which test activities should stay manual, so that the sum of all testing costs over the entire software life-cycle is as low as possible

• i.e., choosing the “numbers” (knobs) on the following diagram

• We are not advocating 100% pure end-to-end automation!

• One good decision for a hypothetical team:

[The test-process diagram again, annotated with example automation percentages (the “knobs”) for its activities, e.g., 20%/80%, 40%/60%, and 80%/20% splits between automated and manual.]


Best Mix of Manual/Automated Testing

• We need both, but how much of each? For each type of testing (acceptance, performance, etc.) and for each system-level use case, class, component, sub-system, etc. (many levels of granularity)

• A kind of optimization problem…

• Example test activity: test scripting, i.e., the development of manual/automated test suites

• Let’s start with some heuristics:

• Volatility factor: if we expect a lot of change for a part of the system (class, use case, etc.), and thus the need for re-testing, we should think of automated testing for it

• Cost factor: if manual testing of a part of the system is COSTLY, then automate it.


Choosing when and what to automate

• Reminder: carefully select
– which of the following activities should be automated (and to what degree; it is not Boolean!),
– for which parts of the system,
– and which activities should stay manual,
– so that the testing activities are the least expensive possible in the “long run”

• Following these heuristics, a 2×2 decision matrix:

                                 Expected volatility: Low   Expected volatility: High
  Cost of manual testing: Low    keep manual                candidate for automation
  Cost of manual testing: High   candidate for automation   automate
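Assuming the cell entries follow the volatility and cost heuristics, the four combinations can be sketched as a simple lookup (a Python sketch; the labels are illustrative, and real thresholds would be team-specific):

```python
# Sketch of the volatility/cost heuristic as a 2x2 lookup table.
# Cell values are an assumption derived from the two heuristics:
# high volatility -> automate; high manual cost -> automate.

def automation_decision(volatility, manual_cost):
    """volatility and manual_cost are each 'low' or 'high'."""
    decisions = {
        ("low",  "low"):  "keep manual",
        ("low",  "high"): "candidate for automation",
        ("high", "low"):  "candidate for automation",
        ("high", "high"): "automate",
    }
    return decisions[(volatility, manual_cost)]

print(automation_decision("high", "high"))  # automate
```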


Our experience

• Our experience in measuring ROI and successful test automation

• A selected project: Automated Unit and Functional Testing of a SCADA Software (2009-2012)
• SCADA: Supervisory Control and Data Acquisition system


Automated unit and functional testing of an in-house developed SCADA software

• Software under test: A commercial large-scale Supervisory Control and Data Acquisition (SCADA) software system

• Named Rocket, developed by a company based in Calgary, Canada

• Has been developed using Microsoft Visual Studio C#

• Has now been deployed in several locations across Canada and the US


Black-box Unit Testing (BBUT): Challenges

• If we apply equivalence classing, we get 19,683 test cases for this function block alone. Bad news ;(

• Challenge 1: coding of the test cases (in NUnit): too much effort
• Challenge 2: coupling of test cases to test input data
• Challenge 3: manual generation of expected outputs (the test oracle)
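The 19,683 figure matches 3^9, i.e., 3 equivalence classes for each of 9 inputs; assuming that breakdown (the slide does not spell it out), the blow-up can be checked directly:

```python
# The combinatorial blow-up of equivalence classing: with c classes per input
# and n inputs, every-combination coverage needs c**n test cases.
# "3 classes x 9 inputs" is an assumption consistent with the 19,683 figure.

def total_combinations(classes_per_input, num_inputs):
    return classes_per_input ** num_inputs

print(total_combinations(3, 9))  # 19683
```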


Black-box Unit Testing (BBUT): Challenges

• One possible solution → automated generation of NUnit test code

• There are some tools out there: Microsoft Pex, JML-JUnit, JUB (JUnit test case Builder), TestGen4J, JCrasher, NModel

• After evaluating them for our purpose, unfortunately none was suitable (details in our article)

• Decision: implement our own test tool!
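As a purely hypothetical mini-sketch of the idea (not the actual AutoBBUT implementation), a generator can emit NUnit (C#) test-method text from tabulated inputs and expected outputs; the method name, class name, and SUT call below are all invented:

```python
# Hypothetical sketch of generating NUnit (C#) test-method source text from
# a test case's inputs and expected output. All identifiers are illustrative;
# this is NOT the authors' AutoBBUT tool.

def emit_nunit_test(name, args, expected):
    arglist = ", ".join(str(a) for a in args)
    return (
        f"[Test]\n"
        f"public void {name}()\n"
        f"{{\n"
        f"    Assert.AreEqual({expected}, FunctionBlock.Evaluate({arglist}));\n"
        f"}}\n"
    )

code = emit_nunit_test("Scale_NominalInput", (10, 0, 100), 10)
print(code)
```

Generating thousands of such methods mechanically is what removes the "too much effort" of Challenge 1.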


AutoBBUT - Example Usage

Reminder: the test code is automatically generated, saving many hours


Effectiveness in Detecting Defects

• Most of the defects found had obviously not been caught by the manual testing conducted by various developers and users during the development process.

• They were mostly on boundary-value conditions, overflow cases, and invalid inputs, denoting the importance of robustness testing in such applications.


Return On Investment (ROI) of Test Automation

• Cost and benefit drivers in the project:

Cost Drivers:
  CD 1: Development of the AutoBBUT tool
  CD 2: Maintenance of the automated unit test suite
  CD 3: Manual coding of some of the expected outputs (test oracles) in test code

Benefit Drivers:
  BD 1: Time saving and test repeatability: no need any more to conduct manual regression testing
  BD 2: Time saving: no need to manually develop the test code and some of the test oracles (expected outputs)
  BD 3: Detection of faults undetected in manual testing iterations


In-depth Analysis of the ROI of Test Automation

• Careful measurements were conducted
• Summary: we could get a time saving of:

ROI = -120 hours (AutoBBUT’s development time)
      - 3 hours (test-code inspection and completion)
      + 87 hours (initial development of the manual test suite, if it were to be done)
      + 87 × 6 hours (test-code maintenance, since the system evolved 6 times)
ROI = 486 hours ≈ 60 man-days

• More details in our recent article in the IEEE International Conference on Software Testing, Verification and Validation (ICST), Industry Track, April 2012
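The reported saving can be re-derived from the listed terms (a quick check; the 8-hour working day used to convert to man-days is an assumption):

```python
# Re-deriving the reported ROI from the listed cost/benefit terms (in hours).
tool_development   = -120     # CD 1: AutoBBUT development time
code_inspection    = -3       # test-code inspection and completion
manual_suite_saved = 87       # initial manual-suite development avoided
maintenance_saved  = 87 * 6   # test-code maintenance over 6 system evolutions

roi_hours = tool_development + code_inspection + manual_suite_saved + maintenance_saved
print(roi_hours)       # 486
print(roi_hours // 8)  # 60 man-days (the slide's figure), assuming 8-hour days
```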


End-to-end Activities of Manual/Automated Testing and the Costs Involved

[Diagram: end-to-end activities and artifacts of automated vs. manual testing. Automated track: a Test Developer performs Test-case Design, Test Scripting (producing test scripts and test suites), and Test Execution (producing test results); changes to the SUT artifact trigger a Need for Maintenance and co-maintenance of the tests, a.k.a. test repair; test-code refactoring, a.k.a. perfective maintenance, improves maintainability, with a tradeoff in cost; a failure triggers fault localization. Manual track: a Manual Tester uses self knowledge for exploratory testing, test-case design, and test scripting (manual test scripts), then Manual Test Execution. Legend: cost-incurring activities (effort) vs. activities/data providing benefit; fault-detection effectiveness is the open question on both tracks.]


• To increase the ROI of test automation, we need to systematically decrease the cost of each activity, e.g.:

• Benefiting from “test patterns” in the development of test code, to increase its maintainability


Outline of the Presentation

• A brief overview of automation across the software test process
• Successful vs. unsuccessful test automation
• When does test automation have the highest ROI?
• Choosing when and what to automate
• Experience from one of our projects in measuring ROI and successful test automation
• End-to-end activities of manual/automated testing and the costs involved
• Q/A