
TB Full Day Tutorial

10/14/2014 8:30:00 AM

"Successful Test Automation: A

Manager’s View"

Presented by:

Mark Fewster

Grove Consultants

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073

888-268-8770 ∙ 904-278-0524 ∙ [email protected] ∙ www.sqe.com

Mark Fewster

Grove Software Testing Ltd. Mark Fewster has more than thirty years of experience in software testing, ranging from test management to test techniques and test automation. For the past two decades, Mark has provided consultancy and training in software testing, published papers, and co-authored Software Test Automation and Experiences of Test Automation with Dorothy Graham. A popular speaker at conferences worldwide, Mark has won the Mercury BTO Innovation in Quality Award. He is currently helping ISTQB define the expert-level certification for test automation.

Written by Grove Software Testing Ltd. www.grove.co.uk Version 3_1 © Grove Software Testing, 2014

StarWest 2014

Managing Successful Test Automation A one-day tutorial

Managing Successful Test Automation

Contents

Session 0: Introduction to the Tutorial
Objectives; what we cover (and don’t cover) today

Session 1: Planning and Managing Test Automation
Test automation objectives (and exercise); responsibilities; pilot project; measures for automation; Return on Investment (ROI) (and exercise)

Session 2: Testware Architecture
Importance of a testware architecture; what needs to be organised

Session 3: Pre- and Post-Processing
Automating more than tests; test status

Session 4: Scripting Techniques
Objectives of scripting techniques; different types of scripts; domain-specific test language

Session 5: Automated Comparison
Automated test verification; test sensitivity; comparison example

Session 6: Final Advice, Q&A and Direction
Strategy exercise; final advice; questions and answers

Abstract

Many organisations have invested a lot of time and effort in test automation but have not achieved the significant returns they expected. Some blame the tool they use, while others conclude that test automation doesn’t work well for their situation. The truth is often very different: these organisations are typically doing many of the right things, but they are not addressing the key issues that are vital to long-term success with test automation. Mark Fewster describes the most important issues you must address, and helps you understand and choose the best approaches for your organisation, no matter which automation tools you use. Management issues, including responsibilities, automation objectives and return on investment, are covered along with technical issues such as testware architecture, pre- and post-processing, and automated comparison techniques. The target audience for this tutorial is people involved with managing test automation who need to understand the key issues in making test automation successful. Technical issues are covered at a high level of understanding; there are no tool demos!

Biography

Mark has over 30 years of industrial experience in software testing, ranging from test management to test techniques and test automation. In the last two decades Mark has provided consultancy and training in software testing, published papers, and co-authored two books with Dorothy Graham, "Software Test Automation" and "Experiences of Test Automation". He is a popular speaker at national and international conferences and seminars, and has won the Mercury BTO Innovation in Quality Award. Mark has served on the committee of the British Computer Society’s Specialist Interest Group in Software Testing (BCS SIGiST) and is currently helping ISTQB define the expert-level certification for test automation.


Managing Successful Test Automation

Prepared by Mark Fewster, Grove Software Testing Ltd. ([email protected]) and Dorothy Graham ([email protected])

© Mark Fewster and Dorothy Graham 2014

0.1

Objectives of this tutorial

● help you achieve better success in automation
  ■ independent of any particular tool

● mainly management and some technical issues
  ■ objectives for automation
  ■ showing Return on Investment (ROI)
  ■ importance of testware architecture
  ■ practical tips for a few technical issues
  ■ what works in practice (case studies)

● help you plan an effective automation strategy

0.2


Tutorial contents
1) planning & managing test automation
2) testware architecture
3) pre- and post-processing
4) scripting techniques
5) automated comparison
6) final advice, Q&A and direction

0.3

Shameless commercial plug

Part 1: How to do automation - still relevant today, though we plan to update it at some point!

Latest book (2012)

testautomationpatterns.org

0.4


What is today about? (and not about)

● test execution automation (not other tools)

● we will NOT cover:
  ■ demos of tools (time, which one, expo)
  ■ comparative tool info (expo, web)
  ■ selecting a tool*

● at the end of the day:
  ■ understand technical and non-technical issues
  ■ have your own automation objectives
  ■ plan your own automation strategy

* Mark will email you Chapter 10 of the STA book on request – [email protected]


Planning & Managing Test Automation

Managing Successful Test Automation

1 Managing, 2 Architecture, 3 Pre- and Post, 4 Scripting, 5 Comparison, 6 Advice

1.1

Managing Successful Test Automation

Contents (Session 1: Managing)

Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)

1.2


An automation effort

● is a project (getting started or major changes)
  ■ with goals, responsibilities, and monitoring
  ■ but not just a project – ongoing effort is needed

● not just one effort – continuing
  ■ when acquiring a tool – pilot project
  ■ when anticipated benefits have not materialized
  ■ different projects at different times
    ► with different objectives

● objectives are important for automation efforts
  ■ where are we going? are we getting there?

1.3

Efficiency and effectiveness

[Quadrant chart: efficiency (manual testing → automated, slow → fast) against effectiveness (low → high). Poor, slow testing is the worst; poor, fast testing is not good but common; good, slow testing is better; good, fast testing gives the greatest benefit.]

1.4


Common test automation objectives

● choose your most important objective:
  1. faster testing (reduce elapsed time, shorten schedule, time to market)
  2. run more tests (more testing, more often; increase test coverage)
  3. reduce testing costs (cheaper testing, less effort, fewer testers)
  4. automate x% of tests (automate 100% of testing)
  5. find more bugs (better testing, improve software quality)
  6. other: _______________________________

Exercise

1.5

Faster testing?

[Chart: effort per activity – edit tests (maintenance), set-up, execute, analyse failures, clear-up – compared for manual testing, the same tests automated, and more mature automation.]

1.6


Run more tests?

● which is better:
  ■ 100 one-minute tests or one 60-minute test?

● not the only aspect:
  ■ failure analysis may be much longer for the 60-minute test!

[Chart: 100 one-minute tests: 25% setup, 25% common, 25% unique, 25% teardown; unique testing = 15 sec × 100 = 25 mins. One 60-minute test: 25% setup, 50% unique, 25% teardown; unique testing = 50% = 30 mins +.]

1.7

Run more tests?

● 3 sets of tests
  ■ A: 100 tests, easy to automate
  ■ B: 60 tests, moderately difficult to automate
  ■ C: 30 tests, hard to automate

● what if the next release leaves A unchanged but has major changes to B and C?

"Good automation is not found in the number of tests run but in the value of the tests that are run."

1.8


Reduce testing costs?

[Chart: cost over time (months / years). Manual testing effort without automation stays level; with automation there is an initial automation effort, after which the total effort falls below the manual-only baseline.]

Yes, but not to zero!

1.9

Automate x% of the manual tests?

[Diagram: of the manual tests, some are automated ("x% of manual"), some are not worth automating, and some are not automated yet. Test automation also covers tests (and verification) not possible to do manually; exploratory testing remains outside the automated tests.]


What is automated?

[Diagram: a spectrum from regression tests to exploratory testing; the likelihood of finding bugs increases towards exploratory testing, yet regression tests are the ones most often automated.]

1.11

Find more bugs?

● tests find bugs, not automation
● automation is a mechanism for running tests
● the bug-finding ability of a test is not affected by the manner in which it is executed
● this can be a dangerous objective
  ■ especially for regression automation!

Where bugs were found (one team’s data): automated tests 9.3%, manual scripted 24.0%, exploratory 58.2%, fix verification 8.4%.

Experiences of Test Automation, Ch 27, p 503, Ed Allen & Brian Newman

1.12


When is “find more bugs” a good objective for automation?

● the objective is "fewer regression bugs missed"
● when the first run of a given test is automated
  ■ MBT, exploratory test automation, automated test design
  ■ keyword-driven (e.g. users populate a spreadsheet)

● find bugs in parts we wouldn’t have tested?
  ■ indirect! (a direct result of running more tests)

1.13

Good objectives for test automation

● realistic and achievable
● short and long term
● regularly re-visited and revised
● measurable
● objectives for testing and for automation should be different
● automation should support testing activities

Pattern: SET CLEAR GOALS

1.14


Trying to get started: Tessa Benzie

■ consultancy to start the automation effort
  ► a project needs a champion – hired someone
  ► training first, something else next, etc.

■ contract test manager – more consultancy
  ► bought a tool – now used by a couple of contractors
  ► TM moved on; new QA manager has other priorities

■ just wanting to do it isn’t enough
  ► needs dedicated effort
  ► they now have "football teams" of manual testers

Chapter 29, p 535, Experiences of Test Automation

1.15

Managing Successful Test Automation

Contents (Session 1: Managing)

Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)

1.16


What is an automated test?

● a test!
  ■ designed by a tester for a purpose

● the test is executed
  ■ implemented / constructed to run automatically using a tool
  ■ or run manually

● who decides which tests to run?
● who decides how a test is run?

1.17

Test manager’s dilemma

● who should undertake automation work?
  ■ not all testers can automate (well)
  ■ not all testers want to automate
  ■ not all automators want to test!

● conflict of responsibilities
  ■ (if you are both tester and automator)
  ■ should I automate tests or run tests manually?

● get additional resources as automators?
  ■ contractors? borrow a developer? tool vendor?

1.18


Relationships

[Analogy: the test tool is the engine; the test infrastructure is the car; the test cases are the passengers; the tester is the driver; the test automator is the mechanic.]

1.19

Responsibilities

Testers
● test the software
  ■ design tests
  ■ select tests for automation
    ► requires planning / negotiation
● execute automated tests
  ■ should not need detailed technical expertise
● analyse failed automated tests
  ■ report bugs found by tests
  ■ problems with the tests may need help from the automation team

Automators
● automate tests (requested by testers)
● support automated testing
  ■ allow testers to execute tests
  ■ help testers debug failed tests
  ■ provide additional tools
● predict
  ■ maintenance effort for software changes
  ■ cost of automating new tests
● improve the automation
  ■ more benefits, less cost

1.20


Testing versus automation

Good testing
● is effective
  ■ finds most of the faults (90+%)
  ■ gives confidence
● is efficient
  ■ uses a few small test cases
● is flexible
  ■ can use different subsets of test cases for different test objectives

Good automation
● is easy to use
  ■ flexible: supports different requirements for automation
  ■ responsive: quick changes when needed
● is cheap to use
  ■ build & maintain automated tests
  ■ failure analysis
● improves over time

1.21

Roles for automation

● Testware architect
  ■ designs the overall structure for the automation

● Champion
  ■ "sells" automation to managers and testers

● Tool specialist / toolsmith
  ■ technical aspects, licensing, updates to the tool

● Automated script developers
  ■ write new scripts as needed (e.g. keywords)
  ■ debug automation problems

1.22


Managing Successful Test Automation

Contents (Session 1: Managing)

Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)

1.23

Agile automation: Lisa Crispin

■ starting point: buggy code, new functionality needed, whole team regression tests manually
■ testable architecture (open source tools)
  ► want unit tests automated (TDD), start with new code
  ► start with GUI smoke tests for regression
  ► business logic in the middle level with FitNesse

■ 100% of regression tests automated in one year
  ► selected set of smoke tests for coverage of stories

■ every 6 months, an engineering sprint on the automation
■ key success factors
  ► management support & communication
  ► whole-team approach, celebration & refactoring

Chapter 1, pp 17-32, Experiences of Test Automation

1.24


Automation and agile

● agile automation: apply agile principles to automation
  ■ multidisciplinary team
  ■ automation sprints
  ■ refactor when needed

● fitting automation into agile development
  ■ ideal: automation is part of "done" for each sprint
    ► Test-Driven Development = write and automate tests first
  ■ alternative: automation in the following sprint
    ► may be better for system-level tests

See www.satisfice.com/articles/agileauto-paper.pdf (James Bach)

1.25

Automation in agile/iterative development

[Diagram: in each iteration, testers manually test the features of that release (A, then B, then C, …); automators automate the best of those tests, and testers run the growing automated regression suite (A; A B; A B C; … F E D C B A) in later iterations.]

1.26


Requirements for agile test framework

● support manual and automated testing
  ■ using the same test construction process

● support fully manual execution at any time
  ■ requires a good naming convention for components

● support mixed manual + automated execution
  ■ so a test can be used before it is 100% automated

● implement reusable objects
● allow "stubbing" objects before the GUI is available

Source: Dave Martin, LDSChurch.org, email

1.27

A tale of two projects: Ane Clausen

■ Project 1: 5 people part-time, within the test group
  ► no objectives, no standards, no experience, unstable
  ► after 6 months it was closed down

■ Project 2: 3 people full-time, 3-month pilot
  ► worked on two (easy) insurance products, end to end
  ► 1st month: learn and plan; 2nd & 3rd months: implement
  ► started with simple, stable, positive tests, easy to do
  ► close cooperation with business, developers, delivery
  ► weekly delivery of automated Business Process Tests

■ after 6 months, automated all insurance products

Chapter 6, pp 105-128, Experiences of Test Automation

1.28


Managing Successful Test Automation

Contents (Session 1: Managing)

Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)

1.29

Pilot project

● reasons
  ■ you’re unique
  ■ many variables / unknowns / options
  ■ learn how to start / improve

● benefits
  ■ find the best way for you
  ■ solve problems once
  ■ establish confidence (based on experience)
  ■ set realistic targets

● objectives
  ■ demonstrate tool value
  ■ gain experience / skills in the use of the tool
  ■ identify changes to the existing test process
  ■ set internal standards and conventions
  ■ refine the assessment of costs and achievable benefits

1.30


Characteristics of a pilot project

P – Planned: resourced, targets, contingency
I – Important: full-time work, worthwhile tests
L – Learning: informative, useful, revealing
O – Objective: quantified, not subjective
T – Timely: short term, focused

1.31

What to explore in the pilot

● build / implement automated tests (architecture)
  ■ different ways to build stable tests (e.g. 10 – 20)

● maintenance
  ■ different versions of the application
  ■ reduce maintenance for the most likely changes

● failure analysis
  ■ support for identifying bugs
  ■ coping with common bugs affecting many automated tests

Also: naming conventions, reporting results, measurement

1.32


After the pilot…

● having processes & standards is only the start
  ■ 30% of the effort goes on the new process
  ■ 70% on deployment
    ► marketing, training, coaching
    ► feedback, focus groups, sharing what’s been done

● the (psychological) Change Equation
  ■ change only happens if (x + y + z) > w
  ■ (broadly: dissatisfaction, vision and first steps together must outweigh the resistance to change)

Source: Erik van Veenendaal, successful test process improvement

1.33

Managing Successful Test Automation

Contents (Session 1: Managing)

Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)

1.34


Why measure automation?

● to justify and confirm starting automation
  ■ business case for the purchase/investment decision; to confirm ROI has been achieved, e.g. after a pilot
  ■ both compare manual vs automated testing

● to monitor ongoing automation "health"
  ■ for increased efficiency, continuous improvement
  ■ build time, maintenance time, failure analysis time, refactoring time
    ► ongoing costs – what are the benefits?
  ■ monitor your automation objectives

1.35

Useful measures

● a useful measure "supports effective analysis and decision making, and can be obtained relatively easily" (Bill Hetzel, "Making Software Measurement Work", QED, 1993)
● easy measures may be more useful even though less accurate (e.g. car fuel economy)
● ‘useful’ depends on objectives, i.e. what you want to know

1.36


EMTE – what is it?

● Equivalent Manual Test Effort
  ■ given a set of automated tests,
  ■ EMTE is how much effort it would take to run those tests manually

● note
  ■ you would not actually run these tests manually
  ■ EMTE is the effort you would have spent if you had run the tests manually
  ■ EMTE can be used to show some test automation benefit

1.37
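A quick illustration, with made-up figures: if a nightly automated run covers tests that would take two testers three days to execute by hand, each run delivers an EMTE of 2 × 3 = 6 person-days, however little machine time it actually uses.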

Monitoring test automation health

● important to do (often neglected)
  ■ need to distinguish between test automation progress and test automation health

● progress examples
  ■ number of tests automated
  ■ coverage achieved by automated tests
  ■ number of test cycles executed per release

● health examples
  ■ benefits
  ■ build cost
  ■ analysis cost
  ■ maintenance cost

1.38

ROI = (benefit − cost) / cost


Internal objectives for test automation

● provide automation services
  ■ efficient building, use and maintenance of automated tests
  ■ decrease automation costs over time
  ■ increase automation benefits
    ► savings, ease of use, flexibility

● measured ROI (appropriate for test objectives)
  ■ Equivalent Manual Test Effort (EMTE)
    ► additional test hours
    ► hours of unattended tests performed
  ■ proportion of unattended testing
  ■ increased coverage
  ■ reduced elapsed time

1.39

Measure benefit

● equivalent manual test effort (EMTE)
  ■ hours of additional testing
  ■ hours of unattended testing

● number of tests
  ■ tests executed
  ■ additional (new) tests
  ■ repeated tests

● number of test cycles
  ■ additional cycles

● increased coverage

Suggestion: relate the target to the total cost of automation (e.g. benefit 10 times total cost), or measure the degree to which automation has supported testers in achieving their objectives.

1.40


An example comparative benefits chart

[Bar chart comparing manual vs automated testing on four measures: execution speed (14× faster), times run (5× more often), data variety (4× more data), tester effort (12× less effort).]

ROI spreadsheet – email me for a copy

1.41

Measure build effort

● time taken to automate tests
  ■ hours to add new or existing manual tests
  ■ average across different test types

● proportion of equivalent manual test effort
  ■ e.g. 1 hour to automate a 30-minute manual test = 2 times the equivalent manual test effort

Suggested target: < 2 times, decreasing 10% per year (put your own, more appropriate, figures here)

1.42


Measure failure analysis effort

● analysis effort for each test
  ■ captured in the defect report
  ■ effort from first recognition through to resumption of test execution
  ■ average hours (or minutes) per failed test case
    ► needs comparison with the same measure for manual testing
    ► must also monitor defect reporting effectiveness
      – e.g. how many ‘non-reproducible’ reports

Suggested target: X minutes? Trend: stable (put your own, more appropriate, figure here)

1.43

Measure maintenance effort

● maintenance effort of automated tests
  ■ percentage of test cases requiring maintenance
  ■ average effort per test case
  ■ percentage of equivalent manual test effort

Suggested target: < 10%. Trend: stable or decreasing (put your own, appropriate, figure here)

1.44


Recommendations

● don’t measure everything!
● choose three or four measures
  ■ applicable to your most important objectives
● monitor for a few months
  ■ see what you learn
● change measures if they don’t give useful information

1.45

Managing Successful Test Automation

Contents (Session 1: Managing)

Test automation objectives
Responsibilities
Automation in agile
Pilot project
Measures for automation
Return on Investment (ROI)

1.46


Is this Return on Investment (ROI)?

● tests are run more often
● tests take less time to run
● it takes less human effort to run tests
● we can test (cover) more of the system
● we can run the equivalent of days / weeks of manual testing in a few minutes / hours
● faster time to market

These are (good) benefits, but they are not ROI. ROI = (benefit − cost) / cost (a worked example follows below).

1.47
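A worked example with illustrative figures: if building and maintaining the automation costs 100 hours of effort and it saves 250 hours of manual testing effort, then ROI = (250 − 100) / 100 = 1.5, i.e. 150%.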

Examples of ROI achieved

● Michael Snyman, South African bank (Ch 29.13)
  ■ US$4m testing project, automation cost $850K
  ■ savings $8m, ROI 900%

● Henri van de Scheur, database testing (Ch 2)
  ■ results: 2400 times more efficient

● Stefan Mohacsi & Armin Beer, European Space Agency (Ch 9)
  ■ MBT, break-even after four test cycles

from: Experiences of Test Automation book

1.48


How important is ROI?

● ROI can be dangerous
  ■ the easiest thing to measure is tester time
  ■ may give the impression that tools replace people

● "automation is an enabler for success, not a cost reduction tool"
  ► Yoram Mizrachi, "Planning a mobile test automation strategy that works", ATI magazine, July 2012

● many achieve lasting success without measuring ROI (depends on your context)
  ■ you do need to measure benefits (and publicize them)

1.49

Managing Successful Test Automation

Summary: key points

• Assign responsibility for automation (and testing)
• Use a pilot project to explore good methods
• Know your automation objectives
• Measure what’s important to you
• Show ROI from automation

Note in your Summary Sheet the key points for you from this session. Now complete the exercise!

1.50


Testware Architecture

Managing Successful Test Automation

1 Managing, 2 Architecture, 3 Pre- and Post, 4 Scripting, 5 Comparison, 6 Advice

Ref. Chapter 5: Testware Architecture, "Software Test Automation"

2.1

Managing Successful Test Automation

Contents (Session 2: Architecture)

Importance of a testware architecture
What needs to be organised

2.2


Testware architecture

● the organisation of, and relationships between, artefacts
  ■ scripts, inputs, test data, test descriptions, expected results, actual results, log files, etc.

● if you create an automated test:
  ■ do you know what already exists that you can use?
    ► scripts, data, tools
  ■ do you know where to put the artefacts?
    ► script(s), data, expected results, etc.
  ■ do you know what names to use?

● if you execute an automated test:
  ■ do you know where to find all the test results?
  ■ do you know how to analyse a test failure?

2.3

Testware architecture (test framework)

[Diagram: testers write tests (in a DSTL) using high-level keywords; the test automator(s) maintain structured scripts and structured testware; the test execution tool runs the scripts. Abstraction at the keyword level makes it easier to write automated tests (widely used); abstraction between the structured scripts and the tool makes it easier to change tools and maintain (long life).]

2.4


Architecture – abstraction levels

● the most critical factor for success
  ■ worst case: close ties between scripts, tool & tester

● separate the testers’ view from technical aspects
  ■ so testers don’t need tool knowledge
    ► for widespread use of automation
    ► scripting techniques address this

● separate tests from the tool – modular design
  ■ likely changes confined to one / few module(s)
  ■ re-use of automation functions
  ■ for minimal maintenance and long-lived automation

2.5

Localised regimes

● "everyone will do the sensible thing"
  ■ most will do something sensible, but different

● "use the tool however it best suits you"
  ■ ignores the cost of learning how best to automate

● problems include:
  ■ effort wasted repeatedly solving the same problems in different ways
  ■ no re-use between teams
  ■ multiple learning curves

2.6


Easy way out: use the tool’s architecture

● the tool will have its own way of organising tests
  ■ where to put things (for the convenience of the tool!)
  ■ will "lock you in" to that tool – good for vendors!

● a better way (gives independence from tools)
  ■ organise your tests to suit you
  ■ as part of pre-processing, copy files to where the tool needs (expects) to find them
  ■ as part of post-processing, copy them back to where you want things to live

2.7

Tool-specific vs generic scripts

[Diagram: with tool-specific scripts, every script is written for Tool A, and moving to Tool B means rewriting them all. With generic scripts, most scripts are tool-independent (G), with only a thin tool-specific layer (A or B), so a new environment or tool needs only that layer replaced.]

2.8


Test-specific vs reused scripts

[Diagram: with test-specific scripts, each test (Test 1, Test 2, …) has its own set of scripts (T1, T2). With reused scripts, each test keeps only a small test-specific part and shares common scripts (R), driven by a test definition.]

2.9

Test definition

TESTNAME: <name of test>
PURPOSE: <single sentence explaining test purpose>
MATERIALS: <a list of the artefacts used by this test>
RESULTS: <a list of the artefacts produced by this test>
SETUP: <sequence of keywords implementing setup actions>
TEARDOWN: <sequence of keywords implementing teardown actions>
EXECUTION: <sequence of keywords implementing the test actions>

(an illustrative filled-in definition follows below)

2.10
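As an illustration, here is a hypothetical filled-in definition for the Scribble example used later in this session; the keyword names (OpenApplication, EditDocument, etc.) are invented for the sketch, not taken from the tutorial:

TESTNAME: ScribbleCountries
PURPOSE: Edit the countries document in Scribble and save it under a new name.
MATERIALS: countries.dcm, countries.scp, open.scp, saveas.scp
RESULTS: countries2.dcm, log.txt, diffs.txt
SETUP: OpenApplication Scribble; OpenDocument countries.dcm
TEARDOWN: CloseApplication
EXECUTION: EditDocument countries.scp; SaveDocumentAs countries2.dcm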


Test definition additional info

● measures
  ■ expected run time (of the automated test)
  ■ EMTE (equivalent manual test effort)
  ■ others?

● attributes, such as test selector tags
  ■ tag specific sets of tests so they can be selected to be run
  ■ examples: smoke tests, short tests, bug-fix tests, long tests, specific environment tests

2.11

General control script

For each test to be executed
    Read keyword definition
    Verify keyword definition
    Check specified materials exist
    Execute setup
    Execute test
    Perform post-execution comparison
    Execute teardown
    Check specified results exist
    Report results
EndFor

(a Python sketch of this loop follows below)

2.12
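A minimal sketch of such a control script in Python, assuming test definitions are plain dicts and keywords map to Python callables; all names here are illustrative, not from the tutorial:

import os

KEYWORDS = {
    # keyword name -> implementation; registered by the automators
    "noop": lambda *args: None,
}

def execute_keyword(step):
    name, *args = step
    KEYWORDS[name](*args)

def run_test(defn):
    """Run one test definition through the generic control loop."""
    missing = [m for m in defn.get("materials", []) if not os.path.exists(m)]
    if missing:                                   # check specified materials exist
        return "BLOCKED: missing materials %s" % missing
    for step in defn.get("setup", []):            # execute setup
        execute_keyword(step)
    try:
        for step in defn.get("execution", []):    # execute test
            execute_keyword(step)
    finally:
        for step in defn.get("teardown", []):     # execute teardown
            execute_keyword(step)
    absent = [r for r in defn.get("results", []) if not os.path.exists(r)]
    if absent:                                    # check specified results exist
        return "UNKNOWN: expected results not produced %s" % absent
    return "PASS"  # post-execution comparison would refine this status

for test in [{"execution": [("noop",)]}]:         # report results for each test
    print(run_test(test))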


Learning is incremental: Molly Mahai

■ book learning – knew about the investment needed, not replacing people, not automating everything, etc.
■ set up a good architecture? books are not enough
■ picked something to get started
  ► after a while, realised its limitations
  ► too many projects, library cumbersome

■ re-designed the architecture, moved things around
■ didn’t know what we needed till we experienced the problems for ourselves
  ► like trying to educate a teenager

Chapter 29, pp 527-528, Experiences of Test Automation

2.13

Managing Successful Test Automation

Contents (Session 2: Architecture)

Importance of a testware architecture
What needs to be organised

2.14


A test for you

● show me one of your automated tests running
  ■ how long will it take before it runs?

● typical problems
  ■ fails: forgot a file, couldn’t find a called script
  ■ can’t do it (yet):
    ► Joe knows how but he’s out
    ► environment not right
    ► haven’t run it in a while
    ► don’t know what files need to be set up for this script

● why not: start up your framework, select a test, GO

2.15

Key issues

● scale
  ■ the number of scripts, data files, results files, benchmark files, etc. will be large and growing

● shared scripts and data
  ■ efficient automation demands reuse of scripts and data through sharing, not multiple copies

● multiple versions
  ■ as the software changes, so too will some tests, but the old tests may still be required

● multiple environments / platforms

2.16


Terms – testware artefacts

Testware divides into Products and By-Products.
Test Materials (products): scripts, data, inputs, expected results, documentation (specifications), environment utilities.
Test Results (by-products): actual results, logs, status, summary, differences.

2.17

Testware for example test case (Scribble)

[Diagram: the test script countries.scp (test input) drives Scribble, using the shared scripts open.scp and saveas.scp. The initial document countries.dcm is edited and saved as countries2.dcm; this edited document is compared against the expected output countries2.dcm, with differences written to diffs.txt and execution details to the log log.txt. The test is described by the test specification testspec.txt.]

2.18


Testware by type

Test Materials (products): countries.scp, testdef.txt, countries.dcm, countries2.dcm (expected output), open.scp, saveas.scp
Test Results (by-products): countries2.dcm (actual output), log.txt, diff.txt, status.txt

2.19

Benefits of standard approach

● tools can assume knowledge (architecture)
  ■ they need less information, are easier to use, and fewer errors will be made

● many tasks can be automated
  ■ checking (completeness, interdependencies); documentation (summaries, reports); browsing

● portability of tests
  ■ between people, projects, organisations, etc.

● shorter learning curve

2.20


Testware Sets

● a Testware Set is a collection of testware artefacts
● four types:
  ► Test Set – one or more test cases
  ► Script Set – scripts used by two or more Test Sets
  ► Data Set – data files used by two or more Test Sets
  ► Utility Set – utilities used by two or more Test Sets

● good software practice: look for what is common, and keep it in only one place!
● keep your testware DRY!

2.21

Testware library

● a repository of the master versions of all Testware Sets
  ■ "uncategorised scripts are worse than no scripts" (Onaral & Turkmen)

● configuration management is critical
  ■ "If it takes too long to update your test library, automation introduces delay instead of adding efficiency" (Linda Hayes, AST magazine, Sept 2010)

Example library contents, with version numbers: d_ScribbleTypical v1, v2; d_ScribbleVolume v1; s_Logging v1; s_ScribbleDocument v1, v2, v3; s_ScribbleNavigate v1; t_ScribbleBreadth v1; t_ScribbleCheck v1; t_ScribbleFormat v1; t_ScribbleList v1, v2, v3; t_ScribblePrint v1; t_ScribbleSave v1; t_ScribbleTextEdit v1, v2; u_ScribbleFilters v1; u_GeneralCompare v1

2.22


Separate test results

[Diagram: a single test suite may be used on subtly different versions of the software under test, producing different sets of test results that we wish to keep.]

2.23

Incremental approach: Ursula Friede

■ large insurance application
■ first attempt failed
  ► no structure (architecture), data embedded in scripts

■ four phases (unplanned)
  ► parameterized (dates, claim numbers, etc.)
  ► parameters stored in a single database for all scripts
  ► improved error handling (non-fatal unexpected events)
  ► automatic system restart

■ benefits: saved 200 man-days per test cycle
  ► €120,000!

Chapter 23, pp 437-445, Experiences of Test Automation

2.24


Managing Successful Test Automation

Summary: key points

• Structure your automation testware to suit you
• Testware comprises many files, etc., which need to be given a home
• Use good software development standards

Note in your Summary Sheet the key points for you from this session.

2.25


Pre- and Post-Processing

Managing Successful Test Automation

1 Managing, 2 Architecture, 3 Pre- and Post, 4 Scripting, 5 Comparison, 6 Advice

Ref. Chapter 6: Automating Pre- and Post-Processing, "Software Test Automation"

3.1

Managing Successful Test Automation

Contents (Session 3: Pre- and Post-Processing)

Automating more than tests
Test status

3.2


What is pre- and post-processing?

● pre-processing
  ■ automation of the setup tasks necessary to fulfil test case prerequisites

● post-processing
  ■ automation of the post-execution tasks necessary to complete verification and housekeeping

● these terms are useful because:
  ■ there are lots of such tasks, they come in packs, many are the same, and they can easily be automated

3.3

Automated tests vs automated testing

Automated tests (execution automated, the surrounding process manual):
- Select / identify test cases to run
- Set up the test environment: create the environment, load test data
- Repeat for each test case: set up test prerequisites, execute, compare results, log results, analyse test failures, report defect(s), clear up after the test case
- Clear up the test environment: delete unwanted data, save important data
- Summarise results

Automated testing (the whole process automated):
- Select / identify test cases to run
- Set up the test environment: create the environment, load test data
- Repeat for each test case: set up test prerequisites, execute, compare results, log results, clear up after the test case
- Clear up the test environment: delete unwanted data, save important data
- Summarise results
- Then, manually: analyse test failures and report defects

3.4


Examples

● pre-processing
  ■ copy scripts from a common script set (e.g. open, saveas)
  ■ delete files that shouldn’t exist when the test starts
  ■ set up data in files
  ■ copy files to where the tool expects to find them
  ■ save the normal default file and rename the test file to the default (for this test)

● post-processing
  ■ copy results to where the comparison process expects to find them
  ■ delete actual results if they match the expected results (or archive if required)
  ■ rename a file back to its normal default

(a sketch follows below)

3.5
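A minimal sketch of pre- and post-processing around a tool run; the "testtool" command line and the directory layout are invented for the illustration:

import shutil
import subprocess
from pathlib import Path

TESTWARE = Path("testware/scribble")   # where *we* keep our artefacts
TOOL_DIR = Path("tool_workspace")      # where the tool expects to find files

def pre_process(test):
    TOOL_DIR.mkdir(exist_ok=True)
    for f in (TESTWARE / test / "materials").iterdir():
        shutil.copy(f, TOOL_DIR)       # copy files to where the tool looks

def post_process(test):
    results = TESTWARE / test / "results"
    results.mkdir(parents=True, exist_ok=True)
    for f in TOOL_DIR.glob("*.log"):
        shutil.move(str(f), results)   # move results back to our structure

def run(test):
    pre_process(test)
    subprocess.run(["testtool", "--run", test], check=False)  # hypothetical CLI
    post_process(test)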

Outside the box: Jonathan Kohl

■ task automation (throw-away scripts)
  ► entering data sets into 2 browsers (verified by watching)
  ► installing builds, copying test data

■ supporting manual exploratory testing
■ testing under the GUI to the database ("side door")
■ don’t believe everything you see
  ► 1000s of automated tests passed too quickly
  ► monitoring tools to see what was happening
  ► "if there’s no error message, it must be ok"
    – defects didn’t make it to the test harness
    – the overloaded system ignored data that was wrong

Chapter 19, pp 355-373, Experiences of Test Automation

3.6


Automation +

[Diagram: traditional test automation covers execution and comparison. Around it sit a DSTL and structured testware architecture, loosened oracles, exploratory test automation (ETA) and monkey testing, and manual testing.]

3.7

Managing Successful Test Automation

Contents (Session 3: Pre- and Post-Processing)

Automating more than tests
Test status

3.8


Test status – pass or fail?

● a tool cannot judge pass or fail
  ■ only "match" or "no match"
  ■ assumption: the expected results are correct

● when a test fails (i.e. the software fails)
  ■ need to analyse the failure
    ► a true failure? write up a bug report
    ► a test fault? fix the test (e.g. the expected result)
    ► a known bug, or a failure affecting many automated tests?
      – this can eat a lot of time in automated testing
      – solution: additional test statuses

3.9

Test statuses for automation

Compare to              | No differences found | Differences found
(true) expected outcome | Pass                 | Fail
expected-fail outcome   | Expected Fail        | Unknown
don’t know / missing    | Unknown              | Unknown

• other possible additional test statuses
  – test blocked
  – environment problem (e.g. network down, timeouts)
  – set-up problems (files missing)
  – test needs to be changed but not done yet

(a sketch of the table as code follows below)

3.10
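The status table translates directly into code. A minimal sketch, using an illustrative enum rather than any particular tool’s API:

from enum import Enum

class Status(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    EXPECTED_FAIL = "Expected Fail"
    UNKNOWN = "Unknown"

def test_status(expected_kind, differences_found):
    """expected_kind: 'expected', 'expected_fail', or 'missing'."""
    table = {
        ("expected", False): Status.PASS,
        ("expected", True): Status.FAIL,
        ("expected_fail", False): Status.EXPECTED_FAIL,
        ("expected_fail", True): Status.UNKNOWN,
    }
    return table.get((expected_kind, differences_found), Status.UNKNOWN)

print(test_status("expected", True))   # Status.FAIL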


Managing Successful Test Automation

Summary: key points

• Pre- and post-processing automate setup and clear-up tasks
• Test status is more than just pass / fail

Note in your Summary Sheet the key points for you from this session.

3.11


Scripting Techniques

Managing Successful Test Automation

Ref. Chapter 3: Scripting Techniques, "Software Test Automation"

1 Managing, 2 Architecture, 3 Pre- and Post, 4 Scripting, 5 Comparison, 6 Advice

4.1

Managing Successful Test Automation

Contents (Session 4: Scripting)

Objectives of scripting techniques
Different types of scripts
Domain-specific test language

4.2


Scripting

● the backbone of automation
● four broad approaches
  ■ linear (test-specific programming required)
  ■ structured
  ■ data-driven
  ■ keyword-driven (framework supports abstraction)

● issues
  ■ more code means more maintenance
  ■ programmers like programming
    ► they may write new code even when unnecessary

● do you know your script-to-test ratio?

4.3

Objectives of scripting techniques

● implement your testware architecture
● reduce costs
  ■ make it easier to build automated tests
    ► avoid duplication
  ■ avoid excessive maintenance costs
    ► greater reuse of functional, modular scripting

● greater return on investment
  ■ better testing support
  ■ greater portability
    ► across environments & hardware platforms

● enhance capabilities
  ■ achieve more testing for the same (or less) effort
    ► testing beyond traditional manual approaches

(probably best achieved by data-driven or keyword-driven scripting)

4.4


Progression of automation implementation

[Diagram: from low-level instructions, to structured code with re-usable modules / a script library, to data-driven (input data separated from manual test scripts), to keyword-driven (test definitions such as "get quote – motor, age, …", "create policy – motor, age, …"). Moving right, maintenance gets easier and the work shifts from the Technical Test Analyst to the Test Analyst.]

Source: Mark Fewster, Grove Consultants (grove.co.uk)

4.5

Managing Successful Test Automation

Contents (Session 4: Scripting)

Objectives of scripting techniques
Different types of scripts
Domain-specific test language

4.6


Example application: Tax Calculator

[Mock UI: input fields for Name (e.g. Fred) and Earnings for January (£100), February (£80), March (£110), totalling £290; calculated outputs Tax band (B) and Tax due (£45).]

Test 1:
1. open application
2. enter name & earnings
3. save results
4. close application

4.7

Structured scripting

[Diagram: testers create test scripts from (manual) test procedures; the scripts contain high-level instructions and test data, and call a script library of low-level "how to" instructions maintained by the automators; the test tool runs them against the software under test.]

4.8


About structured scripting

● a script library for re-used scripts
  ■ part of the testware architecture implementation
    ► shared scripts interface to the software under test
    ► all other scripts interface to the shared scripts

● reduced costs
  ■ maintenance
    ► fewer scripts affected by software changes
  ■ build
    ► individual control scripts are smaller and easier to read (a ‘higher-level’ language is used)

4.9

Example structured scripts

Main test script:

Sub Test1()
    Call OpenApplication("TaxCalculator")
    Call CalculateTax("Pooh", 100, 150, 125)
    Call SaveAsAndClose("Test 1 Results")
End Sub

Supporting scripts:

Sub OpenApplication(Application)
    Workbooks.Open Filename:="ATT " & Application & ".xls"
End Sub

Sub CalculateTax(Name, M1, M2, M3)
    ActiveCell.FormulaR1C1 = Name
    Range("C6").Select
    ActiveCell.FormulaR1C1 = M1
    Range("C7").Select
    ActiveCell.FormulaR1C1 = M2
    Range("C8").Select
    ActiveCell.FormulaR1C1 = M3
End Sub

Sub SaveAsAndClose(Filename)
    ActiveWorkbook.SaveAs Filename:=Filename & ".xls", _
        FileFormat:=xlNormal, Password:=""
    ActiveWorkbook.Close
End Sub

Additional tests can now be created more easily.

4.10


Usable (re-usable) scripts

● to re-use a script, you need to know:
  ■ what does this script do?
  ■ what does it need?
  ■ what does it deliver?
  ■ what state is assumed when it starts?
  ■ what state is left when it finishes?

● keep this information in a standard place for every script
  ■ so you can search for the answers to these questions

4.11

Data-driven

[Diagram: testers create data files of test data; automators create control scripts holding the high-level instructions, which read the data files and call the script library of low-level "how to" instructions; the test tool runs them against the software under test.]

4.12


About data driven

● test data is extracted from the scripts
  ■ and placed into separate data files

● a control script reads data from a data file
  ■ one script implements several tests by reading different data files (reduces script maintenance per test)

● reduced build cost
  ■ faster and easier to automate similar test procedures
  ■ many test variations using different data

● multiple control scripts required
  ■ one for each type of test (with varying data)

4.13

Example data-driven script

Data file TaxCalcData.csv:

Name,   Earn1, Earn2, Earn3
Pooh,   100,   150,   125
Piglet, 75,    90,    80
Roo,    120,   110,   65

Main script (runs all the tests in the data file):

Sub RunTests()
    Open TaxCalcData.csv
    For Each Row In File
        Read Name, Earn1, Earn2, Earn3
        Call OpenApplication("TaxCalculator")
        Call CalculateTax(Name, Earn1, Earn2, Earn3)
        Call SaveAsAndClose(Name & " Results")
    Next Row
End Sub

4.14


Keywords (basic)

[Diagram: testers create test definitions (high-level instructions and test data) from (manual) test procedures; a single control script – an "interpreter" / ITE – reads them and calls the script library of low-level "how to" instructions and keyword scripts; the test tool runs them against the software under test.]

4.15

About keywords

● a single control script (an Interactive Test Environment)
  ■ improvements to it benefit all tests (ROI)
  ■ it extracts the high-level instructions from the test definitions

● ‘test definition’
  ■ independent of the tool scripting language
  ■ a language tailored to the testers’ requirements
    ► software design
    ► application domain
    ► business processes

● more tests, fewer scripts
  ■ e.g. unit test: calculate one interest payment; system test: summarise interest for one customer; acceptance test: end-of-day run, all interest payments

4.16


Example keyword-driven scripts

Keyword file (option 1):
CalculateTax Pooh, 100, 150, 125
CalculateTax Piglet, 75, 90, 80
CalculateTax Roo, 120, 110, 65

Keyword file (option 2):
CalculateTax
  Pooh, 100, 150, 125
  Piglet, 75, 90, 80
  Roo, 120, 110, 65

Keyword file (option 3):
CalculateTax Datafile.txt

Datafile.txt:
Name    Earn1  Earn2  Earn3
Pooh    100    150    125
Piglet  75     90     80
Roo     120    110    65

Keyword file (Robot Framework):
Test Case   Action         Arg1    Arg2  Arg3  Arg4
Check Tax   Calculate Tax  Pooh    100   150   125
            Calculate Tax  Piglet  75    90    80
            Calculate Tax  Roo     120   110   65

(a sketch of an interpreter follows below)

4.17
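A minimal sketch of the single control script ("interpreter") for the option 1 file format above; the CalculateTax implementation here is an illustrative stand-in for a real UI-driving script:

KEYWORDS = {
    "CalculateTax": lambda name, e1, e2, e3:
        print(f"driving the UI: {name} earns {e1}, {e2}, {e3}"),
}

def run_keywords(lines):
    for line in lines:
        line = line.strip()
        if not line:
            continue
        keyword, _, rest = line.partition(" ")   # first word is the keyword
        args = [a.strip() for a in rest.split(",")]
        KEYWORDS[keyword](*args)                 # dispatch to the implementing script

run_keywords(["CalculateTax Pooh, 100, 150, 125"])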

Minimising automation code

● more code means more maintenance
  ■ it is better to reuse existing code than to write new code that does the same thing
  ■ achieved by
    ► clear objectives
      – e.g. automate for minimal maintenance
    ► use of an appropriate scripting approach
      – abstraction
    ► careful design
      – consider sets of tests, not individual tests
    ► consistency (standards)

4.18


Managing Successful Test Automation

Contents (Session 4: Scripting)

Objectives of scripting techniques
Different types of scripts
Domain-specific test language

4.19

Merged test procedure / test definition

[Diagram: test procedures and test definitions merge into one document of high-level instructions and test data – a language for testers; a single control script ("interpreter" / ITE) reads it, calling the script library of low-level "how to" instructions and keyword scripts; the test tool runs them against the software under test.]

4.20


Domain Specific Test Language (DSTL)

● test procedures and test definitions are similar
  ■ both describe sequences of test cases
    ► giving test inputs and expected results

● combine them into one document
  ■ it can include all the test information
  ■ avoids an extra ‘translation’ step
  ■ testers specify tests regardless of manual/automated execution
  ■ automators implement the required keywords

4.21

Keywords in the test definition language

● multiple levels of keywords are possible
  ■ high level for business functionality
  ■ low level for component testing

● composite keywords
  ■ define keywords as a sequence of other keywords
  ■ gives greater flexibility (testers can define composite keywords) but risks chaos

● format
  ■ freeform, structured, or a standard notation
    ► (e.g. XML)

4.22


Example use of keywords

Create a new account, order 2 items and check out:

Keyword        | Parameters
Create Account | Firstname: Edward, Surname: Brown, Email address: [email protected], Password: apssowdr
Order Item     | Item: 1579, Num Items: 3, Check Price for Items: 15.30
Order Item     | Item: 2598, Num Items: (blank = 1), Check Price for Items: 12.99
Checkout       | Total: 28.29

4.23

Documenting keywords

Name             | Name for this keyword
Purpose          | What this keyword does
Parameters       | Any inputs needed, outputs produced
Pre-conditions   | What needs to be true before using it, where valid
Post-conditions  | What will be true after it finishes
Error conditions | What errors it copes with, what is returned
Example          | An example of the use of the keyword

Source: Martin Gijsen. See also Hans Buwalda’s book & articles

4.24


Example keyword Create account

(* = mandatory)

Name             | Create account
Purpose          | Creates a new account
Parameters       | *First name: 2 to 32 characters
                 | *Last name: 2 to 32 characters
                 | *Email address: also serves as account id
                 | *Password: 4 to 32 characters
Pre-conditions   | Account doesn’t exist for this person
Post-conditions  | Account created (including email confirmation); order screen displayed
Error conditions | Account already exists
Example          | (see example)

4.25

Example keyword Order item

(* = mandatory)

Name             | Order item
Purpose          | Order one or more of a specific item
Parameters       | *Item number: 1000 to 9999, in catalogue
                 | Number of items wanted: 1 to Max-for-item; if blank, assumes 1
Pre-conditions   | Valid account logged in; item in stock (sufficient for the order); prices available for the item (including quantity discounts)
Post-conditions  | Item(s) appear in the shopping basket; number of available items decreased by the number ordered
Error conditions | Insufficient items in stock
Example          | (see example)

4.26


Implementing keywords

● ways to implement keywords
  ■ the scripting language (of a tool)
  ■ a programming language (e.g. Java)
  ■ use what your developers are most familiar with!

● ways of supporting a DSTL
  ■ a commercial, open source or home-grown framework
  ■ a spreadsheet or database for the test descriptions

4.27

Frameworks

● commercial tools
  ■ ATRT, ATSM, Axe, Certify, eCATT, FASTBoX, GUIdancer, Liberation, Ranorex, te52, TestComplete, TestDrive, TestingWhiz, Tosca Testsuite, zest

● open source
  ■ Cacique, FitNesse, Helium, Jameleon, JET, JSystem, Jubula, Maveryx, Open2Test, Power Tools, QAliber Test Builder, Rasta, Robot Framework, SAFS, SpecFlow, STAF, TAF, TAF Core, TestMaker, Xebium

● I can email you my Tool List
  ■ test execution and framework tools
  ■ [email protected]

4.28


Execution-tool-independent framework

[Diagram: test procedures/definitions feed a framework written in a tool-independent scripting language; tool-dependent script libraries connect it to one test tool or another, each driving its software under test, and some tests can still be run manually.]

4.29

Managing Successful Test Automation

Summary

● objectives of good scripting
  ■ reduce costs, enhance capabilities

● many types of scripting
  ■ structured, data-driven, keyword-driven

● keyword/DSTL is the most sophisticated
  ■ yields significant benefits

● increased productivity
  ■ customised front end, tool independence

Note in your Summary Sheet the key points for you from this session.

4.30


Automated Comparison

Managing Successful Test Automation

Ref. Chapter 4: Automated Comparison, "Software Test Automation"

1 Managing, 2 Architecture, 3 Pre- and Post, 4 Scripting, 5 Comparison, 6 Advice

5.1

Managing Successful Test Automation

Contents (Session 5: Comparison)

Automated test verification
Test sensitivity

5.2


Perverse persistence: Michael Williamson

■ testing Webmaster Tools at Google (new to testing)
■ QA used Eggplant (an image-processing tool)
■ a new UI broke the existing automation
■ only 4 or 5 functions were automated
■ comparing bitmap images – inaccurate and slow
■ testers had to do the automation maintenance
  ► not worth developers learning the tool’s language
■ after 6 months, they went for more appropriate tools
■ QA didn’t use the automation, tested manually!
  ► the tool was just running in the background

Chapter 17, pp 321-338, Experiences of Test Automation

5.3

Checking versus testing

■ checking confirms that things are as we think
  ► e.g. check that the code still works as before

■ testing is a process of exploration, discovery, investigation and learning
  ► e.g. what are the threats to value for stakeholders; provide information

■ checks are machine-decidable
  ► if it’s automated, it’s probably a check

■ tests require sapience
  ► including "are the checks good enough?"

Source: Michael Bolton, www.developsense.com/blog/2009/08/testing-vs-checking/

5.4


General comparison guidelines

● keep it as simple as possible
● document it well
● standardise as much as possible
● avoid bit-map comparison
● poor comparisons destroy good tests
● divide and conquer:
  ■ use a multi-pass strategy
  ■ compare different aspects in each pass

5.5

Two types of comparison

● dynamic comparison
  ■ done during test execution
  ■ performed by the test tool
  ■ can be used to direct the progress of the test
    ► e.g. if this fails, do that instead
  ■ failure information written to the test log (usually)

● post-execution comparison
  ■ done after the test execution has completed
  ■ good for comparing files or databases
  ■ can be separated from test execution
  ■ can have different levels of comparison
    ► e.g. compare in detail only if all high-level comparisons pass

5.6


Comparison types compared (Scribble example)

[Diagram: the test script scribble1.scp contains the test input and comparison instructions. Dynamic comparison happens during execution ("error message as expected?") and is written to the log log.txt. Post-execution comparison takes the edited document countries2.dcm, compares it with the expected output countries2.dcm, and writes differences to diffs.txt; countries.dcm is the initial document.]

5.7

Comparison process

● there are few tools for post-execution comparison
● simple comparators come with operating systems but do not have pattern matching
  ■ e.g. Unix ‘diff’, Windows ‘UltraCompare’
● text manipulation tools are widely available
  ■ sed, awk, grep, egrep, Perl, Tcl, Python
● use pattern-matching tools with a simple comparator to make a ‘comparison process’ (see the sketch below)
● use masks and filters for efficiency

5.8
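A minimal sketch of such a comparison process in Python: regex masks filter out volatile fields, then a simple comparator (difflib standing in for diff) decides the result. The masks shown are illustrative, anticipating the order-log example later in this session:

import difflib
import re

MASKS = [
    (re.compile(r"Date .*"), "Date <date>"),
    (re.compile(r"Order No \w+"), "Order No <orderno>"),
]

def filter_lines(lines):
    out = []
    for line in lines:
        for pattern, replacement in MASKS:
            line = pattern.sub(replacement, line)  # mask volatile fields
        out.append(line)
    return out

def compare(expected, actual):
    diff = list(difflib.unified_diff(filter_lines(expected),
                                     filter_lines(actual), lineterm=""))
    return "Pass" if not diff else "Fail:\n" + "\n".join(diff)

print(compare(["Date 11 May 2011", "Total_due $265.98"],
              ["Date 16 Aug 2011", "Total_due $265.98"]))   # Pass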


Managing Successful Test Automation

Contents (Session 5: Comparison)

Automated test verification
Test sensitivity

5.9

Test sensitivity

● the more data there is available:
  ■ the easier it is to analyse faults and debug

● the more data that is compared:
  ■ the more sensitive the test

● the more sensitive a test:
  ■ the more likely it is to fail
  ■ (this can be both a good and a bad thing)

5.10


Sensitive versus specific (robust) test

[Diagram: a test is supposed to change only one field of the test outcome. A specific test verifies that field only; a sensitive test verifies the entire outcome, so it also catches an unexpected change elsewhere.]

5.11

Too much sensitivity = redundancy

[Diagram: three tests, each changing a different field, with an unexpected change occurring for every test. If all the tests are sensitive, they all show the unexpected change (redundant failures); if all the tests are specific, the unexpected change is missed.]

5.12


Using test sensitivity

● sensitive tests:
  ■ few, at a high level
  ■ breadth / sanity-checking tests
  ■ good for regression / maintenance

● specific/robust tests:
  ■ many, at a detailed level
  ■ focus on specific aspects
  ■ good for development

A good test automation strategy will plan a combination of sensitive and specific tests.

5.13

Managing Successful Test Automation

Summary: key points

• Balance dynamic and post-execution comparison
• Balance sensitive and specific tests
• Use masking and filters to adjust sensitivity

Note in your Summary Sheet the key points for you from this session.

5.14


Comparison example

Expected output      | Actual output
Date 11 May 2011     | Login M Jones
Order No X43578      | Order No X54965
Login J Smith        | Add 1 mountain bike
Add 1 mouse          | Add_to_total $249.99
Add 1 mountain bike  | Add 1 toaster
Add_to_total $15.99  | Add_to_total $35.45
Add_to_total $249.99 | Logout J Smith
Total_due $265.98    | Total_due $285.34
Logout J Smith       | Date 16 Aug 2011

Has this test passed?

5.15

Simple automated comparison

Comparing line by line:

Expected output      | Actual output        | Result
Date 11 May 2011     | Login M Jones        | Fail
Order No X43578      | Order No X54965      | Fail
Login J Smith        | Add 1 mountain bike  | Fail
Add 1 mouse          | Add_to_total $249.99 | Fail
Add 1 mountain bike  | Add 1 toaster        | Fail
Add_to_total $15.99  | Add_to_total $35.45  | Fail
Add_to_total $249.99 | Logout J Smith       | Fail
Total_due $265.98    | Total_due $285.34    | Fail
Logout J Smith       | Date 16 Aug 2011     | Fail

5.16


Filter: alphabetical order

Expected output      | Actual output        | Result
Add 1 mountain bike  | Add 1 mountain bike  | Pass
Add 1 mouse          | Add 1 toaster        | Fail
Add_to_total $15.99  | Add_to_total $249.99 | Fail
Add_to_total $249.99 | Add_to_total $35.45  | Fail
Login J Smith        | Login M Jones        | Fail
Logout J Smith       | Logout J Smith       | Pass
Date 11 May 2011     | Date 16 Aug 2011     | Fail
Order No X43578      | Order No X54965      | Fail
Total_due $265.98    | Total_due $285.34    | Fail

The 1st Pass is a coincidence; the 2nd Pass is actually a bug!

5.17

Filters: replacement by object type

Expected output      | Actual output        | Result
Add <item>           | Add <item>           | Pass
Add <item>           | Add <item>           | Pass
Add_to_total $15.99  | Add_to_total $249.99 | Fail
Add_to_total $249.99 | Add_to_total $35.45  | Fail
Login J Smith        | Login M Jones        | Fail
Logout J Smith       | Logout J Smith       | Pass
Date <date>          | Date <date>          | Pass
Order No <orderno>   | Order No <orderno>   | Pass
Total_due $265.98    | Total_due $285.34    | Fail

This has helped eliminate things we aren’t interested in.

5.18


Filters: replacement for everything?

Expected output     | Actual output       | Result
Add <item>          | Add <item>          | Pass
Add <item>          | Add <item>          | Pass
Add_to_total $<amt> | Add_to_total $<amt> | Pass
Add_to_total $<amt> | Add_to_total $<amt> | Pass
Login <name>        | Login <name>        | Pass
Logout <name>       | Logout <name>       | Pass
Date <date>         | Date <date>         | Pass
Order No <orderno>  | Order No <orderno>  | Pass
Total_due $<amt>    | Total_due $<amt>    | Pass

Everything passed – isn’t this great? Actually, no: we are no longer checking the name or the total.

5.19

Variables – what needs to be done

Expected output           | Actual output             | Result
Add 1 <item>              | Add 1 <item>              | P
Add 1 <item>              | Add 1 <item>              | P
Add_to_total T1=$15.99    | Add_to_total T1=$249.99   | P
Add_to_total T2=$249.99   | Add_to_total T2=$35.45    | P
Login NAME=J Smith        | Login NAME=M Jones        | P
Logout J Smith =NAME?     | Logout J Smith =NAME?     | F
Date <date>               | Date <date>               | P
Order <orderno>           | Order <orderno>           | P
Total_due $265.98 =T1+T2? | Total_due $285.34 =T1+T2? | F

"Variable=" is implemented as "store, but ignore in the comparison"; "=Variable/expression" is implemented as "check it is equal to" (a sketch follows below).

5.20
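A minimal sketch of the store-and-check mechanism in Python; the "VAR=" / "=VAR?" syntax is mimicked with illustrative regexes, and expression checks such as =T1+T2? would be a further extension:

import re

STORE = re.compile(r"([A-Z][A-Z0-9]*)=(.+)$")            # e.g. "Login NAME=M Jones"
CHECK = re.compile(r"^(\w+) (.+) =([A-Z][A-Z0-9]*)\?$")  # e.g. "Logout J Smith =NAME?"

def process(actual_lines):
    stored, results = {}, []
    for line in actual_lines:
        m = CHECK.match(line)
        if m:
            _, value, name = m.groups()
            results.append("P" if stored.get(name) == value else "F")
            continue
        m = STORE.search(line)
        if m:
            stored[m.group(1)] = m.group(2)   # store, but ignore in comparison
        results.append("P")                   # stored/masked lines pass here
    return results

print(process(["Login NAME=M Jones", "Logout J Smith =NAME?"]))  # ['P', 'F']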


Final Advice and Direction

Managing Successful Test Automation

1 Managing, 2 Architecture, 3 Pre- and Post, 4 Scripting, 5 Comparison, 6 Advice

6.1

What next?

● we have looked at a number of ideas about test automation today
● what is your situation?
  ■ what are the most important things for you now?
  ■ where do you want to go?
  ■ how will you get there?

● make a start on your test automation strategy now
  ■ adapt it to your own situation tomorrow

6.2


Strategy exercise

● your automation strategy / action plan
  ■ review the exercises
    ► automation objectives (3rd page)
    ► responsibility
    ► measurement
    ► architecture

● your strategy
  ■ identify the top 3 changes you want to make to your automation
  ■ note your plans now

6.3

Dealing with high level management

● management support
  ■ building good automation takes time and effort
  ■ set realistic expectations

● benefits and ROI
  ■ make benefits visible (charts on the walls)
  ■ metrics for automation
    ► to justify it, compare to manual test costs over iterations
    ► automation health: ongoing continuous improvement
      – build cost, maintenance cost, failure analysis cost
      – coverage of the system tested

6.4


Dealing with developers

● a critical aspect of successful automation
  ■ automation is development
    ► may need help from developers
    ► automation needs development standards to work
      – testability is critical for automatability
      – why should they work to new standards if there is "nothing in it for them"?

  ■ seek ways to cooperate and help each other
    ► run tests for them
      – in different environments
      – rapid feedback from smoke tests
    ► help them design better tests?

6.5

Standards and technical factors

● standards for the testware architecture
  ■ where to put things
  ■ what to name things
  ■ how to do things
    ► but allow exceptions if needed

● new technology can be great
  ■ but only if the context is appropriate for it (e.g. Model-Based Testing)

● use automation "outside the box"

6.6


On-going automation

● you are never finished
  ■ don’t "stand still" – schedule regular review and refactoring of the automation
  ■ change tools and hardware when needed
  ■ re-structure if your current approach is causing problems

● regular "pruning" of tests
  ■ don’t have "tenured" test suites
    ► check for overlap and removed features
    ► each test should earn its place

6.7

Information and web sites

■ www.TestAutomationPatterns.org
■ www.AutomatedTestingInstitute.com
  ► TestKIT Conference, Washington DC

■ tool information
  ► commercial and open source: http://testertools.com
  ► open source tools
    – www.opensourcetesting.org
    – http://sourceforge.net
    – http://riceconsulting.com (search on "cheap and free tools")
  ► LinkedIn group: QA Automation Architect

■ www.ISTQB.org
  ► Expert level in Test Automation (in progress)

6.8


Managing Successful Test Automation

Summary: successful test automation

• assigned responsibility for automation tasks
• realistic, measured objectives (testing ≠ automation)
• technical factors – architecture, levels of abstraction, DSTL, scripting, comparison, pre- and post-processing
• management support, ROI, continuous improvement

Free book!

Note in your Summary Sheet the key points for you from this session.

6.9

and now …

● any final questions / comments?
● please evaluate this tutorial (against its objectives)
  ■ high mark – thanks very much!
  ■ low mark – please explain (so we can improve)
    ► (see Session 0 for the tutorial objectives, what is covered and what is intentionally excluded)

● email Dot for the ROI spreadsheet / tool list ([email protected])
● email Mark for automation consultancy & questions ([email protected])

6.10


Any more questions? Please email me!

[email protected]

Thank you for coming today. I hope this will be useful for you.

All the best with your automation!

Managing Successful Test Automation

6.11