
MAL12: QAT and Automated Testing of Modern Apps

Andy Tinkham, Principal Lead Consultant, QAT

Level: Intermediate

Personal Information

• http://www.testerthoughts.com

• http://www.twitter.com/andytinkham

• andyt@magenic.com

• Office hours: http://ohours.org/andytinkham

• Slides available at http://slideshare.net/andytinkham

Previously, on Modern Apps Live…

Quality is the team’s job, not just the testers’

Team needs info to make decisions

Testing activities provide information

Assign testing activities to people best able to provide that information

Yesterday, we talked about the types of information that developers can provide. Today, we’ll talk about the information that testers can best provide.

Tester-provided Information

Story of the Product

• How it can work

• How it fails

• How it might fail in ways that matter to our clients

Story of the Testing

• What we’ve done & seen

• Where we haven’t gone

• Where we won’t be going

• What problems did we find? To whom? Why?

Story of the Testing Quality

• Why we did the testing we did

• Why this is (or isn’t) good enough

• What we need to get more information

• Risks and costs

• Impediments to testing

Hat tip to Michael Bolton, DevelopSense: http://www.developsense.com/blog/2012/02/braiding-the-stories/

Ultimately…

Each bit of each story leads to a reduction in risk

• Risks have different levels of importance

– Potential impact if the problem occurs

– Likelihood of the problem occurring

• Tasks address different amounts of risk

• Schedules slip – we might not get to do all the testing we want

• Use risk to prioritize testing efforts – risk-based testing!
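The impact-times-likelihood framing above can be sketched as a small scoring model. This is a minimal illustration, not a tool from the talk; the 1–5 scales and the example numbers are assumptions.

```python
# Sketch of risk-based prioritization: score = impact x likelihood.
# The 1-5 scales and example figures are illustrative assumptions.

def risk_score(risk: dict) -> int:
    """Combine potential impact and likelihood into one number."""
    return risk["impact"] * risk["likelihood"]

def prioritize(risks: list) -> list:
    """Order risks so testing starts with the biggest ones."""
    return sorted(risks, key=risk_score, reverse=True)

# Example figures for the MyVote risks (numbers are made up):
myvote_risks = [
    {"name": "Unable to create a poll", "impact": 5, "likelihood": 3},
    {"name": "Auth service down", "impact": 4, "likelihood": 2},
    {"name": "Analysis looks at wrong answers", "impact": 5, "likelihood": 2},
]

for r in prioritize(myvote_risks):
    print(r["name"], risk_score(r))
```

When the schedule slips, testing simply stops partway down this ordered list, so the biggest risks have already been covered.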

Risk-based testing

Test for the biggest risks first

Use risks as inputs to test design

Identifying risk

Technical

• Areas where the path to develop is unclear

• Explicit assumptions

• Foundational architecture pieces

• Areas where problems have occurred historically

Business

• Key functionality

• Differentiators used by sales

• Areas where problems have occurred historically

MyVote Example Risks

• Schedule risks

• Unable to create a poll

• Can’t access previously created polls

• Problems when authentication service is down

• Analysis looks at wrong set of answers

Translating risk to test cases

• After prioritizing risks, begin by asking “How can I tell if this problem occurs?”

• Explore wording to see if additional meanings appear

– Analysis looks at the WRONG set of answers:

• A subset of the right answers

• A superset of the right answers plus extras

• No overlap with the right answers
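One way to turn those readings of “wrong set of answers” into a concrete check is to classify how an actual answer set differs from the expected one. The function and names below are illustrative, not MyVote’s real API; a “partial overlap” case is added only to make the classification total.

```python
# Sketch: classify how an analysis result set can be "wrong" relative
# to the expected answers, matching the subset / superset / no-overlap
# readings above. Names are illustrative, not MyVote's real API.

def classify_answer_set(expected: set, actual: set) -> str:
    if actual == expected:
        return "correct"
    if actual < expected:
        return "subset"              # missing some right answers
    if actual > expected:
        return "superset"            # right answers plus extras
    if actual & expected:
        return "partial overlap"     # mix of right and wrong answers
    return "no overlap"              # entirely wrong answers

print(classify_answer_set({"a", "b"}, {"a"}))             # subset
print(classify_answer_set({"a", "b"}, {"a", "b", "c"}))   # superset
print(classify_answer_set({"a", "b"}, {"x", "y"}))        # no overlap
```

Each category suggests a different test: seed data where only a subset, superset, or disjoint result would reveal the bug.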

Running Tests

• Pick tests

– Importance of risks they address

– Value of information running a test will provide

• As you progress, feed learning back into the process

– Reprioritize risks

– Revalue information

– Create new tests

– Skip tests

How can devs help the test team?

• Communicate risks that you see to testers

– Where are you less sure about how to develop functionality?

– What questions did you have while you were designing & developing?

– What assumptions did you make while coding?

• Review risks identified

– More or less serious impacts?

– Different likelihoods?

– Missing risks?

Risk-based testing in MyVote

• Having risks identified meant that we could deal with the time crunch

• Not everything got tested – but we used the time we did have as effectively as we could

• We were able to treat some risks as covered by dev testing and focus more on other risks

Risks give us a lot of insight into what could go wrong, but how do we address the things we can’t predict?

Session-based Exploratory Testing

• Time-boxed testing on a focused topic

• Not following pre-designed test cases

• Learning from previous tests guides next steps

Session-Based Approach

• Use a single focus as a charter

– Check all the menus

– Investigate the order sorting functionality

• Set up an uninterrupted time box (generally 1–2 hours)

• Work through test ideas, continually integrating your learning as you design new tests

During the session

• Keep a record of what you do

– Written notes

– VS 2012 exploratory testing session recording

– Rapid Reporter (http://testing.gershon.info/reporter/)

• Keep lists

– Bugs found

– Additional charters to cover / things to investigate

– Additional test ideas for this charter
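The records and lists above can be sketched as a tiny data structure, in the spirit of tools like Rapid Reporter. The `Session` class and its field names are illustrative assumptions, not any real tool’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Sketch of session recordkeeping: timestamped notes plus the three
# running lists kept during a session. Names are illustrative.

@dataclass
class Session:
    charter: str
    notes: list = field(default_factory=list)         # what we did & saw
    bugs: list = field(default_factory=list)          # bugs found
    new_charters: list = field(default_factory=list)  # things to investigate later
    test_ideas: list = field(default_factory=list)    # more ideas for this charter

    def note(self, text: str) -> None:
        """Record a written note with a timestamp."""
        self.notes.append((datetime.now().isoformat(timespec="seconds"), text))

s = Session(charter="Investigate the order sorting functionality")
s.note("Sorting by date descending; ties appear unordered")
s.bugs.append("Tie-breaking in sort order is unstable")
s.new_charters.append("Check sorting with locale-specific characters")
```

The same record then feeds the debrief: notes answer “what happened”, bugs answer “what was achieved”, and the new-charter list answers “what still needs to be done”.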

After the session

• Debrief session for knowledge dissemination

– [Past] What happened during the session?

– [Results] What was achieved during the session?

– [Obstacles] What got in the way of good testing?

– [Outlook] What still needs to be done?

– [Feelings] How does the tester feel about all this?

Planned Test Execution in MyVote

• Each feature had a work item per platform

• Work items were assigned to testers

• Each work item became a charter

• Debriefed after testing to share information & review for additional session needs

Reality for MyVote

• Scaled back to just a small number of sessions due to time constraints

• Each session focused on the app on a given platform

• No debriefing

• Results: not as strong a testing effort as hoped

Automated Testing

• Any test can be more or less automated – it doesn’t have to be fully automated

• The key is to approach automation from a task perspective, not a test perspective

• Possible tasks

– 1 or more tests

– 1 or more test steps

– 1 or more supporting tasks

Choosing tasks to automate

• Choose tasks that take advantage of the computer’s strengths rather than just automating existing human-focused tests

• Plan automation in conjunction with the rest of test planning

• Look for access points below the UI
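A below-UI access point often pays off first for supporting tasks, e.g. seeding test data through an API-style client instead of clicking through screens. This is a hedged sketch: the poll-client interface below is hypothetical, not MyVote’s real API, and the fake client stands in for a real HTTP client.

```python
# Sketch of automating a supporting task below the UI: seed test polls
# through an API-style client rather than driving the app's screens.
# The client interface is hypothetical, not MyVote's real API.

def create_test_polls(client, count: int) -> list:
    """Supporting task: seed `count` polls so testing starts from a
    realistic state without clicking through the UI each time."""
    return [client.create_poll(f"Test poll {i}", ["Yes", "No"])
            for i in range(count)]

class FakePollClient:
    """In-memory stand-in for a real HTTP client, for illustration."""
    def __init__(self):
        self.polls = []

    def create_poll(self, title, options):
        self.polls.append({"title": title, "options": options})
        return len(self.polls)  # poll id

client = FakePollClient()
ids = create_test_polls(client, 5)
print(len(client.polls))  # 5 polls created, no UI involved
```

The human tester then spends the session exploring, not setting up, and the same task can run before automated checks too.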

Potential pitfalls

• Automation takes time

– Creation time

– Maintenance time

• Easy to shift focus from automation that adds value & provides useful information to what’s merely “cool to automate”

How devs can help with automation

• Share unit testing infrastructure and

architecture components

• Code reviews of automation code

• Pair with testers

Non-Functional Testing

• Not all important test cases focus on functionality

• When identifying risks, think about things like the impacts of slow performance, lack of usability, and lack of security

• Don’t leave non-functional testing until the end – build in monitoring & tests from the start
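Building performance checks in from the start can be as simple as timing an operation against a budget and flagging the breach. A minimal sketch, assuming a made-up 200 ms default budget (an illustrative number, not a requirement from the talk):

```python
import time

# Sketch of an always-on performance check: run an operation, time it,
# and report whether it stayed inside a budget. The 200 ms default is
# an illustrative assumption, not a requirement from the talk.

def within_budget(operation, budget_seconds: float = 0.2):
    """Run `operation`; return its result and whether it met the budget."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed <= budget_seconds

result, ok = within_budget(lambda: sum(range(1000)))
print(result, ok)
```

Wiring checks like this into tests from day one turns slow performance into a visible failure long before a dedicated performance-testing phase.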



URLs from the discussion after the talk

• http://testingeducation.org – free video lectures & slides on software testing for a college-level black-box software testing course

• http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf – Elisabeth Hendrickson’s Test Heuristics Cheat Sheet with lots of test ideas
