FUNCTIONAL TEST AUTOMATION APPROACH

By Kishore Abid


 

Of every four dollars spent on software development, at least one dollar goes towards testing. Of this dollar, at least 70 cents go towards functional testing. Within functional testing, test case design is the key determinant of the quality of testing, but it is test execution that is the single largest consumer of effort. This effort multiplies rapidly with the number of configurations to be tested, is required in cycles, and is repetitive to a significant extent. Companies are constantly looking for ways to ‘rationalize’ this cost and enable comprehensive testing, rather than having to make risky choices about which platforms to test thoroughly. Test automation offers an elegant way to tackle this challenge, though it is by no means an instant panacea. This explains why worldwide spending on test automation is growing 10% faster than spending on testing as a whole.

Test automation refers to the development and usage of tools to determine the success or failure of pre‐specified test cases against the Application Under Test (AUT), without human input. The primary objective of test automation is to reduce repetitive manual tasks.

Several commercially available test automation tools allow the recording of user actions taken on the AUT. The recording session generates scripts that can later be replayed without human input.

This presentation attempts to outline the what, why, how and when of test automation. It focuses more on automation for the QA team than on automation for the Development team.

INTRODUCTION

An automation test suite can be built by simply recording various test cases. However, it is often possible to significantly enhance the reusability and maintainability of such suites by developing frameworks for automation.

Reusability

Consider an AUT that has a combo box with five possible values. Further, assume there are five separate test cases, each using a different value but identical in other respects. If we use the ‘record and replay’ approach, we would have to record five different test cases. Instead, we can abstract the combo value as an argument to be passed to the script, and then call the same script five times with different arguments.
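The idea above can be sketched as a single parameterized script. This is a minimal illustration, not tied to any particular tool: `select_combo_value()` is a hypothetical stand-in for the automation tool's API call, and `run_order_test()` stands in for the recorded test case.

```python
# Sketch: abstract the combo-box value as a script argument instead of
# recording five near-identical test cases. select_combo_value() is a
# hypothetical helper standing in for the automation tool's API.

def select_combo_value(value):
    """Stand-in for the tool call that picks `value` in the combo box."""
    return f"selected:{value}"

def run_order_test(combo_value):
    """One reusable script; the combo value is the only variation."""
    result = select_combo_value(combo_value)
    # ... remaining recorded steps, identical for every value ...
    return result

# One script, five data points -- instead of five recorded scripts.
results = [run_order_test(v) for v in ["A", "B", "C", "D", "E"]]
```

Adding a sixth combo value then means adding one entry to the data list, not recording a sixth script.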

There are certain user actions – log on, for instance – that might be common to several test cases. Again, these action sequences can be abstracted out and reused in several automated test cases, rather than recording the same sequence multiple times. Short action sequences can also be composed into longer ones.
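Composing short sequences into longer ones might look like the sketch below. The step names (`log_on`, `create_order`) and the recorded actions they append are illustrative stand-ins for real recorded tool steps.

```python
# Sketch: common action sequences (e.g. log-on) factored into functions
# and composed into longer flows. All step names are hypothetical
# stand-ins for recorded tool actions.

def log_on(actions, user):
    """Short, reusable sequence shared by many test cases."""
    actions += ["open_login_page", f"type_user:{user}", "click_submit"]

def create_order(actions, item):
    """Longer flow composed from the shorter one plus its own steps."""
    log_on(actions, "tester")  # reuse instead of re-recording
    actions += [f"add_item:{item}", "click_checkout"]

steps = []
create_order(steps, "widget")
```

If the log-on flow changes, only `log_on()` needs updating; every composed test case picks up the fix.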

AUTOMATION FRAMEWORKS

Maintainability

Consider a website (the AUT) that has a set of commonly needed links appearing in several web pages. Further assume that we have automated the testing of this website.

In the absence of any framework, the identification of the links will be arbitrary: it could be indexed on the current page layout, for instance. In a future version of the website, these indices could change – say, due to the addition of other links. In this case, each test script involving a web page that has this set of links will be broken, and will need rectification. This could mean a significant rework effort, as there could be many such pages – perhaps the whole website.

Instead, if named references are provided to these links and only these names are referred to by the test scripts, then changes will be needed only to (re)map the names to the actual links.

Additional benefits of a framework
 A framework makes it easy to provide exception handling and wait mechanisms, both of which further increase robustness.
 A well designed set of utilities in the framework makes it easy to create a dashboard of the test results.
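A common way to realize such named references is an "object map": a single table from stable names to raw locators. The sketch below assumes XPath-style locator strings, which are purely illustrative.

```python
# Sketch of an object map: test scripts refer to links by stable names,
# and only this one mapping changes when the page layout (and hence the
# raw locators) changes. The locator strings are hypothetical examples.

OBJECT_MAP = {
    "home_link":    "//nav/a[1]",   # remap here when the layout shifts
    "contact_link": "//nav/a[4]",
}

def click_link(name):
    """Scripts call this by name and never hard-code a locator."""
    locator = OBJECT_MAP[name]
    # ... the real tool call would click the element at `locator` ...
    return f"clicked {locator}"
```

When a new link pushes the contact link to index 5, only the `OBJECT_MAP` entry changes; every script that calls `click_link("contact_link")` keeps working.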

Conceptually, a framework eliminates ‘hard coding’ and provides ‘modularization’. Based on our experience, we estimate that in the long run, up to 50% of the script development and maintenance effort can be saved by investing in creating an automation framework.

CONTD…

Automation frameworks can broadly be classified into three types:

Data driven: In this approach, variables are used to hold test data. At runtime, these variables could be loaded from an external data source. This approach reduces the problem of hard coding. Note that identification of GUI elements is still hard coded; for instance, the script might contain instructions that effectively mean: Select the image whose index number is 3.
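A minimal data-driven sketch: the test data lives outside the script logic and is loaded into variables at runtime. The CSV content, column names and the stubbed `run_login_test()` are illustrative assumptions, not from the text.

```python
# Sketch of the data-driven approach: test data is external and loaded
# at runtime; the script logic is written once. The CSV content stands
# in for a real external data file.
import csv
import io

TEST_DATA = "username,expected\nalice,ok\nbob,locked\n"

def run_login_test(username):
    # GUI identification would still be hard coded in this approach
    # (e.g. "type into the 3rd text box"); here the AUT is stubbed.
    return "ok" if username == "alice" else "locked"

results = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    results.append(run_login_test(row["username"]) == row["expected"])
```

Adding test cases means adding rows to the data file, with no script change.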

Keyword driven: In this approach, the input, user actions and expected output are encoded using keywords that are typically independent of the AUT. A test case is encoded as a record composed of these keywords. Test suites composed of such test cases are typically stored in tables. As part of the framework development, scripts are written to translate these records to a specific AUT.

This approach reduces the problem of hard coding and also provides modularization.
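The keyword-driven idea can be sketched as a table of records plus a small interpreter that translates each keyword into a concrete action. The keyword names and the action stubs below are illustrative, not a real tool's vocabulary.

```python
# Sketch of a keyword-driven framework: each test step is a record of
# AUT-independent keywords; a small interpreter maps keywords to
# concrete actions. Keywords and action stubs are illustrative.

def do_open(target):   return f"open {target}"
def do_type(target):   return f"type into {target}"
def do_verify(target): return f"verify {target}"

KEYWORDS = {"OPEN": do_open, "TYPE": do_type, "VERIFY": do_verify}

# A test case stored as a table of (keyword, target) records.
TEST_CASE = [
    ("OPEN",   "login page"),
    ("TYPE",   "user field"),
    ("VERIFY", "welcome banner"),
]

# The interpreter: translate each record into an action on the AUT.
log = [KEYWORDS[kw](target) for kw, target in TEST_CASE]
```

Because the table speaks only in keywords, porting the suite to a different AUT (or tool) means rewriting the handler functions, not the test cases.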

Hybrid: This approach combines the two approaches outlined above, and brings in benefits derived from both. Over a period of time, hybrid frameworks have emerged as the de facto standard for automation requirements. Based on our experience, we recommend serious evaluation of the hybrid approach during framework design.

TYPES OF FRAMEWORKS

TEST AUTOMATION LIFE CYCLE (TALC)

Broadly, the Test Automation Life Cycle can be depicted as follows. It should not be assumed that efforts (person weeks) will be proportional to the schedules (calendar weeks) indicated.

RELEASE CYCLE

Month                     1  2  3  4  5  6  7  8  9 10 11 12
Release                  <---------- N ---------><-- N+1 --->
Study                     X  X
Framework design                X
Scripting                          X  X
Execution & Reporting                 X  X  X  X  X  X  X  X
Script Maintenance                                   X  X  X
Scripting (release N+1)                       X  X  X  X

The automation project should ideally be divided into four distinct phases as described below.
 Study
 Scripting
 Execution and reporting
 Maintenance

It is assumed that test case design is already done, and is not treated as part of the automation effort.

Identification of a few (could be just two) test cases that are fairly complex

Identification of candidate tools. Several (say, six) tools may be compared and a few (say, two or three) may be tagged as candidates. This short‐listing should be based on multiple criteria, some of them being support for the AUT’s underlying technology, scripting languages, cost, etc.

POC (Proof of Concept) automation of the selected test cases using the candidate tools. The main intent is to discover problems that a particular automation tool may have with the AUT, and solutions to those problems. At this point, the focus is not on elegance in scripting, script reuse, etc.

Broad assessment of the long term returns from the automation initiative

If the above assessment confirms sponsorship for the automation initiative, then recommend the tool to be used, and the test cases to be taken up in the first automation cycle.

STUDY

Deliverables from this phase
 Broad assessment of the long term returns from automation
 Recommendation on the tool to use
 List of test cases to be taken up in the first automation cycle
 Data for estimation of subsequent phases

Framework Design

This phase is similar to the high level design phase in the SDLC (Software Development Lifecycle) and involves deciding the type of framework to create, how to orchestrate the execution, and so on.

Deliverables from this phase
 Architecture for the framework
 High level flowchart for the way test case execution will be orchestrated
 Format for reporting test results
 Test management strategy, if not already decided
 List of the main keywords and functions that will be required

CONTD…

The scripting phase is similar to the construction (coding) and unit testing phase in the SDLC. It includes verification to ensure that an automatically executed test case returns the exact same result as the manually executed test case.

Deliverables from this phase
 Automated test cases
 Test result reporting mechanism

In order to precisely estimate the effort required for scripting, it is recommended that the following points are identified for each test case to be automated:

SCRIPTING

Action points: These represent the number of user actions (clicks, selections, etc.) that are required while executing the test case. Conceptually, they are similar to ‘function points’ in application development, but are at a much finer level of granularity.

Verification points: These are of two types:

Checks required for the automation itself to work correctly. For instance, has a link appeared before we attempt to click it?

 Checks that are deliberately inserted to validate the behavior of the AUT. These are conceptually similar to ‘asserts’ used in application development.
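Both kinds of verification points can be sketched in a few lines. The `wait_until()` helper and the `page` dictionary are illustrative stand-ins for a real tool's wait mechanism and a real web page.

```python
# Sketch of the two kinds of verification points: a readiness check the
# automation itself needs, and a deliberate assert on AUT behaviour.
# wait_until() and the page dict are illustrative stand-ins.
import time

def wait_until(condition, timeout=2.0, interval=0.05):
    """Type 1: poll until the AUT is ready (e.g. a link has appeared)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

page = {"links": ["checkout"]}

# Type 1: required for the automation to work -- is the link there yet?
assert wait_until(lambda: "checkout" in page["links"])

# Type 2: deliberate validation of AUT behaviour, like an application assert.
assert len(page["links"]) == 1
```

The first kind guards the script against timing failures; the second kind is what actually tests the AUT.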

A verification point often takes more effort to develop than an action point.

As a guiding indicator, scripting effort is linearly proportional to the number of action points and verification points. The other factors that significantly impact the scripting effort are:

Recording support: While most tools provide support for recording, some don’t (e.g. Unified TestPro). In such cases, higher effort is needed.

Level of reuse: The higher the level of reuse of action and verification points, the lower the effort needed.

Wait points: The more the wait points and the greater their complexity, the higher the effort needed.

AUT(s): The higher the complexity of the AUT, the higher the effort needed. Also, greater effort is typically needed when more than one AUT is involved. The architecture of the AUT also impacts the scripting effort needed (e.g. client‐server versus web).

CONTD…

This phase involves the actual execution of the automated test cases, and reporting the results. This is an ongoing phase and is almost completely automatic.

Deliverable from this phase
 Test results (summary and detailed)

EXECUTION AND REPORTING

Maintenance involves modifying and updating the automated test cases as required, to ensure that they remain true to the intent of the test. Typical reasons necessitating maintenance are:

 A GUI change in the AUT
 Enhancement of the test case

Deliverable from this phase
 Updated / enhanced automated test cases

While good framework design will minimize the need to rework the scripts in response to a change like a GUI enhancement in the AUT, it is usually difficult to eliminate this need completely.

MAINTENANCE

Investment in automation has two key components:
 Investment in the automation tool
 Investment in the effort required for developing, executing and maintaining the automated test suite

Cost of the Automation Tool

Typical single user, perpetual licenses cost in the range of USD 5,000 for a named user license, and USD 10,000 for a floating license. AMCs (annual maintenance contracts) are usually 20% of the license costs. Some of the most successful commercial test automation tools are provided by vendors like HP and IBM.

Cost of the Automation Effort

Estimates discussed here are a broad representation and are based on our experience with test automation. Actual estimates would primarily depend on the AUT, the automation tools and the actual test cases to be automated.

Study would take four to eight person weeks of effort. Framework design would take two to four person weeks of effort.

Scripting: As indicated earlier, we recommend point based estimation. However, if a rough estimate is needed in the early phases of the automation project, then the number of web pages / forms, the controls embedded in them, etc. can be used to arrive at an estimate of the number of test cases, which can then be translated into an effort estimate.

Reporting: Creating the result reporting mechanism would take one person week of effort.
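The figures above can be combined into a back-of-the-envelope first-year cost. The per-person-week rate below is an assumed illustrative value, not from the text; the license cost, AMC rate and effort ranges are taken from the figures just given.

```python
# Back-of-the-envelope first-year cost sketch from the figures above.
# RATE_PER_PW is an assumed illustrative rate, not from the text.

NAMED_LICENSE = 5_000   # USD, named user perpetual license (per the text)
AMC_RATE = 0.20         # AMC at 20% of license cost (per the text)
RATE_PER_PW = 2_000     # USD per person week -- assumption for illustration

# Upper-bound study (8) + framework design (4, midpoint of 2-4 rounded up
# not needed; using 4) + reporting (1); scripting is estimated separately
# via action/verification points.
effort_pw = 8 + 4 + 1

first_year = NAMED_LICENSE * (1 + AMC_RATE) + effort_pw * RATE_PER_PW
```

A real estimate would add the point-based scripting effort and the execution and maintenance effort over the release cycle.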

COST OF TEST AUTOMATION

Shorter test cycles

A ‘test cycle’ refers to a single run of a set of test cases. For instance, a system test cycle means that all system test cases have been run. Automation significantly reduces the time required to execute a test cycle through:

Faster action triggering: Typically, a script can trigger actions on the AUT much faster than a human. However, the actual actions triggered (UI interactions done) during a manual test might be different from those during an automated test due to a limitation of the automation tool.

24x7 replays: Since it is rare for manual testing to be done for more than eight working hours in a day, the fact that automated replays can run continuously offers a significant reduction in the test cycle time.

A 50‐70% reduction in the test cycle time is common.

Saving of manual testers’ time

The saving in a test engineer’s time is almost linearly proportional to the number of test cases automated, and the number of test cycles to be executed.

BENEFITS OF TEST AUTOMATION

Repeatability

In complex test environments (those involving several application components, platforms, environment variables, etc.) human error can creep into manual test execution. Automation ensures 100% repeatability and hence greater predictability in execution.

Enabling non‐functional testing – synergy with other quality tools

With additional tools and effort, it is often possible to configure special runs of the automated test cases in order to perform non‐functional testing, for example:
 Performance testing
 Scalability / load testing
 Memory profiling
 Code coverage and impact analysis

Depending on the project’s priorities, the above benefits can be translated into higher quality, lower costs or lesser time to market.

CONTD…

There is a wide spectrum of tasks that can be automated:
 Test data generation (particularly useful for load testing)
 Unit testing
 Integration testing
 System testing involving a single AUT
 System testing involving multiple AUTs and their integrations
 AUT installation

WHAT CAN BE AUTOMATED

The choice of the tool is often restricted by the technology underlying the AUT.

In web applications, multi‐window test cases are usually difficult to automate. Pop‐ups and single child windows are not a problem.

Integrations between AUTs are sometimes difficult to automate.

CHALLENGES IN AUTOMATION

We recommend the following best practices for automation.

Since automation frameworks are essentially about abstraction, an important set of best practices deals with ensuring loose coupling between:
 The test data and the test scripts
 Test scripts themselves
 The automation framework and the AUT(s)
 The test cases and the automation framework
 The automation framework and the automation tool

AUTOMATION BEST PRACTICES

The entities listed above will essentially have to reference one another to form a complete working test automation system. It is important that these references be through well defined interfaces only.

Keyword names should be carefully chosen, so that human readability is also high. This enables gradual transitioning from manual testing to automated testing.

Avoid duplication in scripts. Any duplication should be investigated to check whether a separate unit (say, a function) can be created.

Verification points should be judiciously inserted into the scripts. In case of test case failure, these points accelerate the process of zeroing in on the reason for the failure.

The development of an automation framework is similar to the development of an application in several respects, and hence should be planned and tracked as a (sub)project in itself. It should be noted that framework creation and test case design are distinct activities (and require different skills).

Simpler test cases should be automated before complex ones. This makes it easy for later scripts to build on earlier ones. There is an exception to this approach, though: fairly complex test cases should be taken up in the study phase, as the objective there is to discover problems in automation (rather than achieving a high level of reuse).

COND…

Test automation offers a promising way to improve quality and productivity in the area of software testing – particularly in products. While manual testing is required and also desired (except perhaps for a product that is purely in sustenance mode), the time and cost required for it can be significantly reduced. Moreover, a part of this saving can be invested in better quality.

Commercial tools and a rapidly growing body of knowledge have led to a reduction in the time needed for monetary returns to be seen, thus accelerating the adoption of test automation in the industry.

We recommend that for all product development, at least the study phase of the test automation lifecycle should be undertaken.

CONCLUSION