
Effective Automated Testing with a DSTL
Martin Gijsen

Test automation architect
© 2009

Abstract: This article will demonstrate how a Domain Specific Test Language (DSTL) can be defined and implemented. The maintenance sensitivity of the testware will be minimized and its maintainability maximized using only a few rules. This results in an easy to use, low maintenance automated test solution. Examples will use the freeware ETA (Essential Test Automation) Framework.

* * *

Introduction

Test automation for functional testing requires an investment, in time as well as in money. To maximize the return on investment, the benefits should be reaped for as long as possible. They should preferably keep flowing for as long as the System Under Test (SUT) is being maintained and tested. While developing into a test automation architect over the course of about ten years, I have found that following certain rules goes a long way in obtaining such effective automated tests. This article will list these rules and explain them using examples. I hope that they will help the reader as they have helped me.

A key feature of the approach that the rules define is the DSTL, or Domain Specific Test Language. A DSTL is an easy to use test language, designed to test a specific system as it evolves over time. It does not (normally) look like a programming language. The DSTL will evolve with the SUT. With a properly designed test automation solution, this is just fine. Note that, although many tools do not allow creating a DSTL in the exact way suggested here, the same rules can be applied in that context and their benefits harvested.

The example being used concerns a hypothetical banking system for private clients. A client can open one or more accounts, deposit or withdraw an amount, and transfer an amount between accounts.

Rule number one: Support tester friendly tests

In order for testers to be both willing and able to actually use a solution for automated testing, it is important to make the solution tester friendly. Although some testers are quite comfortable using the scripting languages that test automation tools often use, they are a minority, and fewer still are good at it, much as few developers are good testers. The most effective way to deal with this fact is to accept it and find a way in which testers can do what they are good at, while others take care of the other relevant tasks. This effectively means that the test analysis part of automated testing should require no programming. It also implies that someone with programming skills will have to take care of the technical part as an automator.

The activities will therefore be divided over two roles, the test analyst and the automator. The test analyst will focus on what to test rather than on the details of how to test it. This requires the analytical skills that testers normally need but no programming skills. This makes automated testing with this approach a pure testing activity. The way tests are written is more formal than usual but not really 'technical.' The automator focuses on the technical side only, making sure that tests run. This does not require in depth knowledge of SUT functionality and is a pure development role, to be assigned to a software engineer.

It might seem that, if a tester is also a fair programmer, taking the technical part out of the testing is less important. This is not the case. To begin with, clearly separating test analysis and test automation is an example of the 'separation of concerns' principle. This is important to avoid making things more complicated than necessary and always a good idea. Also, consider what will happen to the test solution if its author is unable to maintain it any longer, for example after moving to another project or company. This is a serious threat to the continuity of the test solution that has already resulted in a premature end for many automated testing efforts. It is wise not to rely on a particular technical tester, or even the presence of technical testers in general, to write, review and maintain the automated tests. It is also unnecessary. Consider the following test case.

              account     first name  last name  address
open account  1234567890  John        Doe        11 test street, Testburg
open account  1234567891  Jane        Doe        27 test lane, Testville

         account     amount
deposit  1234567890  10000

          amount  from account  to account
transfer  1234    1234567890    1234567891

               account     amount
check balance  1234567890  8766
check balance  1234567891  1234

Before going into the contents of the test case, please consider its form first. It consists of a number of instructions with varying numbers of arguments. The arguments are described in an optional comment line above each instruction. Tests in this column based format are easy to write, review and maintain. Any spreadsheet program can be used to create them. These programs are easy to use and some are even free. And only their most basic features are needed; the program serves as little more than a table editor.

Note that the basic test case is clear just from reading the instructions in the first column. This is one reason why instructions should start with a verb. To make sure they are easy to read, they should not be in CAPITALS or contain characters like underscores.

This is the format that the test engine of the Essential Test Automation Framework supports. This framework is freeware, available to anyone at no cost through www.DeAnalist.nl. Test instructions can be implemented in Java. This format will be used throughout this article. The use of colours is optional and just for improving readability.

Rule number two: Test analyst and automator cooperate

When defining a DSTL, the test analyst, who knows the testing needs for the SUT and will be using the DSTL to write tests, should be in the lead. The automator should check that the proposed instructions can be realized. An experienced automator can also support less experienced test analysts, both with what the instructions look like and with effective use of the test engine functionality. So defining a DSTL is a task on which test analyst and test automator should work together.

The instructions that form the DSTL must be well documented to avoid confusion. The functionality of the instructions as described is what is available to the test analyst to write tests. The document also serves as a requirements document for the automator. Using a template like the one in appendix A helps ensure that all important aspects of the behaviour of an instruction are captured.

Note that, as the SUT evolves, new instructions may be needed and existing instructions may change. Again, both the test analyst and the automator should be involved in defining the new instructions. The document with the instruction definitions must be updated, so that it always reflects the current DSTL.

Rule number three: Use the natural abstraction level

Computers have very little difficulty processing, storing and searching through huge amounts of data very quickly. A human brain does not work that way. Its short term memory can hold only a few things at any one time. So a test case should be brief in order to be understandable to its writer, reviewer and maintainer. The main way to achieve this is by using a high abstraction level for the instructions of the DSTL.

Consider the first few lines from the test case above, repeated here.

              account     first name  last name  address
open account  1234567890  John        Doe        11 test street, Testburg
open account  1234567891  Jane        Doe        27 test lane, Testville

Note that the 'open account' instruction is functional in nature: It describes what must be done rather than how it is done. The instruction may very well be a complex one, accessing multiple screens of the system. But unless such details are essential to understanding the test case, they can and probably should be left implicit.

A natural abstraction level is easily obtained by taking a candidate instruction and asking yourself, possibly repeatedly, 'Why do I do this?' Take entering the account number above into a GUI field, for example. The answer to the question 'why?' might be that it is required to complete the screen. The next question then focusses on why the screen is being filled out, the answer being: to open an account. As the reason that we are opening an account is because the test case asks for it, we are done and decide that 'open account' will be an instruction rather than 'enter account number.'

Rule number four: Avoid irrelevant interface details

If the focus of a test case is on a system interface, it is unavoidable that some details of this interface appear in the test case. In most test cases, however, interface details are not essential to understand what is being tested. Nor is the test case the only place where such details can be specified. Also, interface details in a test case often increase its maintenance sensitivity significantly: It is often these details that change, break the test and make maintenance to the testware necessary. Irrelevant interface details are therefore best hidden in the implementation of the instructions.

Consider once more the 'open account' instruction. Note that it is not called 'use account screen,' 'call account service' or something else that even suggests what kind of interface the instruction addresses. Instructions that have the natural abstraction level, as suggested in rule number three, do not normally refer to the interface in any way, let alone to its details. If a test case does contain interface details that are not essential to understand it, this suggests that the level of abstraction of some instructions can be raised. Doing so will further reduce maintenance sensitivity of the test, making automated testing more effective and a more pleasant activity.

One additional advantage of hiding irrelevant interface details is that the same instruction can be implemented multiple times, in different ways. This is particularly useful if a system has multiple interfaces that support (more or less) the same functionality. An example would be a system that can be accessed both as a web application and through web services. With two implementations of the relevant instructions, the same test can be run against the same system using both interfaces. Another example is when different systems have similar functionality. A final example is when an interface is not yet available but another one is temporarily used instead, for instance when accessing a database directly makes automated testing possible while the application GUI is still being developed.
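
As a sketch of what this can look like in code (the AccountDriver, GuiAccountDriver and WebServiceAccountDriver types below are hypothetical and not part of the ETA Framework), the logic behind an 'open account' instruction can be written against a small driver interface, so that the choice between the GUI and the web service implementation is made in a single place and never shows up in the test cases.

// Hypothetical sketch; the names are illustrative only.
interface AccountDriver {
    void openAccount (String accountNr, String firstName, String lastName, String address);
}

// Drives the 'new account' screen of the web application.
class GuiAccountDriver implements AccountDriver {
    public void openAccount (String accountNr, String firstName, String lastName, String address) {
        // fill out and submit the screen here
    }
}

// Calls the web service interface instead.
class WebServiceAccountDriver implements AccountDriver {
    public void openAccount (String accountNr, String firstName, String lastName, String address) {
        // build and send the 'open account' request here
    }
}

// The instruction logic only knows the driver interface.
class OpenAccountLogic {
    private final AccountDriver driver;

    OpenAccountLogic (final AccountDriver driver) {
        this.driver = driver; // chosen once, e.g. from a configuration constant
    }

    void execute (String accountNr, String firstName, String lastName, String address) {
        driver.openAccount (accountNr, firstName, lastName, address);
    }
}

Registering the instruction with a different driver is then all it takes to run the same test lines against the other interface.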

If interface details do need to be specified in the test, the proper place for them is normally in a configuration section of the test, not in the test cases. Changes to these details are then likely to be restricted to the configuration section and not affect the test cases at all. An example is the URL or IP address to connect to. It can change when moving to a new (test) environment and may be needed in many places. Defining it outside the test cases will avoid having to check all test cases for all uses of it. How to do this is explained under rule number six below.

Rule number five: Avoid irrelevant tooling details

As with interface details, any information about the tooling that is used to perform the test will make test cases harder to understand and more sensitive to maintenance. And as with interface details, there are two places for tooling details that are much more suitable than the test cases. The first is the implementation of the instructions. Putting tooling details here ensures that they will not bother the test analyst at all, but this may not be convenient for configuration items. Such items could be read from other files by the instruction implementation, but separating configuration data from the test cases completely does not make maintenance any easier. The second place, a configuration section of the test, is where configuration items are best placed.

There is one exception to this rule: The test engine that runs the test is also a tool and it does affect the test cases because it determines the format of the whole test. If a different test engine is ever selected, the whole test may have to be rewritten, perhaps including (part of) the instructions. Until a standard for such tools is defined, this is hard to avoid. Fortunately, it is unlikely to become an issue if a good test engine is used (like the one in the ETA Framework).

Rule number six: Give names to values

Many of the things that are most susceptible to change in a test case take the form of a single value, like a URL, an IP address, etc. Having such values appear in many places throughout a test is a maintenance nightmare just waiting to happen. Take for example the case where a URL is used like this in many places in a test.

          url
open URL  http://server17:1001

When the URL that is used changes, every location where it is used must be identified and the URL value updated. This is boring and can be cumbersome. Even worse could be the possibility that some occurrences are missed, which becomes more likely as the task is considered more annoying. It can be quite difficult, for instance, to find the reason for a failing test case when the URL looks correct at first glance but in fact points to an old, outdated but still existing system. The chances of something like this happening increase if the URL itself is not too meaningful to the reader, even more so if several URLs are in use at the same time and they look similar. The URL for the system test environment and for the integration test environment may look alike, for example.

Such issues are easily avoided by assigning a name to a value and referring to it by this name only. Now consider this somewhat longer sample.

                 name                value
define constant  oldSystemTestUrl    http://server07:1001
define constant  systemTestUrl       http://server17:1001
define constant  integrationTestUrl  http://server14:1001
define constant  url                 ?systemTestUrl

          url
open URL  ?url

The 'define constant' instruction is a built-in instruction of the ETA Framework, meant for exactly this kind of situation. It introduces a name in the test that can be referred to anywhere after that point in the test. The way to refer to a constant is by prefixing the name with a question mark, as has been done twice in the above fragment. The first three constant definitions define the available URLs. The last one selects the one that applies at this time. The 'url' constant can and should be used throughout the test. Switching to the integration test URL is as simple as replacing '?systemTestUrl' with '?integrationTestUrl' in one place only. The constant definitions would normally be placed in the configuration section of the test, so all such settings can be maintained together.

Maintenance sensitivity of the test is significantly reduced and its maintainability increased by naming values. Even if a value is used only once in the test, giving it a meaningful name first makes a lot of sense. The readability of the test increases, which makes it easier to understand and thus easier to review and maintain. Applying this rule to our banking test case gives the following version.

                 name          value
define constant  johnsAccount  1234567890
define constant  janesAccount  1234567891

              account        first name  last name  address
open account  ?johnsAccount  John        Doe        11 test street, Testburg
open account  ?janesAccount  Jane        Doe        27 test lane, Testville

         account        amount
deposit  ?johnsAccount  10000

          amount  from account   to account
transfer  1234    ?johnsAccount  ?janesAccount

               account        amount
check balance  ?johnsAccount  8766
check balance  ?janesAccount  1234

All references to the account numbers have now been changed into references to the constants that represent the account numbers. The result is a test case that is easier to interpret than before, and changing an account number only needs to be done in one place.

Conclusion

To get a sustainable test automation solution with no dependence on highly technical testers, focus on:

- maximizing ease of writing, reviewing and maintaining tests, and
- minimizing the amount of maintenance to all testware.

This article has discussed six rules that go a long way in achieving these goals:
1. Support tester friendly tests.
2. Test analyst and automator cooperate.
3. Use the natural abstraction level.
4. Avoid irrelevant interface details.
5. Avoid irrelevant tooling details.
6. Give names to values.

A suitable test engine like the one in the ETA Framework takes care of rule number one and offers additional features that support applying rule number six. The rest requires human intelligence and skills. So effective test automation is entirely feasible.

Comments on this article and questions are welcome at [email protected].

Implementing the DSTL using the ETA Framework

The Essential Test Automation Framework offers many features that make it easy for a Java developer to implement DSTL instructions. The four instructions from the examples have been implemented for your convenience. The source code is available in the ETA Framework package and is also provided in appendix B. The result of running the test with this implementation of the instructions can be found in appendix C.

Appendix A: A template for describing instructions

This template can be used to specify the behaviour of an instruction in detail.

Name and aliases   The names under which the instruction is available.
Description        Describe what the instruction is used for.
Parameters         The parameters and their meaning. Indicate what values are
                   accepted ('yes' or 'no', integers between 17 and 23?) and
                   whether each is mandatory or optional.
Pre-condition      What is required for the instruction to succeed.
Post-condition     What will be true after the instruction succeeds.
Error situations   What can go wrong and what will happen then.
Example            Either an example of the usage of the instruction or a reference to
                   another instruction (with an example that contains this instruction).

The example below describes a built-in instruction of the ETA Framework itself.

Name and aliases   'begin test case' and 'begin testcase'.
Description        Indicates the beginning of a (new) test case.
Parameters         1  The identification of the test case, preferably unique within
                      the test. Optional.
                   2  The test case description. Optional.
Pre-condition      None.
Post-condition     If the previous input line was part of a test case, that test case
                   is closed. A new test case is opened and assigned the next sequence
                   number for the report(s), starting at one.
Error situations   Is not executed but generates an error in a procedure definition.
Example            ...

Appendix B: The demo source code for the ETA Framework

This appendix contains the source code for the four instructions in the example test case, plus the source code for the library that registers these new instructions with the ETA Framework and defines the main() method. These files, together with the .jar files that the framework engine requires to run, form a complete test solution: It runs the sample test. Since the instructions were for a hypothetical system, the logic that actually executes the instructions and addresses the interface(s) of the SUT could not be created. Each instruction simply writes a comment to the test report saying “executed”.

In the source code, the marked sections show:
- that an instruction class derives from org.etaFramework.Instruction,
- how the checkMandatoryArgument() method from the base class is used to validate each instruction argument and to report on invalid ones using a description,
- how org.etaFramework.validation.Validators.cDefaultValidator can be supplied to checkMandatoryArgument() if no validation is required, and
- how a simple check on the values returned by checkMandatoryArgument() ensures that all arguments are indeed valid before the real instruction logic is invoked.

The instruction library defines the main() method and registers the instructions with the test run object. It is also a convenient place to define the validators that the instructions of this library use to validate their arguments.

Note that checking the arguments is not required. It just makes sure that invalid arguments result in a meaningful error message. Not performing such checks means less code to write, but it can also mean spending (much) more time to figure out what is wrong and where the error was introduced.
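
As a small illustration (plain Java, not ETA Framework code, with a made-up amount value), compare what the tester sees with and without such a check: without it, an invalid amount only surfaces later as an exception from somewhere inside the instruction logic; with it, the report names the offending argument directly.

import java.util.regex.Pattern;

class AmountCheckDemo {
    // the same pattern the demo library uses for amounts: digits, optionally
    // followed by a comma and exactly two more digits
    private static final Pattern AMOUNT = Pattern.compile ("\\d+(,\\d\\d)?");

    public static void main (final String[] args) {
        final String amount = "12.34"; // invalid: this DSTL expects a comma, not a dot

        // without a check: the failure happens deep inside the instruction logic
        try {
            final long cents = Long.parseLong (amount.replace (",", ""));
            System.out.println ("deposited " + cents);
        } catch (final NumberFormatException e) {
            System.out.println ("unchecked: " + e);
        }

        // with a check: the problem is reported where it was introduced
        if (!AMOUNT.matcher (amount).matches ()) {
            System.out.println ("invalid value for argument 'amount': " + amount);
        }
    }
}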

The ETA Framework distribution package contains documentation on all of its features, including the ones used in this article.

OpenAccountInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;
import org.etaFramework.validation.Validators;

class OpenAccountInstruction extends Instruction {
    OpenAccountInstruction () {
        super ();
    }

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String accountNr = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAccountNrValidator, "account number");
        final String firstName = checkMandatoryArgument (
            testLine, 1, Validators.cDefaultValidator, "first name");
        final String lastName = checkMandatoryArgument (
            testLine, 2, Validators.cDefaultValidator, "last name");
        final String address = checkMandatoryArgument (
            testLine, 3, Validators.cDefaultValidator, "address");

        // only execute the instruction if all arguments are valid
        if (accountNr != null && firstName != null && lastName != null &&
                address != null) {
            reportComment ("executed");
        }
    }
}

DepositInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;

class DepositInstruction extends Instruction {
    DepositInstruction () {
        super ();
    }

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String accountNr = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAccountNrValidator, "account number");
        final String amount = checkMandatoryArgument (
            testLine, 1, DemoLibrary.cAmountValidator, "amount");

        // only execute the instruction if all arguments are valid
        if (accountNr != null && amount != null) {
            reportComment ("executed");
        }
    }
}

TransferInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;

class TransferInstruction extends Instruction {
    TransferInstruction () {
        super ();
    } // TransferInstruction ()

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String amount = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAmountValidator, "amount");
        final String fromAccountNr = checkMandatoryArgument (
            testLine, 1, DemoLibrary.cAccountNrValidator, "from account number");
        final String toAccountNr = checkMandatoryArgument (
            testLine, 2, DemoLibrary.cAccountNrValidator, "to account number");

        // only execute the instruction if all arguments are valid
        if (amount != null && fromAccountNr != null && toAccountNr != null) {
            reportComment ("executed");
        }
    }
}

CheckBalanceInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;

class CheckBalanceInstruction extends Instruction {
    CheckBalanceInstruction () {
        super ();
    } // CheckBalanceInstruction ()

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String accountNr = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAccountNrValidator, "account number");
        final String amount = checkMandatoryArgument (
            testLine, 1, DemoLibrary.cAmountValidator, "amount");

        // only execute the instruction if all arguments are valid
        if (accountNr != null && amount != null) {
            reportComment ("executed");
        }
    }
}

DemoLibrary.java:

package org.etaFramework.demo1;

import org.etaFramework.Framework;
import org.etaFramework.IInstructionLibrary;
import org.etaFramework.ITestRun;
import org.etaFramework.Options;
import org.etaFramework.validation.IValidator;
import org.etaFramework.validation.PatternBasedValidator;

class DemoLibrary implements IInstructionLibrary {
    public static void main (final String[] args) {
        // create an ITestRun
        final ITestRun testRun = Framework.createTestRun ();

        // register the demo instruction library
        testRun.registerInstructionLibrary (new DemoLibrary ());

        // create an options object
        final Options options = new Options ();

        // parse the arguments
        options.parse (args);

        // run the file specified in the options
        testRun.run (options);
    }

    public boolean registerInstructions (final ITestRun testRun) {
        testRun.registerInstruction ("open account", new OpenAccountInstruction ());
        testRun.registerInstruction ("deposit", new DepositInstruction ());
        testRun.registerInstruction ("transfer", new TransferInstruction ());
        testRun.registerInstruction ("check balance", new CheckBalanceInstruction ());
        return true;
    }

    public void cleanup () {
    }

    static IValidator cAccountNrValidator = new PatternBasedValidator ("\\d+");
    static IValidator cAmountValidator = new PatternBasedValidator ("\\d+(,\\d\\d)?");
}

Appendix C: The test report

The implementation of the instructions that a test uses is all that is needed to run it. Since the source code in appendix B implements all instructions that the example test case uses, the test case can be run. Letting the engine of the Essential Test Automation Framework execute the test case results in the following test report, in the same column based format as the test is in.

The lines from the test are preceded by a line number in the report, so that they are easy to look up in the test. The text in blue indicates comments that were generated by either the framework or by the instructions.

start     Sat Jun 13 12:45:56 CEST 2009
end       Sat Jun 13 12:45:57 CEST 2009
duration  266 milliseconds

entering C:\Users\Martin\Documents\Workspaces\Article\ETA Framework demo.xls:Demo

3                  account     first name  last name  address
4   open account   1234567890  John        Doe        11 test street, Testburg
    executed
5   open account   1234567891  Jane        Doe        27 test lane, Testville
    executed

7            account     amount
8   deposit  1234567890  10000
    executed

10            amount  from account  to account
11  transfer  1234    1234567890    1234567891
    executed

13                 account     amount
14  check balance  1234567890  8766
    executed
15  check balance  1234567891  1234
    executed

leaving C:\Users\Martin\Documents\Workspaces\Article\ETA Framework demo.xls:Demo