
Page 1: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


TESTBEDS 2015 – Lincoln, Nebraska – November 10th

A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques

Domenico Amalfitano, Nicola Amatucci, Anna Rita Fasolino, Porfirio Tramontana

Page 2: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Context and Motivations

Context:
◦ Dynamic online testing techniques
◦ Android GUI testing

Motivation:
◦ Objective and systematic comparison of fully automated testing techniques.

Page 3: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Dynamic online testing techniques

According to Rothermel, these techniques are defined as:
◦ Dynamic event extraction-based GUI test case generation techniques

In these techniques:
◦ Test cases are sequences of one or more GUI events that are automatically fired on the application under test (AUT).
◦ Test cases are executed as soon as they are defined at run-time: there is no clear distinction between the Test Case Generation and Test Case Execution steps.

Page 4: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


A first classification proposal

A preliminary analysis of the literature allows these techniques to be classified into two families:
◦ Random techniques
◦ Active Learning techniques

Page 5: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Random testing techniques

Automatically exercise the subject applications by firing pseudo-random sequences of events through their GUI.
◦ Usually exploited to perform stress testing of the AUT.
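As a rough illustration, a minimal sketch of such a pseudo-random event-firing loop is shown below; the driver object and its methods (current_gui, fireable_events, fire, restart_app) are hypothetical placeholders for a tool-specific automation layer, not part of the techniques discussed here.

```python
import random

# Minimal sketch of a pseudo-random GUI testing loop; the driver API is assumed.
def random_gui_testing(driver, max_events, seed=None):
    rng = random.Random(seed)                            # reproducible pseudo-random choices
    for _ in range(max_events):
        events = driver.current_gui().fireable_events    # hypothetical: events enabled on the current GUI
        if not events:
            driver.restart_app()                         # hypothetical: recover when no event can be fired
            continue
        driver.fire(rng.choice(events))                  # fire one pseudo-randomly chosen event
```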

Page 6: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Active Learning testing techniques

Combine GUI testing and model learning techniques: they try to learn a model of the GUI and generate user event sequences on the basis of this model.
◦ Usually exploit systematic GUI exploration strategies, such as ones emulating well-known graph exploration algorithms like breadth-first search (BFS) and depth-first search (DFS).

Page 7: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Performance of dynamic testing techniques

Intuitively, the performance of these techniques may be influenced by different choices, such as:
◦ When should the process be stopped?
◦ How is the next sequence of events to be fired selected?
◦ In the case of Active Learning techniques, when can two GUIs be considered equivalent?
◦ Etc.

A way to compare the performance of different techniques in a systematic manner is needed.

Page 8: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Proposed approach

The approach is based on a generalized algorithm that abstracts the key aspects of dynamic GUI testing techniques.

The algorithm provides a conceptual framework that can be used to define and compare different GUI testing techniques.

Page 9: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


The proposed generic GUI testing technique

Page 10: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


The proposed generic GUI testing technique

The algorithm is iterative.

At each iteration, the termination condition is evaluated on the basis of a given termination criterion.

The algorithm requires a set of parameters.

Page 11: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


The proposed generic GUI testing technique

The description strategy parameter mainly determines how the descriptions of two or more user interfaces must be compared in order to understand whether they can be considered equivalent or not.

An abstraction of the current user interface is performed and the learned GUI model is updated.

Page 12: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


The proposed generic GUI testing technique

A sequence of user events that can be executed on the current GUI is planned according to the scheduling strategy parameter.

Page 13: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


The proposed generic GUI testing technique

The planned sequence of events is actually executed on the current GUI. How these events are fired is determined by the execution strategy parameter.
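Putting the previous steps together, a minimal sketch of the generalized loop could look as follows; GuiModel, the driver API and the strategy parameters are illustrative stand-ins, not the actual implementation presented in the paper.

```python
class GuiModel:
    """Toy stand-in for the learned GUI model: a list of abstract GUI descriptions."""
    def __init__(self):
        self.states = []                  # abstract descriptions already encountered
        self.fired_events = 0             # bookkeeping usable by termination criteria
        self.new_in_last_iteration = True

    def update(self, gui_description, equivalent):
        """Add the description unless an equivalent one is already known."""
        is_new = not any(equivalent(gui_description, known) for known in self.states)
        if is_new:
            self.states.append(gui_description)
        self.new_in_last_iteration = is_new


def generic_gui_testing(driver, describe, equivalent, schedule, execute, terminated):
    """Generalized online GUI testing loop, parameterized by the strategies
    named in the slides (description, scheduling, execution, termination)."""
    model = GuiModel()
    while True:
        gui = driver.current_gui()                # hypothetical driver API
        description = describe(gui)               # description strategy: abstract the GUI
        model.update(description, equivalent)     # GUI equivalence + model update
        if terminated(model):                     # termination criterion
            break
        events = schedule(model, gui)             # scheduling strategy: plan the next events
        execute(driver, events)                   # execution strategy: fire them on the AUT
        model.fired_events += len(events)
    return model
```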

Page 14: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Testing techniques as instances of the proposed algorithm

The generalized algorithm can be used to describe testing techniques that vary as the values of the algorithm parameters vary.
◦ Instances of the proposed algorithm are dynamic GUI testing techniques.
◦ For a given instance, each parameter assumes a specific value (see the sketch below).
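Purely as an illustration, an instance can be obtained by binding each parameter of the earlier sketch to a concrete value; all names below (AndroidDriver, breadth_first_schedule, the apk file name, and so on) are hypothetical.

```python
# Hypothetical instance: description strategy A, BF scheduling, Nmg-style termination.
model = generic_gui_testing(
    driver=AndroidDriver(apk="app-under-test.apk"),      # assumed driver
    describe=lambda gui: gui.activity_name,              # A: Activity class name only
    equivalent=lambda d1, d2: d1 == d2,                  # same Activity => equivalent GUIs
    schedule=breadth_first_schedule,                     # BF scheduling strategy
    execute=lambda drv, events: [drv.fire(e) for e in events],   # fire events sequentially
    terminated=lambda m: not m.new_in_last_iteration,    # simplified Nmg: stop when nothing new is found
)
```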

Page 15: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


The algorithm as a systematic comparison framework

The algorithm can be exploited as a framework for the systematic comparison of different GUI testing techniques.
◦ It can be used to understand how the values assumed by one of the parameters influence the performance of the technique, by varying one dimension while fixing the other ones.

Page 16: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Evaluation

We showed how the conceptual framework can be exploited to define different fully automated testing techniques and systematically compare their performance.

We intended to understand:
1. How much the choice of the parameters influences the performance of fully automated testing techniques.
2. How much the choice of the parameters influences the models inferred by Active Learning testing techniques.

Page 17: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Subject and metrics

As subject application we considered the Trolly Android application.

Performance was evaluated in terms of:
◦ Test adequacy: covered LOCs and LOC coverage %
◦ Execution cost: # of fired events

The complexity of the inferred GUI trees was evaluated in terms of:
◦ # of nodes, # of edges, # of leaves, and depth.

Page 18: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Framework Configurations

We defined different values of each parameter of the algorithm to implement different fully automated testing techniques.
◦ A single value was used for the precondition and the execution strategy.

Page 19: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Framework Configurations cont’d

Three values of Termination Criterion (sketched below):
◦ Nmg: all the visited GUIs have already been encountered.
◦ MaxEv: a given maximum number of events has been executed.
◦ Sat: a given number of parallel executions of the algorithm cover the same set of LOCs.
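The first two criteria can be read, in simplified form, against the toy GuiModel of the earlier sketch; the Sat criterion needs coverage data from parallel executions and is not sketched here.

```python
def max_events_criterion(max_events):
    """MaxEv: stop once a given maximum number of events has been fired."""
    return lambda model: model.fired_events >= max_events

def no_new_gui_criterion():
    """Nmg (simplified): stop when the last visited GUI had already been encountered."""
    return lambda model: not model.new_in_last_iteration
```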

Page 20: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Framework Configurations cont’d

Four values of Description Strategy (sketched in code after the list):

◦ A
  Description: the current GUI is characterized just by the name of the Activity class that renders it.
  GUI Equivalence: two GUIs are considered equivalent if they are rendered by the same Activity class.
◦ OW
  Description: the current GUI is characterized by the types of the widgets it includes.
  GUI Equivalence: two GUIs are considered equivalent if they have identical widget descriptions.
◦ WFV
  Description: like OW, but widgets are also described in terms of all their attributes.
  GUI Equivalence: two GUIs are considered equivalent if they have identical widget descriptions.
◦ N
  Description: -
  GUI Equivalence: this strategy assumes that two GUIs are never equivalent.
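These strategies can be read as functions mapping a GUI to an abstract description, with equivalence reducing to equality of descriptions (except for N); the Gui/Widget fields used below are assumptions made for illustration.

```python
def describe_activity(gui):
    """A: the GUI is characterized only by the Activity class that renders it."""
    return gui.activity_name

def describe_widget_types(gui):
    """OW: the GUI is characterized by the types of the widgets it includes."""
    return tuple(sorted(w.widget_type for w in gui.widgets))

def describe_widget_values(gui):
    """WFV: like OW, but widgets are also described by all their attributes."""
    return tuple(sorted((w.widget_type, tuple(sorted(w.attributes.items())))
                        for w in gui.widgets))

# For A, OW and WFV two GUIs are equivalent when their descriptions are identical;
# the N strategy instead treats every GUI as new.
equivalent_by_description = lambda d1, d2: d1 == d2
never_equivalent = lambda d1, d2: False
```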

Page 21: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Example of OW and WFV

The two GUIs are considered equivalent according to OW.

The two GUIs are not considered equivalent according to WFV.

Page 22: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Framework Configurations cont’d

Three values of Scheduling Strategy (see the sketch below):
◦ BF: breadth-first GUI navigation
◦ DF: depth-first GUI navigation
◦ R: uniform random GUI navigation
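One way to see the difference among the three policies is how they pick the next pending event from an exploration frontier; the sketch below is deliberately simplified (it ignores, for example, navigating back to the GUI that owns the chosen event) and its data structures are assumptions.

```python
import random
from collections import deque

def make_scheduler(policy, seed=None):
    """Return a scheduling strategy that picks the next pending event
    breadth-first (BF), depth-first (DF) or uniformly at random (R)."""
    frontier = deque()
    rng = random.Random(seed)

    def schedule(model, gui):
        frontier.extend(gui.fireable_events)     # hypothetical: events enabled on the current GUI
        if not frontier:
            return []
        if policy == "BF":
            return [frontier.popleft()]          # oldest pending event first
        if policy == "DF":
            return [frontier.pop()]              # most recently discovered event first
        chosen = rng.choice(list(frontier))      # R: uniform random choice
        frontier.remove(chosen)
        return [chosen]

    return schedule
```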

Page 23: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Value Combinations

By combining some of the values, 9 fully automated testing techniques were defined and implemented. Some combinations were not meaningful or not applicable:
◦ The Nmg termination criterion cannot be exploited in conjunction with the N description strategy.
◦ The Sat termination criterion can be exploited only in conjunction with the R scheduling strategy.

Page 24: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Performance Results

Page 25: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Performance Results

The Random technique obtained better test adequacy results than the Active Learning ones.

Page 26: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Performance Results

The description strategies affect the performance of the Active Learning testing techniques.
(Compare T1 – T2 – T3 and T4 – T5 – T6.)

Page 27: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Performance Results

BF and DF do not affect the performance of the Active Learning testing techniques.
(Compare T1 – T4, T2 – T5 and T3 – T6.)

Page 28: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


GUI Tree Complexity Results

Complexity is affected mainly by the description strategies.
(Compare T1 – T2 – T3 and T4 – T5 – T6.)

Page 29: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


About inferred GUI trees

Techniques T1 (based on BF) and T4 (based on DF) learned GUI trees having the same number of nodes, edges and leaves, but T4 inferred a deeper GUI tree.
◦ The inferred GUI trees have different topologies.

GUI Tree inferred by T1

GUI Tree inferred by T4

Page 30: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Conclusions

We proposed a conceptual framework that can be exploited for describing and implementing different fully automated GUI testing techniques.

The framework can be adopted for a systematic comparison of different testing techniques.

Page 31: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Future Work

◦ To extend and validate the framework on the basis of the different techniques proposed in the literature, in order to discover additional or alternative parameters.

◦ To exploit the framework to design an empirical study aiming at systematically comparing the performance of different techniques.

Page 32: A Conceptual Framework for the Comparison of Fully Automated GUI Testing Techniques


Thanks for your attention

Questions?

Further information: http://reverse.dieti.unina.it

@REvERSE_UNINA

[email protected]