Automated Evaluation of Runtime Object States Against Model-Level States for State-Based Test Execution
Frank (Weifeng) Xu, Gannon University; Dianxiang Xu, North Dakota State University



Page 1:

Automated Evaluation of Runtime Object States Against Model-Level States for State-Based Test Execution

Frank (Weifeng) Xu, Gannon University
Dianxiang Xu, North Dakota State University

Page 2:

Overview
- Introduction
- Objectives
- State evaluation infrastructure
- Case study
- Experiments/Demo
- Conclusions

Page 3:

Introduction
State-based testing process:
- Evaluation of runtime object states against the model-level states defined in a state model is critical to state-based test automation.
- Manually keeping track of the state of running objects is difficult.
- Mapping from runtime object states to abstract states is time-consuming.

Page 4:

Objectives
This paper presents a state evaluation framework to support the automated state-based test execution process by:
- keeping track of the states of running objects,
- mapping those states to abstract states, and
- firing a warning message if the abstract state does not match the model-level state.
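The three objectives above can be sketched in plain Java. This is a minimal, hypothetical illustration rather than the paper's implementation: the `StateEvaluator` class and the Empty/Partial/Full abstract states are invented for this sketch (modeled on a bounded container).

```java
// Hypothetical sketch: map a concrete runtime property (size) to an
// abstract state from a state model, then warn on a mismatch with the
// expected model-level state.
public class StateEvaluator {

    // Map the runtime object state to an abstract state, e.g. for a
    // bounded container modeled with states Empty, Partial, and Full.
    static String mapToAbstractState(int size, int capacity) {
        if (size == 0) return "Empty";
        if (size == capacity) return "Full";
        return "Partial";
    }

    // Return a warning message if the actual abstract state does not
    // match the expected model-level state; null means they match.
    static String check(String expectedState, int size, int capacity) {
        String actualState = mapToAbstractState(size, capacity);
        return actualState.equals(expectedState)
                ? null
                : "WARNING: expected " + expectedState + " but was " + actualState;
    }

    public static void main(String[] args) {
        System.out.println(check("Empty", 0, 10)); // states match -> null
        System.out.println(check("Full", 3, 10));  // mismatch -> warning
    }
}
```

The warning string stands in for the framework's event firing; in the real framework that step is handled by the aspect and JavaBean event code shown later in the deck.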

Page 5:

Challenges
- How does the evaluation framework monitor and collect the states of running objects?
- How does the monitoring device interact with, yet not depend on, a particular IUT (implementation under test)?
- How can we automatically map the runtime object states to abstract states in a state model?
- How can the test driver get informed if the actual state and the expected state do not match?

Page 6:

Approach
- We take advantage of the pointcut mechanism in aspect-oriented programming and implement the framework in AspectJ.
- We demonstrate the framework with a case study.
- We conduct a series of empirical studies to evaluate the framework by calculating and comparing the total minutes spent mapping states and checking the oracle under manual versus automated execution.
- The experimental results show that the evaluation framework is much more effective and efficient than manual evaluation.
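The pointcut mechanism mentioned above intercepts setter calls through compile-time weaving. As a rough plain-Java analogue of that monitoring idea (no AspectJ; the `Account` interface and all other names here are hypothetical), a dynamic proxy can observe the same `set*` calls without the IUT depending on the monitor:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class SetterMonitor {
    // A minimal interface of the IUT with one setter to intercept.
    public interface Account {
        void setBalance(int balance);
        int getBalance();
    }

    static class AccountImpl implements Account {
        private int balance;
        public void setBalance(int balance) { this.balance = balance; }
        public int getBalance() { return balance; }
    }

    static final List<String> log = new ArrayList<>();

    // Wrap the IUT so every set* call is observed, mimicking the
    // pointcut call(void State.set*(*)) without compile-time weaving.
    static Account monitored(Account target) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object result = method.invoke(target, args);
            if (method.getName().startsWith("set")) {
                // Derive the property name as the aspect does with
                // substring("set".length()).
                log.add(method.getName().substring(3) + "=" + args[0]);
            }
            return result;
        };
        return (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(),
                new Class<?>[] { Account.class }, handler);
    }

    static List<String> demo() {
        log.clear();
        Account account = monitored(new AccountImpl());
        account.setBalance(42);
        return log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [Balance=42]
    }
}
```

Unlike AspectJ, a proxy only intercepts calls made through the interface, which is one reason weaving is the stronger fit for monitoring arbitrary running objects.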

Page 7:

State evaluation infrastructure

Page 8:
Page 9:

Case study

Page 10:

Expected state

We need runtime objects.

Page 11:
Page 12:

pointcut setter(State s): call(void State.set*(*)) && target(s);

/**
 * After advice: fires a property change event whenever a setter
 * of State is called.
 */
after(State s): setter(s) {
    // Get the name of the property; each state is stored in one property.
    String propertyName = thisJoinPointStaticPart
            .getSignature().getName().substring("set".length());
    // Get the expected state.
    String expectedState = s.getState();
    // Get the current state.
    String currentState = (String) ht.get(
            expectedState.substring(0, expectedState.indexOf(".")));
    // Fire the event if the states are different.
    firePropertyChange(s, propertyName, currentState, expectedState);
}

Figure 7. Pseudocode of a JavaBean that fires events if the expected and actual states differ.
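Figure 7 relies on the standard JavaBeans property-change machinery. The firing side can be sketched as a self-contained, runnable example using java.beans.PropertyChangeSupport (the `StateBean` class and its state values are illustrative, not taken from the paper):

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class StateBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String state = "Empty";

    public void addPropertyChangeListener(PropertyChangeListener listener) {
        pcs.addPropertyChangeListener(listener);
    }

    // Setter observed by the test driver; PropertyChangeSupport only
    // notifies listeners when the old and new values actually differ.
    public void setState(String newState) {
        String oldState = this.state;
        this.state = newState;
        pcs.firePropertyChange("state", oldState, newState);
    }

    public static void main(String[] args) {
        StateBean bean = new StateBean();
        // The test driver registers as an observer and gets informed
        // of every state transition.
        bean.addPropertyChangeListener(e ->
                System.out.println(e.getPropertyName() + ": "
                        + e.getOldValue() + " -> " + e.getNewValue()));
        bean.setState("Partial"); // prints: state: Empty -> Partial
    }
}
```

Because firePropertyChange suppresses events when old and new values are equal, the mismatch check in Figure 7 maps naturally onto this mechanism.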

Page 13:

Experiments
Two groups of students:
- Group 1 students manually monitor and map states.
- Group 2 students use the framework.
Five applications.

Page 14:
Page 15:
Page 16:

Conclusions
- We have proposed a novel approach to automatically evaluate runtime object states against model-level states for state-based test execution.
- The framework automatically keeps track of the properties of running objects, converts those properties to the corresponding states in state models, and compares whether the states match the expected states.
- The framework is essentially an extension of the observer design pattern, implemented with JavaBeans. We take advantage of the pointcut mechanism of AspectJ to facilitate state monitoring.

Page 17:

Discussion
- We assume that all the state models used for test execution are correct.
- We consider state abstractions but not action abstractions (inputs/methods to call). This makes things easier, because action abstractions would have to be concretized.