HAL Id: inria-00578337
https://hal.inria.fr/inria-00578337
Submitted on 21 Mar 2011

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

To cite this version: Norha Villegas, Hausi Müller, Gabriel Tamura, Laurence Duchien, Rubby Casallas. A Framework for Evaluating Quality-Driven Self-Adaptive Software Systems. SEAMS 2011, May 2011, Honolulu, Hawaii, United States. ACM, 1, pp. 80-89, 2011, SEAMS '11. <http://doi.acm.org/10.1145/1988008.1988020>. <10.1145/1988008.1988020>. <inria-00578337>



A Framework for Evaluating Quality-Driven Self-Adaptive Software Systems

Norha M. Villegas, Hausi A. Müller
Dept. of Computer Science, University of Victoria
Victoria, Canada
{nvillega,hausi}@cs.uvic.ca

Gabriel Tamura, Laurence Duchien
INRIA Lille-Nord Europe, University of Lille 1
Lille, France
[email protected]

Rubby Casallas
Dept. of Computer Science, University of Los Andes
Bogotá, Colombia
[email protected]

ABSTRACT
Over the past decade the dynamic capabilities of self-adaptive software-intensive systems have proliferated and improved significantly. To advance the field of self-adaptive and self-managing systems further and to leverage the benefits of self-adaptation, we need to develop methods and tools to assess and possibly certify adaptation properties of self-adaptive systems, not only at design time but also, and especially, at run-time. In this paper we propose a framework for evaluating quality-driven self-adaptive software systems. Our framework is based on a survey of self-adaptive system papers and a set of adaptation properties derived from control theory properties. We also establish a mapping between these properties and software quality attributes. Thus, corresponding software quality metrics can then be used to assess adaptation properties.

Categories and Subject Descriptors
D.2.8 [Software Engineering]: Metrics—complexity measures, performance measures

Keywords
Software adaptation properties, software adaptation metrics, assessment and evaluation of self-adaptive systems, software quality attributes, application of control theory, engineering of self-adaptive systems, run-time validation and verification

1. INTRODUCTION
Over the past decade, self-adaptation has increasingly become a fundamental concern in the engineering of software systems to reduce the high costs of software maintenance and evolution and to regulate the satisfaction of functional and extra-functional requirements under changing conditions. Even though adaptation mechanisms have been widely investigated in the engineering of dynamic software systems,


their application to real problems is still limited due to a lack of methods for validation and verification of complex, adaptive, nonlinear applications [25].

After an exhaustive analysis of self-adaptive approaches, we concluded that adaptation properties and the corresponding metrics are rarely identified or explicitly addressed in papers dealing with the engineering of dynamic software systems. Consequently, without explicit adaptation properties it is impossible to assess and certify adaptive system behavior. In light of this, evaluation techniques, such as run-time validation and verification, are needed to advance the field.

To leverage the capabilities of self-adaptive systems, it is necessary to validate adaptation mechanisms to ensure that self-adaptive software systems function properly and users can trust them. To address this problem we propose a framework for evaluating self-adaptive systems in which adaptation properties are specified explicitly and driven by quality attributes, such as those defined in [2]. Our framework provides (i) a set of dimensions useful for classifying self-adaptive systems; (ii) a compendium of adaptation properties for control loops (i.e., in terms of the controller and the managed system); (iii) a mapping of adaptation properties to quality attributes; and (iv) a set of quality metrics to evaluate adaptation properties and quality attributes.

To define adaptation properties for our framework, we analyzed existing self-adaptive approaches and investigated properties used in control theory. We then established a mapping between adaptation properties and software quality attributes, and identified a set of metrics used to evaluate software quality attributes. The mapping of adaptation properties to quality attributes and their corresponding quality metrics constitutes our framework. The actual evaluation of a self-adaptive system involves both the managed system (i.e., the process) and the managing system (i.e., the controller).

Borrowing properties and metrics from control theory and re-interpreting them for software-intensive self-adaptive systems is not a trivial task, because the semantics of the concepts involved in adaptation in control theory differ from those in self-adaptive software. Moreover, existing self-adaptive software approaches generally do not address adaptation properties explicitly. Yet another important challenge is that, in general, self-adaptive software systems are nonlinear systems [11].

Metrics to evaluate feedback control systems depend on the properties that result from the relationships between the control objectives, the target system's measured outputs, the disturbances affecting the system, and how the target system is considered in the adaptation strategy [10]. Our re-interpretation results from the analysis of several representative approaches and strategies that have been proposed to achieve behavior modification in a managed system. From the identified relationships and the analysis of these strategies, we identified two main dimensions to classify and evaluate self-adaptive software. These dimensions arise from the way the strategy addresses (i) the managed system (i.e., the system to be controlled), and (ii) the controller itself. In the managed system dimension, we identified two groups. In the first, the control paradigm, the managed system's behavior is modeled, evaluated, and influenced without affecting its internal structure; in the second, the software engineering or planning paradigm, it is the managed system's structure that is modeled and modified to influence the system's behavior. In the controller dimension we identified four types of control actions: (i) continuous signals that affect behavioral properties of the managed system; (ii) discrete operations that affect the computing infrastructure of the managed system; (iii) discrete operations that affect the processes of the managed system; and (iv) discrete operations that affect the managed system's software architecture. Hence, the nature of the adaptation strategy (structural or behavioral) and the way the managed system is affected by the controller (control actions) define the classification of adaptive systems in our analysis. Some previous papers have addressed the evaluation of

self-adaptive software. In [16], Meng proposed a mapping of fundamental concepts from control theory to self-adaptive software systems. In his vision, the fundamental properties for evaluating self-adaptation are stability and robustness. These two properties are analyzed and characterized in terms of what they imply for programming paradigms, architectural styles, modeling paradigms, and software engineering principles. However, his evaluation model is descriptive and has not been applied to any approach. In the taxonomy proposed by Salehie and Tahvildari, several representative projects addressing the adaptation of software systems were surveyed in terms of a set of adaptation concerns: how, what, when, and where [21]. They also proposed a hierarchical view of self-* properties and discussed their relationship with quality factors of software systems [20]. However, the scope of their work did not include the identification of metrics for evaluating self-adaptive software systems in light of the identified adaptation properties and quality attributes.

Our contribution in this paper differs from the aforementioned work in the following aspects. First, we provide a reconciled definition of a more comprehensive list of properties found in control theory and contrast them across the analyzed self-adaptive software approaches. Second, in contrast to the studied contributions, we present a comprehensive and unified list of adaptation properties applicable to software systems. Third, we provide a valuable foundation for evaluating adaptation properties in terms of quality attributes and metrics, as widely practiced in the engineering of software systems.

The remainder of this paper is organized as follows. Section 2 presents our proposed model to characterize and classify self-adaptive software according to the dimensions mentioned above. Section 3 presents the analysis of selected adaptive systems based on the characterization model presented in Sect. 2. Section 4 presents a compendium of the metrics and properties found in the analyzed approaches with their corresponding definitions; this section also presents our proposed mapping between adaptation properties and quality attributes and how this mapping can be used to evaluate adaptive systems. Section 5 discusses different aspects of our analysis and some challenges to be addressed for the evaluation of self-adaptation. Finally, Section 6 concludes the paper.

2. A MODEL TO CHARACTERIZE SELF-ADAPTIVE SOFTWARE

In this section we propose a model consisting of eight analysis dimensions to characterize self-adaptive software. This model constitutes a foundation for evaluating self-adaptive systems. For each of the analysis dimensions, the model considers a set of standardized classification options. These options resulted from combining classification attributes from recognized authoritative sources (e.g., the Software Engineering Institute (SEI)) with those found in the set of papers that we analyzed. For instance, the set of options for the analyzed quality attributes as observable adaptation properties was identified mainly using the taxonomy proposed in an SEI study [2]. This taxonomy provides a comprehensive characterization of software quality attributes, their concerns, the factors that affect them, and methods for their evaluation.

For each analysis dimension, we include the relevant options available from control theory, as follows.

• Adaptation goal. This is the main reason or justification for the system or approach to be self-adaptive. Adaptation goals are usually defined through one or more of the self-* properties, the preservation of specific quality of service (QoS) properties, or the regulation of non-functional requirements in general.

• Reference inputs. The concrete and specific set of values and corresponding types used to specify the state to be achieved and maintained in the managed system by the adaptation mechanism under changing conditions of system execution. Reference inputs are specified as (a) single reference values (e.g., a physically or logically measurable property); (b) some form of contract (e.g., quality of service (QoS) contracts, service level agreements (SLAs), or service level objectives (SLOs)); (c) goal-policy-actions; (d) constraints defining computational states (according to the particular proposed definition of state); or even (e) functional requirements (e.g., logical expressions such as invariants or assertions, or regular expressions).

• Measured outputs. The set of values (and corresponding types) measured in the managed system. Naturally, since these measurements must be compared to the reference inputs to evaluate whether the desired state has been achieved, it should be possible to find relationships between these inputs and outputs. Furthermore, we consider two aspects of the measured outputs: how they are specified and how they are monitored. For specification, the identified options are (a) continuous domains for single variables or signals; (b) logical expressions or conditions for contract states; and (c) conditions expressing states of system malfunction. For monitoring, the options are (a) measurements of physical properties from physical devices (e.g., CPU temperature); (b) measurements of logical properties of computational elements (in software, e.g., request processing time; in hardware, e.g., CPU load); and (c) measurements of external context conditions (e.g., user profiles, weather conditions).

• Computed control actions. These are characterized, in the context of the monitor-analyze-plan-execute-knowledge (MAPE-K) loop, by the nature of the output of the adaptation planner or controller [12]. These outputs act on the managed system to produce the desired effect. The computed control actions can be (a) continuous signals that affect behavioral properties of the managed system; (b) discrete operations affecting the computing infrastructure executing the managed system (e.g., host system buffer allocation and resizing operations, or modification of process scheduling in the CPU); (c) discrete operations that affect the processes of the managed system directly (e.g., process-level service invocation and process execution operations such as halt/resume, sleep/respawn, or priority modification of processes); and (d) discrete operations affecting the managed system's software architecture (e.g., architecture reconfiguration operations). The nature of these outputs reflects how intrusive the adaptation mechanism is with respect to the managed system, and determines the extent to which the mechanism exploits knowledge of the managed system's structure or behavior in the adaptation process.

• System structure. Self-adaptive systems have two well-defined (although possibly indistinguishable) subsystems: (i) the adaptation controller and (ii) the managed system. One reason for analyzing the controller and managed system structures is to identify whether a given approach implements the adaptation controller embedded in the managed system. Another is to identify the effect that the separation of concerns between these two subsystems has on the achievement of the adaptation goal. The analyzed approaches can be grouped into two sets: (i) those modeling the structure of the managed system to influence its behavior by modifying its structure; and (ii) those modeling the managed system's behavior to influence it directly. We consider both the behavior model and the structure model as part of the system's structure. The identified options for the controller structure are variations of the MAPE-K loop with either behavioral or structural models of the managed system: (a) feedback control, that is, a MAPE-K structure with a fixed adaptation controller (e.g., a fixed set of transfer functions as a behavior model of the managed system); (b) adaptive control, a MAPE structure extended with reference and identification models of the managed system's behavior (e.g., tunable controller parameters, as in model reference adaptive control (MRAC) or model identification adaptive control (MIAC)); and (c) reconfigurable control, a MAPE-K structure with a modifiable controller algorithm (e.g., a rule-based software architecture reconfiguration controller). For the target system structure, the identified options are (a) a non-modifiable structure (e.g., a monolithic system); and (b) a modifiable structure with or without reflection capabilities (e.g., a reconfigurable software component architecture). It is worth noting that not all options for system structure can be combined with all options for computed control actions. For instance, discrete operations affecting the computing infrastructure executing the managed system could be used to improve the performance of a monolithic system, whereas discrete operations affecting the managed system's software architecture would not make sense.

• Observable adaptation properties. By adaptation prop-erty we mean a quality (or characteristic) that is par-ticular to a specific adaptation approach or mecha-nism. A quality can be a specific attribute value ina given state or a characteristic response to a knownstimulus in a given context. Thus, observable adap-tation properties are properties that can be identifiedand measured in the adaptation process. Given thatwe distinguish between the controller and the managedsystem in any self-adaptive system, we analyze observ-able adaptation properties also in both, the controllerand the managed system. The identified observableproperties in the controller are (a) stability; (b) ac-curacy; (c) settling-time; (d) small-overshoot; (e) ro-bustness; (f) termination; (g) consistency (in the over-all system structure and behavior); (h) scalability; and(i) security. For the managed system, the identified ob-servable properties result from the adaptation process:(a) behavioral/functional invariants; (b) quality of ser-vice conditions, such as performance (latency, through-put, capacity); dependability (availability, reliability,maintainability, safety, confidentiality, integrity); secu-rity (confidentiality, integrity, availability); and safety(interaction complexity and coupling strength). Ourproposed definitions for these properties are given inSect. 4.2.

• Proposed evaluation. For the analyzed approaches, we used this element to identify the strategies the authors proposed to evaluate their own approaches. Among the most used evaluation mechanisms are the execution of tests on real or simulated execution platforms and the illustration with example scenarios.

• Identified metrics and key performance indicators (KPIs). From the analyzed approaches, this element captures the definition of the metrics and KPIs used to measure the adaptation variables of interest.
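The dimensions above (reference inputs, measured outputs, computed control actions) can be made concrete with a minimal MAPE-K loop skeleton. The sketch below is our own illustration, not taken from any surveyed approach: it uses a single SLO on response time as the reference input and a discrete control action affecting the computing infrastructure (resizing a hypothetical server pool).

```python
class MapeKLoop:
    """Minimal MAPE-K skeleton: a single SLO as reference input and a
    discrete control action (resizing a server pool) as controller output."""

    def __init__(self, slo_response_time_ms, initial_servers=2):
        # Knowledge base: the reference input plus the current configuration.
        self.knowledge = {"slo_ms": slo_response_time_ms,
                          "servers": initial_servers}

    def monitor(self, measured_response_time_ms):
        # Measured output: a logical property of a computational element.
        self.knowledge["measured_ms"] = measured_response_time_ms

    def analyze(self):
        # Compare the measured output against the reference input.
        return self.knowledge["measured_ms"] > self.knowledge["slo_ms"]

    def plan(self):
        # Discrete operation affecting the computing infrastructure.
        return {"action": "add_server", "count": 1}

    def execute(self, action):
        self.knowledge["servers"] += action["count"]

    def run_iteration(self, measured_response_time_ms):
        self.monitor(measured_response_time_ms)
        if self.analyze():
            self.execute(self.plan())
        return self.knowledge["servers"]

loop = MapeKLoop(slo_response_time_ms=200)
print(loop.run_iteration(350))  # SLO violated: scale out -> 3
print(loop.run_iteration(180))  # SLO met: no action     -> 3
```

Real controllers would of course replace the trivial analyze/plan steps with the behavioral or structural models discussed above; the skeleton only fixes where each dimension lives in the loop.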

3. ANALYSIS OF SELF-ADAPTIVE APPROACHES

To validate our model for evaluating self-adaptive systems, we analyzed over 20 published approaches dealing with such systems. Of course, developing the model and analyzing the subject systems was an iterative process. The results of this analysis are summarized in Tables 1 and 2 below. Note that two of the dimensions (i.e., the columns Adaptation Properties and Metrics in Table 1) are detailed in Sect. 4. Table 1 presents the characterization of selected adaptive approaches based on the evaluation model proposed in Sect. 2.

Table 1: Applying the characterization model to selected adaptive approaches

| Approach | Adaptation Goal | Reference Inputs | Measured Outputs | Control Actions | System Structure | Adaptation Properties | Evaluation | Metrics |
|---|---|---|---|---|---|---|---|---|
| Appleby et al., Oceano [1] | Self-management, self-optimization | Contracts: SLAs | SLOs / logical properties of computational elements | Discrete operations affecting the computational infrastructure | Adaptive control / modifiable structure (reflection) | Stability, settling time, small overshoot, scalability / QoS: dependability (availab., maintainab.), performance (scalability) | Based on settling time to test performance of the adaptation process | Active connections per server, response time, output bandwidth, throttle rate, admission rate, active servers |
| Baresi and Guinea [3] | Self-recovery | Contracts: SLAs, funct. req. (logical and regular expressions) | SLOs / logical properties of computational elements | Discrete operations affecting the process of the managed system | Adaptive control / non-modifiable structure | None / behavioral, QoS: dependability (safety, integrity, availab., reliab.) | Functional and reliability-based tests | Reliability >= 0.95 (last two hours) |
| Candea et al. [4] | Self-recovery | Contracts: SLOs-QoS | Malfunction conditions / logical properties of computational elements | Discrete operations affecting the process of the managed system | Adaptive control / modifiable structure (reflection) | Small overshoot, settling time / QoS: dependability (availab.) | Recovery-based tests | Availability, downtime (cf. Table 5) |
| Cardellini et al., MOSES [5] | QoS preservation | Contracts: QoS | SLOs / logical properties of computational elements | Discrete operations affecting the process of the managed system | Reconfigurable control / modifiable structure (reflection) | Accuracy / QoS: performance (latency), dependability (reliab., cost) | Based on the accuracy of the adaptation strategy | Response time, execution cost, reliability |
| Dowling and Cahill, K-Components [6] | Self-management | Contracts: SLOs-QoS | SLOs / logical properties of computational elements | Discrete operations affecting the managed system's software architecture | Reconfigurable control / modifiable structure (reflection) | Robustness, scalability / QoS: performance (throughput, capacity) | None | Load cost of components |
| Ehrig et al. [7] | Self-healing | Goal actions | Malfunction conditions / logical properties of computational elements | Discrete operations affecting the process of the managed system | Adaptive control / non-modifiable structure | Termination / QoS: dependability (reliab.) | Formal verification of properties; running example | None |
| Floch et al., MADAM [8] | QoS preservation, self-configuration | Contracts: SLAs-SLOs-QoS | SLOs / logical properties of computational elements | Discrete operations affecting the managed system's software architecture | Adaptive control / modifiable structure (reflection) | Scalability / QoS: performance, dependability (reliab., maintainab.) | Simulated environment to evaluate scalability | None |
| Garlan et al., Rainbow [9] | Self-repairing | Contracts: SLAs-SLOs-QoS | SLOs / logical properties of computational elements | Discrete operations affecting the managed system's software architecture | Adaptive control / modifiable structure (reflection) | None / QoS: performance (latency) | Running examples to evaluate effectiveness | None |
| Kumar et al., MWare [13] | Self-management, self-configuration, self-optimization | Contracts: QoS; policy actions | SLOs / logical properties of computational elements, external context | Discrete operations affecting the managed system's software architecture | Adaptive control / modifiable structure (reflection) | Settling time, small overshoot / QoS: performance (throughput, capacity) | Execution tests on real scenarios | Business utility function (cf. Table 5) |
| Leger et al. [14] | QoS preservation, self-configuration | Constraints defining computational states | Malfunction conditions / logical properties of computational elements | Discrete operations affecting the managed system's software architecture | Reconfigurable control / modifiable structure (reflection) | Consistency (atomicity, isolation, durability) / QoS: dependability (availab., reliab.) | Running examples to evaluate performance | None |
| Mukhija and Glinz, CASA [17] | QoS preservation, self-configuration | Contracts: QoS | SLOs / logical properties of computational elements, external context | Discrete operations affecting the managed system's software architecture | Reconfigurable control / modifiable structure (reflection) | Consistency / QoS: performance | Running examples to evaluate performance | None |
| Parekh et al. [18] | QoS preservation | Single reference value | SLOs / logical properties of computational elements | Continuous signals affecting behavioral properties | Feedback control / non-modifiable structure (mathematical model) | Stability, small overshoot / QoS: performance (throughput, capacity) | Running examples on Lotus Notes | Offered load |
| Sicard et al. [22] | Self-management, self-healing | Constraints defining computational states | No explicit monitoring phase | Discrete operations affecting the managed system's software architecture | Feedback control / modifiable structure (reflection) | None / QoS: dependability (reliab., availab.) | Simulated experiments to evaluate performance | Availability (cf. Table 5) |
| Solomon et al. [23] | Self-optimization | Contracts: QoS | SLOs / logical properties of computational elements | Discrete operations affecting the managed system's software architecture | Adaptive control / modifiable structure (reflection) | Accuracy / QoS: performance | Running example to test accuracy | None |
| Tamura et al., SCeSAME [24] | QoS preservation | Contracts: SLOs-QoS | SLOs / logical properties of computational elements | Discrete operations affecting the managed system's software architecture | Reconfigurable control / modifiable structure (reflection) | Termination, consistency / QoS properties (contract) | Formal properties proved with theorems; running example | None |
| White et al., Autonomic JBeans [26] | Self-management, self-healing, self-optimization, self-protection | Contracts: SLOs-QoS | SLOs / logical properties of computational elements | Discrete operations affecting the process of the managed system | Adaptive control / non-modifiable structure | Settling time / QoS: performance (throughput), dependability (availab.) | Empirical experiments to prove development-effort savings | Average response time |

Table 2 summarizes the characterization and classification of the studied self-adaptive systems. Self-adaptive approaches range from pure control theory approaches to pure software engineering-based approaches, with many hybrid approaches in between. In control theory-based approaches, control actions are continuous signals that affect behavioral parameters of the managed system. The structure of the managed system in these approaches is generally non-modifiable, while its behavior is modeled mathematically [18]. In contrast, software engineering-based approaches are characterized by implementing discrete control actions that affect the managed system's software architecture (i.e., the system structure). In these approaches the adaptation is supported by a model of the managed system's structure and by reflection capabilities that allow the modification of that structure [1, 6, 8, 9, 13, 14, 17, 22, 24]. In hybrid adaptive systems, control actions are generally discrete operations that affect either the computing infrastructure executing the managed system or the set of processes comprising the managed system; usually, the structure of the managed system is non-modifiable [3, 4, 5, 7, 26]. It might be useful to classify hybrid approaches further. For instance, we classified the approach proposed by Solomon et al. [23] between hybrid and software engineering-based approaches, given that their control actions affect the architecture of the managed system but the analyzer relies on a behavioral model of the managed system to decide when to adapt. These predictive mechanisms use control engineering techniques (i.e., Kalman filters) to estimate adaptation parameters that require outputs not measurable on the actual managed system. According to our proposed spectrum, most approaches

were identified as software engineering-based and hybrid adaptive systems. With respect to the adaptation goal, we did not identify a relationship with our proposed spectrum; thus, any of the adaptation goals can be addressed along the entire spectrum. Concerning reference inputs, most approaches use contracts to specify reference values for the adaptation goal and the corresponding measured outputs. All approaches that explicitly address monitoring observe logical properties of computational elements (internal context), while two of them also take the external context into account [13, 17]. Regarding the controller structure, all approaches implement either adaptive or reconfigurable control, except [18] and [22], which implement a simple feedback loop. Finally, the most common evaluation mechanism is the implementation of running examples based on simulated environments.

4. MEASURING ADAPTATION PROPERTIES

Our evaluation of a self-adaptive system has two aspects. The first concerns the evaluation of desired properties of the managed system; in our analysis we focused only on desired properties that correspond to quality attributes of software systems. The second relates to desired properties of the controller of the adaptation process.

In this section we present a set of properties and metrics useful for evaluating adaptation. For the identification of desired properties of the managed system, we based our analysis on the taxonomy of quality attributes for software systems proposed by SEI researchers [2]. For properties related to the controller, we based our analysis on the SASO properties identified by Hellerstein et al. in the application of control theory to computing systems [10], and on other properties identified in surveys of self-adaptive software systems [7, 14, 16, 17].

Table 2: Characterization Summary

| Characteristic | Count [List of Approaches] |
|---|---|
| Spectrum: control engineering | 1 [18] |
| Spectrum: hybrid | 5 [3, 4, 5, 7, 26] |
| Spectrum: hybrid-software | 1 [23] |
| Spectrum: software engineering | 9 [1, 6, 8, 9, 13, 14, 17, 22, 24] |
| Goal specification: contract-based | 14 |
| Monitoring: internal context | 15 |
| Monitoring: external context | 2 [13, 17] |
| Monitoring: not specified | 1 [22] |
| Controller structure: feedback control | 2 [18, 22] |
| Controller structure: adaptive control | 9 [1, 3, 4, 7, 8, 9, 13, 23, 26] |
| Controller structure: reconfigurable control | 5 [5, 6, 14, 17, 24] |
| Managed system: non-modifiable structure | 4 [3, 7, 18, 26] |
| Managed system: modifiable with reflection | 12 [1, 4, 5, 6, 8, 9, 13, 14, 17, 22, 23, 24] |
| Property: settling time | 4 [1, 4, 13, 26] |
| Property: small overshoot | 4 [1, 4, 13, 18] |
| Property: scalability | 3 [1, 6, 8] |
| Property: stability | 2 [1, 18] |
| Property: accuracy | 2 [5, 23] |
| Property: termination | 2 [7, 24] |
| Property: consistency | 3 [14, 17, 24] |
| Property: robustness | 1 [6] |
| Property: security | 0 |
| Quality attribute: performance | 10 [1, 5, 6, 8, 9, 13, 17, 18, 23, 26] |
| Quality attribute: dependability | 7 [1, 3, 4, 5, 7, 8, 26] |

Furthermore, we classified the identified adaptation properties according to how and where they are observed. Concerning how, some properties can be evaluated using static verification techniques while others require dynamic verification and run-time monitoring. We use the term observed because some properties are difficult to measure, despite the fact that controllers are designed to preserve them [24]. With respect to where, properties can be evaluated on the managed system or on the controller. On the one hand, some properties used to evaluate the controller are observable on the controller itself or on both the controller and the managed system; however, most such properties can only be observed on the managed system. On the other hand, properties used to evaluate the managed system are observable only on the managed system. In both cases, the environment, which can affect the behavior of the controller or the managed system, is also a factor worth considering.

Based on Sect. 2 and the analysis presented in Sect. 3, this section presents the foundations of a framework for the evaluation of self-adaptation, where properties observable on the managed system, whether used to evaluate the controller or the managed system, can be evaluated in terms of quality attributes. After analyzing several approaches to self-adaptive software systems and identifying adaptation properties, we propose a process for evaluating self-adaptation with which software engineers should be able to (i) identify the required adaptation goals (i.e., the quality attributes that drive the adaptation of the managed system); (ii) identify adaptation properties to evaluate the controller, including the identification of properties that are observable on the controller, the managed system, or both; (iii) map quality attributes used to evaluate the managed system to properties that evaluate the controller but are observable on the managed system; and (iv) define metrics to evaluate properties observable on the managed system and the controller.

4.1 Quality Attributes as Adaptation Goals

If we intend to evaluate an adaptive software system, we need to identify the motivation for building it: the adaptation goal. In general, adaptation can be motivated by the need for continued satisfaction of functional requirements and regulation of non-functional requirements under changing context conditions. Nevertheless, as most of the analyzed contributions focus on non-functional factors, we based our analysis on software systems whose adaptation goals are motivated by quality concerns. Moreover, characteristics of self-adaptive systems, such as self-configuring or self-optimizing, can be mapped to quality attributes. Following this idea, Salehie and Tahvildari discussed the relationships between autonomic characteristics and quality factors, such as the relationship between self-healing and reliability [20].

Our main contribution is the application of quality attributes to evaluate self-adaptive software systems, as quality attributes are commonly used to evaluate desirable properties of the managed system. More importantly, we propose a mapping between quality factors and adaptation properties. This in fact introduces a level of indirection for evaluating adaptation properties that are not directly observable on the controller. The quality attributes that we analyzed in the approaches are the ones introduced in Sect. 2. In this subsection we present the definitions of the selected quality attributes, as well as citations of the analyzed contributions whose adaptation goals are related to these quality attributes.

• Performance. Characterizes the timeliness of services delivered by the system. It refers to responsiveness, that is, the time required for the system to respond to events or the event processing rate in an interval of time. Identified factors that affect performance are latency, the time the system takes to respond to a specific event [5, 9]; throughput, the number of events that can be completed in a given time interval—beyond the processing rate, as the desired throughput must also be observed in time sub-intervals [6, 13, 18, 26]; and capacity, a measure of the amount of work the system can perform [6, 13, 18].

• Dependability. Defines the level of reliance that can justifiably be placed on the services the software system delivers. Adaptation goals related to dependability are availability, readiness for usage [1, 3, 4, 14, 22, 26]; reliability, continuity of service [3, 7, 8, 13, 14, 22]; maintainability, the capacity to self-repair and evolve [1, 4, 8, 22]; safety (from a dependability point of view), the non-occurrence of catastrophic consequences from an external perspective (on the environment) [3]; confidentiality, freedom from unauthorized disclosure of information; and integrity, the absence of improper alterations of the system structure, data, and behavior [3].

• Security. The selected concerns of the security attribute are confidentiality, protection from disclosure; integrity, protection from unauthorized modification; and availability, protection from destruction [3].

• Safety. The level of reliance that can justifiably be placed on the software system not to generate accidents. Safety is concerned with the occurrence of accidents, defined in terms of external consequences. The taxonomy presented in [2] includes two properties of critical systems that can be used as indicators of system safety: interaction complexity and coupling strength. In particular, interaction complexity is the extent to which the behavior of one component can affect the behavior of other components. SEI's taxonomy presents detailed definitions and indicators for these two properties.

4.2 Adaptation Properties

An important part of our contribution is the identification of adaptation properties that have been used across the analyzed spectrum of adaptive systems, from control theory to software engineering, to evaluate the adaptation process. The identified adaptation properties are stated as follows. The first four, called the SASO properties, correspond to desired properties of controllers from a control theory perspective [10]; note that the stability property has also been widely applied in adaptation control from a software engineering perspective. The remaining properties in the list were identified from hybrid approaches. Citations attached to each property refer either to papers where the property was defined or to examples of adaptive systems where the property is observed in the adaptation process.

• Stability. The degree to which the adaptation process converges toward the control objective. An unstable adaptation will repeat the process indefinitely, at the risk of not improving the managed system or even degrading it to unacceptable or dangerous levels. In a stable system, responses to a bounded input are bounded to a desirable range [1, 8, 15, 16, 18].

• Accuracy. This property is essential to ensure that adaptation goals are met within given tolerances. Accuracy must be measured in terms of how closely the managed system approximates the desired state (e.g., reference input values for quality attributes) [5, 23].

• Short settling time. The time required for the adaptive system to achieve the desired state. The settling time represents how fast the system adapts, that is, reaches the desired state. Long settling times can drive the system into unstable states. This property is commonly referred to as recovery time, reaction time, or healing time [4, 10, 13, 15, 16].

• Small overshoot. The utilization of computational resources during the adaptation process. Managing resource overshoot is important to avoid system instability. This property provides information about how well the adaptation performs under given conditions, that is, the amount of excess resources required to perform the adaptation [1, 4, 13, 15, 18].

• Robustness. The managed system must remain stable and guarantee accuracy, short settling time, and small overshoot even if the managed system's state differs from the expected state in some measured way. The adaptation process is robust if the controller is able to operate within desired limits even under unforeseen conditions [6, 16].

• Termination (of the adaptation process). In software engineering approaches, the planner in the MAPE-K loop produces discrete controlling actions to adapt the managed system (cf. Sect. 2), such as a list of component-based architecture operations. The termination property guarantees that this list is finite and that its execution will finish, even if the system does not reach the desired state. Termination is related to deadlock freeness, meaning that, for instance, a reconfigurable adaptation process must avoid adaptation rules with deadlocks among them [7, 24].

• Consistency. This property aims at ensuring the structural and behavioral integrity of the managed system after performing an adaptation process. For instance, when a controller bases the adaptation plan on dynamic reconfiguration of software architectures, the consistency concerns are to guarantee sound interface bindings between components (e.g., component-based structural compliance) and to ensure that, when a component is dynamically replaced by another, execution continues without affecting the function of the removed component. These concerns help protect the application from reaching inconsistent states as a result of dynamic recomposition [17]. Leger et al. define this property to complement the atomicity, consistency, isolation, and durability (ACID) properties found in transactional systems, which guarantee that transactions are processed reliably [14]:

– Atomicity. Either the system is adapted and the adaptation process finishes successfully, or it is not and the adaptation process aborts. If an adaptation process fails, the system is returned to a previous consistent state.

– Isolation. Adaptation processes are executed as if they were independent. Results of unfinished adaptation processes are not visible to others until the process finishes. Results of aborted or failed adaptation processes are discarded.

– Durability. The results of a finished adaptation process are permanent: once an adaptation process finishes successfully, the new system state is made persistent. In case of major failures (e.g., hardware failures), the system state can be recovered.

• Scalability. The capability of a controller to support increasing demands of work with sustained performance, using additional computing resources. For instance, scalability is an important property for the controller when it must evaluate an increased number of conditions in the analysis of context. As computational efficiency is relevant for guaranteeing performance properties in the controller, scalable controllers are required to avoid the degradation of any of the operations of the adaptive process in any situation [1, 6, 8].

• Security. In a secure adaptation process, not only the target system but also the data and components shared with the controller are required to be protected from disclosure (confidentiality), modification (integrity), and destruction (availability) [2].
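The first four properties above are quantifiable from a monitored response trace of the managed system. The following sketch is illustrative only: the function, the trace format, and the 2% tolerance band are our assumptions, not artifacts of any of the surveyed systems.

```python
# Sketch: quantifying SASO-style properties from a monitored response trace.
# The helper name, trace format, and 2% tolerance band are illustrative assumptions.

def saso_metrics(trace, reference, tolerance=0.02):
    """trace: list of (time, measured_output) samples; reference: desired value."""
    band = abs(reference) * tolerance
    # Settling time: the first time after which every sample stays within the band.
    settling_time = None
    for i, (t, y) in enumerate(trace):
        if all(abs(y2 - reference) <= band for _, y2 in trace[i:]):
            settling_time = t
            break
    # Overshoot: the largest excursion beyond the reference, relative to it.
    peak = max(y for _, y in trace)
    overshoot = max(0.0, (peak - reference) / abs(reference))
    # Accuracy: mean absolute steady-state error over the last quarter of the trace.
    tail = trace[-max(1, len(trace) // 4):]
    accuracy_error = sum(abs(y - reference) for _, y in tail) / len(tail)
    return settling_time, overshoot, accuracy_error

# Example: a response that overshoots a reference of 100, then settles.
trace = [(0, 0), (1, 60), (2, 110), (3, 104), (4, 101), (5, 100), (6, 100)]
st, ov, err = saso_metrics(trace, reference=100)
```

A stability check can be layered on top of the same trace, e.g., by verifying that the output remains within a bounded range for every bounded input.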

Table 3: Classification of adaptation properties according to how and where they are observed.

Adaptation property   Verification mechanism   Where the property is observed
Stability             Dynamic                  Managed system
Accuracy              Dynamic                  Managed system
Settling time         Dynamic                  Managed system
Small overshoot       Dynamic                  Managed system
Robustness            Dynamic                  Controller
Termination           Both                     Controller
Consistency           Static                   Managed system
Scalability           Dynamic                  Controller
Security              Dynamic                  Controller
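When building evaluation tooling, a classification such as the one in Table 3 can be encoded as a lookup structure. This is a minimal sketch; the dictionary layout and function name are our assumptions.

```python
# Sketch: Table 3 encoded as a lookup structure for evaluation tooling.
# The dictionary layout and helper name are illustrative assumptions.

ADAPTATION_PROPERTIES = {
    # property: (verification mechanism, where the property is observed)
    "stability":       ("dynamic", "managed system"),
    "accuracy":        ("dynamic", "managed system"),
    "settling time":   ("dynamic", "managed system"),
    "small overshoot": ("dynamic", "managed system"),
    "robustness":      ("dynamic", "controller"),
    "termination":     ("both",    "controller"),
    "consistency":     ("static",  "managed system"),
    "scalability":     ("dynamic", "controller"),
    "security":        ("dynamic", "controller"),
}

def properties_observed_on(locus):
    """Return, sorted, the properties observed at a given locus."""
    return sorted(p for p, (_, where) in ADAPTATION_PROPERTIES.items() if where == locus)
```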

4.3 Mapping Adaptation Properties and Quality Attributes

Once the adaptation goal and adaptation properties have been identified, the following step maps the properties of the controller, which are observable on the managed system, to the quality attributes of the managed system. Table 4 presents a general mapping between adaptation properties and quality attributes. These quality attributes refer to attributes of both the controller and the managed system, depending on where the corresponding adaptation properties are observed.

According to Tables 3 and 4, the SASO properties, including stability, can be verified at run-time by observing performance, dependability, and security factors in the managed system. Stability is one of the adaptation properties addressed in Oceano, a dynamic resource allocation system that enables flexible SLAs in environments where peak loads are an order of magnitude greater than in the normal steady state. The quality attributes addressed in Oceano concern dependability (i.e., availability and maintainability) and performance (i.e., throughput and capacity—scalability) [1]. In the same way, the controller proposed by Parekh et al. to guarantee desirable performance levels (i.e., throughput and capacity) is also concerned with stability as an adaptation property [18]. They apply an integral control technique to construct a transfer function that describes the system and the way the behavior of the managed system is affected by the controller. Baresi and Guinea propose a self-recovery system where service-oriented architecture (SOA) business processes recover from disruptions of functional and non-functional

Table 4: Mapping adaptation properties to quality attributes

Adaptation property   Quality attributes (factors)
Stability             Performance (latency, throughput, capacity);
                      Dependability (safety, integrity); Security (integrity)
Accuracy              Performance (latency, throughput, capacity)
Settling time         Performance (latency, throughput)
Small overshoot       Performance (latency, throughput, capacity)
Robustness            Dependability (availability, reliability);
                      Safety (interaction complexity, coupling strength)
Termination           Dependability (reliability, integrity)
Consistency           Dependability (maintainability, integrity)
Scalability           Performance (latency, throughput, capacity)
Security              Security (confidentiality, integrity, availability)

requirements, to avoid catastrophic events (safety) and improper system state alterations (integrity), and to guarantee readiness for service (availability) and correctness of service (reliability) [3].

Accuracy is addressed by Cardellini et al. in the MOSES

framework using adaptation policies in the form of directives to select the best implementation of the composite service according to a given scenario [5]. MOSES adapts chains of service compositions based on service selection using a multiple-service strategy. It has been tested with experiments that observe the behavior of the adaptation strategy in terms of its accuracy (i.e., how close the managed system is to the adaptation goal); multiple adaptation goals were used for these tests. The self-optimizing mechanism for business processes proposed by Solomon et al. also deals with accuracy as an adaptation property [23]. Their approach is based on a simulation model to anticipate performance levels and make decisions about the adaptation process. They developed a tuning algorithm to keep the simulation model accurate. The algorithm compensates for the measurement of actual service time to increase the accuracy of simulations by modeling errors, probabilities, and inter-arrival times. It then obtains the best estimate for these data such that a square-root error between the simulated and measured metrics is minimized.

Settling time and small overshoot are addressed as adaptation properties in Oceano [1], the framework for developing autonomic Enterprise Java Bean (EJB) applications proposed by White et al. [26], and the self-recovery approach

based on microreboots proposed by Candea et al. [4]. In Oceano, settling time is measured in terms of the time required to deploy a new processing node, including the installation and reconfiguration of all applications and data repositories. In White's framework, settling time is evaluated in terms of the average response time required for autonomic EJBs to adapt. Finally, Candea's approach applies a recursive strategy that reduces mean time to repair (MTTR) by recovering minimal subsets of a failed system's components; if localized, minimal recovery is not enough, progressively larger subsets are recovered. In the control-based approach to ensure SLOs proposed by Parekh et al., small overshoot is addressed by preventing control values (e.g., MAXUSERS) from being set to values that exceed their legal range. Root-locus analysis is used to predict the valid values of the maximum number of users. They divided the valid range into three regions to decide when control values reach undesirable levels. Based on empirical studies, they analyzed properties of the transfer function to predict the desired range of values [18].

One of the adaptation properties addressed in the self-management approach for balancing system load proposed by Dowling and Cahill is robustness [6]. They aim to realize a robust controller by implementing an adaptation mechanism via decentralized agents that eliminate centralized points of failure.

Termination can be verified using static mechanisms. Ehrig et al. proposed a self-healing mechanism for a traffic light system to guarantee continuity of service (reliability) by self-recovering from predicted failures (integrity) [7]. In their adaptive solution, termination is addressed by ensuring deadlock-freeness in the managed system, statically checking the self-healing rules in such a way that the self-repairing mechanism never inter-blocks traffic lights at the same road intersection.

Scalability is also an adaptation property in K-Components, the agent-based self-managing system proposed by Dowling and Cahill [6]. Scalability is addressed by evolving the self-management local rules of the agents. Another approach where scalability is addressed as an adaptation property is Madam, the middleware proposed by Floch et al. for enabling model-based adaptation in mobile applications [8]. Scalability is a concern in Madam for several reasons: first, its reasoning approach might result in a combinatorial explosion if all possible variants are evaluated; second, the performance of the system might be affected when reasoning on a set of concurrently running applications competing for the same set of resources. They proposed a controller where each component (e.g., the adaptation manager) can be replaced at run-time to experiment with different analysis approaches for managing scalability.

Security was not addressed as an adaptation property by any of the self-adaptive systems analyzed. However, we propose the use of SEI's definition of security as a quality attribute, and its corresponding quality factors, to evaluate security on the controller [2]. As presented in Table 3, the security of the controller should be evaluated independently of the managed system. This means that ensuring security in the managed system does not guarantee security with respect to the adaptation mechanism.

4.4 Adaptation Metrics

Adaptation metrics provide the means to evaluate adaptive systems with respect to particular concerns of the adaptation process [16, 19]. Thus, metrics provide a measure for evaluating desirable properties. For instance, metrics to evaluate control systems measure aspects concerning the SASO properties (i.e., stability, accuracy, settling time, and small overshoot).

To characterize the evaluation of adaptive systems, we analyzed the variety of self-adaptive software systems to identify adaptation properties (i.e., at the managed system and the controller) that were evaluated in terms of quality attributes (cf. Sects. 3 and 4.1). Since the evaluation of most properties is impossible by observing the controller itself, we propose to evaluate these properties by observing quality attributes at the managed system. To identify relevant metrics, we characterized a set of factors that affect the evaluation of quality attributes, such as speed, memory usage, response time, processing rate, mean time to failure, and mean time to repair [15, 2]. These factors are an essential part of the metrics used to evaluate properties of both the controller and the managed system [19].

The evaluation of MOSES, the QoS-driven framework proposed by Cardellini et al. to adapt service-oriented business processes, is based on the following metrics to measure performance and reliability: expected response time (Ru), the average time needed to fulfill a request for a composite service; expected execution cost (Cu), the average price to be paid for a user invocation of the composite service; and expected reliability (Du), the logarithm of the probability that the composite service completes its task for a user request [5].

For Oceano, the following metrics were defined to evaluate dependability factors (e.g., availability) and performance factors (e.g., scalability in terms of throughput and capacity) [1]: active connections per server, the average number of active connections per normalized server across a domain; overall response time, the average time it takes for any request to a given domain to be processed; output bandwidth, the average number of outbound bytes per second per normalized server for a given domain; database response time, the average time it takes for any request to a given domain to be processed by the back-end database; throttle rate (T), the percentage of connections disallowed from passing through Oceano on a customer domain; admission rate, the complement of the domain throttle rate (1 − T); and active servers, the number of active normalized servers that service a given customer domain.

Average response time is a common metric used to evaluate performance in several adaptive approaches, such as the framework to develop autonomic EJB applications proposed by White et al. [26]. In K-Components, the self-adaptive component model that enables the adaptation of software components to optimize system performance, a load-balancing function on every adaptation contract uses a cost function to calculate its internal load cost and the ability of its neighbors to handle the load [6]. This cost function is defined as the sum of the advertised load cost and the internal cost of the component (i.e., the estimated cost to handle a particular load type).

In the control-based approach proposed by Parekh et al.,

to achieve performance service-level objectives, the defined metric is the offered load, measured as the length of the queue of in-progress client requests, that is, the load imposed on the server

by client requests [18]. Baresi and Guinea proposed a metric to control reliability in the adaptation of BPEL-based business processes [3]. In their approach, reliability is calculated as the number of times a specific method responds within two minutes over the total number of invocations. They also defined a KPI based on this metric, such that reliability must be greater than 95% over the past two hours of operation.
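The reliability metric and its KPI described above can be sketched as follows. The two-minute threshold and the 95% KPI come from the text; the function names, the invocation-log format, and the handling of an empty observation window are our assumptions.

```python
# Sketch of the reliability metric described above: the fraction of invocations
# that respond within two minutes, checked against a 95% KPI.
# Names, log format, and empty-window handling are illustrative assumptions.

TWO_MINUTES = 120.0   # seconds
KPI_THRESHOLD = 0.95  # reliability must exceed 95% over the observation window

def reliability(response_times):
    """response_times: response times (seconds) of all invocations in the window."""
    if not response_times:
        return 1.0  # no invocations observed: treated as vacuously reliable (assumption)
    timely = sum(1 for t in response_times if t <= TWO_MINUTES)
    return timely / len(response_times)

def kpi_satisfied(response_times):
    return reliability(response_times) > KPI_THRESHOLD

# 19 timely responses and 1 late one: reliability is exactly 0.95,
# which does not strictly exceed the KPI threshold.
window = [1.5] * 19 + [300.0]
```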

In the adaptive middleware proposed by Kumar et al., a business value KPI is defined in terms of factors such as the priority of the user accessing the information, the time of day the information is being accessed, and other aspects that determine how critical the information is to the enterprise [13]. For this, they defined a utility function as a combination of some of these factors: utility(eg_{j-k}) = f(Σ d_{ni}, min(b_{ni}), bg_{j-k}), where i | e_{ni} ∈ M(eg_{j-k}). The business utility of each edge eg_{j-k}, which represents data streams between operators that perform data transformations, is a function of the delays d_{ni} and available bandwidths b_{ni} of the intervening network edges e_{ni}, and of the required bandwidth bg_{j-k} of the edge eg_{j-k}.
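One possible concrete instantiation of the utility function f is sketched below. Note that [13] fixes only the arguments of f (total delay, bottleneck bandwidth, and required bandwidth); the specific combination rule here is entirely our assumption.

```python
# Sketch: one possible instantiation of the business utility of an edge eg_{j-k},
# computed from the delays and bandwidths of the intervening network edges.
# The combination rule is an illustrative assumption; the paper fixes only the
# arguments: sum of d_ni, min of b_ni, and the required bandwidth bg_{j-k}.

def edge_utility(network_edges, required_bandwidth):
    """network_edges: list of (delay, available_bandwidth) for the edges in M(eg_{j-k})."""
    total_delay = sum(d for d, _ in network_edges)   # sum of the delays d_ni
    bottleneck = min(b for _, b in network_edges)    # min of the bandwidths b_ni
    # Assumed rule: utility is zero if the bottleneck cannot carry the required
    # bandwidth, and otherwise decreases with the total delay.
    if bottleneck < required_bandwidth:
        return 0.0
    return 1.0 / (1.0 + total_delay)

# Two intervening edges: delays 0.5 and 1.5, bandwidths 10 and 4; requirement 5.
u = edge_utility([(0.5, 10.0), (1.5, 4.0)], required_bandwidth=5.0)
```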

In the self-healing approach based on recursive microreboots proposed by Candea et al., availability is evaluated in terms of mean time to recover (MTTR) [4]. To evaluate the availability of the system they defined two metrics, availability (A = MTTF/(MTTF + MTTR)) and downtime or unavailability (U = MTTR/(MTTF + MTTR)), where MTTF is the mean time for a system or subsystem to fail (i.e., the reciprocal of the failure rate), MTTR is the mean time to recover, and A is a number between 0 and 1. U can be approximated by MTTR/MTTF when MTTF is much larger than MTTR. Similarly, Sicard et al. define a metric for availability in terms of MTTR [22].
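These availability metrics are straightforward to compute; a minimal sketch (function names are ours):

```python
# Sketch: the availability and unavailability metrics defined above.
# MTTF and MTTR must be in the same time unit; function names are ours.

def availability(mttf, mttr):
    """A = MTTF / (MTTF + MTTR), a number between 0 and 1."""
    return mttf / (mttf + mttr)

def unavailability(mttf, mttr):
    """U = MTTR / (MTTF + MTTR), approximately MTTR/MTTF when MTTF >> MTTR."""
    return mttr / (mttf + mttr)

# A subsystem that fails every 1000 hours and recovers in 1 hour on average:
a = availability(1000.0, 1.0)    # close to 1
u = unavailability(1000.0, 1.0)  # close to MTTR/MTTF = 0.001
```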

Table 5: Metrics to evaluate quality attributes of a subject self-adaptive system being analyzed

Quality attribute   Approach                 Metrics
Performance         MOSES [5]                Response time (Ru), execution cost (Cu)
                    Oceano [1]               Response time, output bandwidth,
                                             throttle rate, admission rate
                    EJB framework [26]       Response time
                    K-Components [6]         Load cost
                    Parekh et al. [18]       Offered load
                    Kumar et al. [13]        f(Σ d_{ni}, min(b_{ni}), bg_{j-k})
Dependability       MOSES [5]                Expected reliability (Du)
                    Oceano [1]               Active connections per server, active servers
                    Baresi and Guinea [3]    Response frequency per time unit
                    Microreboots [4]         A = MTTF/(MTTF+MTTR), U = MTTR/(MTTF+MTTR)
                    Sicard et al. [22]       Availability in terms of MTTR

Table 5 summarizes our identified metrics to assess self-adaptive systems. Although these metrics are directly related to the measurement of quality factors, we expect that they will be useful for evaluating adaptation properties based on our proposed mapping between quality attributes and adaptation properties (cf. Sect. 4). The approach by Reinecke et al. [19] supports our hypothesis. Their metric measures the ability of a self-adaptive system to adapt. They argue that adaptivity can be evaluated using a meta-metric named payoff, which is defined in terms of performance metrics that measure the effectiveness of the adaptation process. That is, the optimal adaptive system is characterized by the fact that its adaptation decisions are always optimal (i.e., always yield the optimal payoff). To apply their metric it is necessary to (i) identify the adaptation tasks; (ii) define one or more performance metrics on these tasks, which should reflect the contribution of the tasks toward the adaptation goal; (iii) define a payoff metric in terms of the performance metrics; and (iv) apply the metric by observing the performance of the system.
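The four-step procedure just described can be sketched as follows. All task names, the sample metric values, and the weighted-average payoff are our assumptions; Reinecke et al. [19] leave the concrete payoff definition to the evaluator.

```python
# Sketch of the four-step payoff evaluation procedure described above.
# Task names, metric values, and the weighted-average payoff are illustrative assumptions.

# (i) identify the adaptation tasks, and
# (ii) define performance metrics on those tasks (here: already-measured values,
#      each normalized to [0, 1], higher meaning a better contribution to the goal).
measured = {
    "select_configuration": 0.8,
    "apply_reconfiguration": 0.6,
}

# (iii) define a payoff metric in terms of the performance metrics:
# here a weighted average over the tasks (an assumed combination rule).
weights = {"select_configuration": 0.5, "apply_reconfiguration": 0.5}

def payoff(metrics, task_weights):
    return sum(task_weights[task] * value for task, value in metrics.items())

# (iv) apply the metric by observing the performance of the system.
p = payoff(measured, weights)  # 0.7 for the sample values above
```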

5. DISCUSSION

We started the analysis phase of this work with 34 research papers presenting different proposals for self-adaptive software systems published over the past decade. From this set, 18 were filtered out, mainly because they either presented very generic proposals (i.e., with non-measurable self-adaptive properties) or did not include enough information for characterization purposes.

From the analysis of the 16 remaining papers presented in the previous sections (and even considering the 18 papers that we filtered out), it is worth noting the prevalent difficulty of identifying metrics to evaluate self-adaptive software, or the lack of awareness of the need to do so. Nonetheless, some advances have been made based on concepts from control theory and the recognized importance of quality metrics and corresponding measures as the basis for understanding and improving processes. However, as we can conclude from our analysis, there are plenty of opportunities and challenges to be addressed, even when we consider concepts more abstract than metrics, such as the properties of self-adaptation. In the following we outline some of these opportunities and challenges.

First, most of the proposals focus only on self-adaptation

mechanisms, without explicitly addressing the level of achievement of adaptation properties or the adaptation properties themselves. On the one hand, it is known that even though control theory has defined standardized properties that a controller must realize (i.e., the SASO properties), self-adaptive software systems require additional properties due to their discrete nature (e.g., termination). On the other hand, it is clear from the discussion in previous sections that quality attributes are a plausible option for evaluating some adaptation properties. However, it will be necessary (i) to evaluate whether our proposed set of self-adaptation properties defined in Sect. 4.2 is general enough for self-adaptation mechanisms; (ii) to find standardized metrics to measure self-adaptation properties, which could be based on proposals such as the one by Reinecke et al. [19]; and (iii) to analyze whether measuring adaptation properties based on quality attributes is meaningful enough, and whether such measures fulfill the conceptual definitions of the corresponding properties from, for instance, control theory.

Second, the lack of awareness of adaptation properties as

a goal to be measured results in a lack of evaluation methods and metrics for these properties and for the adaptation mechanisms themselves. However, this trend could be reversed by designing self-adaptive mechanisms with implicitly controllable and measurable properties. For some verifiable properties, this can be achieved, for instance, by developing new formal models, or using existing ones, as a basis for the self-adaptation process. For adaptation mechanisms with measurable properties, one main challenge is to develop mathematical behavior models based on the architecture itself of the target computing systems to be controlled.

Third, without declared evaluation methods and metrics it is very difficult to compare and reason about the engineering of self-adaptation; for instance, from our analysis it was not possible to identify any measurable relationship between the adaptation goals and the evaluation of the adaptation strategies as such. From the structural point of view, it is clear that decoupling the controller from the managed system, with respect to evaluation, is a first critical step toward being able to reason about and control the properties of dynamic self-adaptation. However, several questions remain: does the system structure (i.e., controller and managed system) have any relationship with the quality of the adaptation approach? Do non-explicit controllers imply undefined adaptation properties? From the behavioral point of view, and considering the adaptation mechanism as a black box, how do we compare managed system behavior in the different phases of the adaptation process? Perhaps in terms of stability and other properties such as settling time. Under which circumstances and characteristics of the managed system is one adaptation mechanism better than another? There is no available evaluation framework for comparing adaptation mechanisms that would help answer these questions.

6. CONCLUSIONS

Self-adaptive software evaluates and modifies its own behavior to preserve the satisfaction of its functional requirements and the regulation of its non-functional requirements under changing context conditions of execution. Researchers have devised many diverse approaches and strategies to modify the behavior of a managed system. In this paper, as a result of our analysis, we proposed a classification of self-adaptive systems spanning the spectrum from control-based to software engineering-based approaches.

Many of the studied approaches neither identified nor addressed adaptation properties. Thus, the evaluation of adaptive systems is generally not addressed explicitly, neither in the controller nor in the managed system. Consequently, since adaptation properties are not identified in many approaches, metrics are not addressed either. The validation mechanisms discussed for these approaches are usually limited to the evaluation of performance properties observed in the managed system, even when the adaptation goal is not related to performance quality attributes.

Future work will focus on validating the adaptation properties and their mapping to quality attributes, as proposed in this paper, through the evaluation of existing adaptive systems.

Acknowledgments

This work was funded in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada, IBM Corporation, CA Inc., Icesi University (Cali, Colombia), and the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council, and FEDER under the Contrat de Projets Etat Région (CPER) 2007-2013.
