
Theoretical Validation and Case Study of Requirements Management Measures

Annabella Loconsole and Jürgen Börstler

UMINF-03.02 ISSN-0348-0542

UMEÅ UNIVERSITY Department of Computing Science

SE-901 87 UMEÅ SWEDEN


Abstract

Requirements management measures can help us to control changing software requirements and estimate the costs of changing requirements. The goal of this paper is twofold. First we describe a set of requirements measures and validate these measures by applying two definitions from the literature. After that we describe a case study using these measures to estimate the cost of changes to requirements. Although we were not able to collect sufficient data to draw statistically significant conclusions, we present some lessons learned.

Keywords: Requirements Management, Measure Validation, Theoretical, Empirical, Internal, External Validation, Case Study.

1. Introduction

Carefully developed software requirements are a key issue for project success [28]. The cost of correcting an error after system delivery is orders of magnitude higher than the cost of correcting a similar error during the requirements analysis phase [20]. Since requirements change frequently, even during development, it is important to control those changes to be able to anticipate and respond to change requests [23]. Requirements development is a learning process rather than a gathering process. Therefore, it is naïve to believe that we can specify exactly what a customer wants at the beginning of a project. The best we can do is to carefully monitor and control all requirements throughout the software life cycle.

Requirements management is an activity performed in parallel with requirements elicitation, analysis, documentation, and validation (see Figure 1). It is the process of monitoring and documenting changes to requirements, tracing these changes to related work products, and communicating this information across the project team. Sound requirements management practices ensure that unanticipated changes can be controlled throughout the project life cycle. Without these practices high quality software is difficult to achieve.

The management of requirements is connected to other areas of software engineering like software maintenance, configuration management, and change management.

Software maintenance is triggered by change requests from customers or marketing requirements. A change request can be a change to requirements and can be filed before product delivery. New versions of software systems are created as they change, and this makes the systems evolve. Configuration management involves the development and application of procedures and standards to manage an evolving software system. Configuration management aims to control the costs and effort involved in making changes to a system. Requirements management and configuration management are two important key process areas of the Capability Maturity Model for Software (SW-CMM), and they may be seen as part of a more general quality management process [19].

In a survey of 4000 European companies [27] it was found that the management of customer requirements was one of the main problem areas in software development. Software measurement can help in providing guidance to the requirements management activities by quantifying changes to requirements and in predicting the costs related to changes. Numerous software measures for the requirements management activities have been proposed in the literature (see [4] and [22]).

[Figure 1: Requirements Engineering Process (adapted from [13]). Requirements management is performed in parallel with requirements elicitation, requirements analysis, requirements documentation, and requirements validation. Inputs are user needs, domain information, existing system information, regulations, standards, etc.; outputs are the agreed requirements, the requirements document, and the system specifications.]



However, few of these measures have been validated. Furthermore, there are only a few empirical studies in this area.

In our previous work [14] we analysed the key practices defined in the Requirements Management Key Process Area (KPA) of the SW-CMM [19]. By means of the Goal Question Metrics (GQM) paradigm [1] we defined a total of 38 measures, shown in Table 1. Organisations are supposed to select suitable measures from this list, depending on their practices, processes, and maturity.

In general it is suggested to keep the set of data collected small [2]. Immature organisations have poor visibility of the requirements management process and/or poorly defined requirements. Most of the measures in [14] are therefore not applicable to them. Immature organisations should document their requirements; after that they can start counting the total # requirements¹ and the changes to those requirements to establish a baseline. Organisations at higher maturity levels will likely have established processes to handle change requests. They can (and should) therefore collect further data, such as the type of requirement (database, interface, performance requirement, etc.). In general the metrics collection will vary with the maturity of the process.

In this paper we validate a subset of our set of 38 measures, which is later used in a case study performed in a class project. Unfortunately, we were not able to collect enough data to draw statistical conclusions. However, some lessons learned are presented.

The remainder of this paper is organised as follows. Section two introduces basic software measurement theory. Section three summarises previous work on measure validation. In the fourth section we perform a theoretical validation of the measures. In section five we describe a case study with the purpose of showing that effort estimations using historical data are better than intuitive estimations.

2. Software Measurement

In software measurement theory, an entity is an object, an attribute is a property of an entity, and a measure is the number or symbol assigned to an entity in order to characterise the attribute². Software measures can be direct or indirect. Direct measurement of an attribute of an entity involves no other attribute or entity. Indirect measurement of an attribute of an entity involves other attributes or entities. Attributes can be internal or external. Internal attributes are those that can be measured by examining the entity on its own, separately from its behaviour. External attributes are measured by means of other attributes.
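To make the distinction concrete, the following sketch is our own illustration (not part of the original report; the class and function names are hypothetical). It models a direct measure of an SRS entity and an indirect measure derived from other measures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    text: str
    changes: int = 0  # number of recorded changes to this requirement

@dataclass
class SRS:
    requirements: List[Requirement] = field(default_factory=list)

def total_requirements(srs: SRS) -> int:
    """Direct measure: involves only the SRS entity itself."""
    return len(srs.requirements)

def changes_per_requirement(srs: SRS) -> float:
    """Indirect measure: derived from two other measures
    (# changes and total # requirements)."""
    total = total_requirements(srs)
    return sum(r.changes for r in srs.requirements) / total if total else 0.0

srs = SRS([Requirement("The system shall log in users", changes=2),
           Requirement("The system must store sessions")])
print(total_requirements(srs))       # 2   (direct)
print(changes_per_requirement(srs))  # 1.0 (indirect)
```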

Table 1: The 38 Measures Defined in [14]

1. # affected groups and individuals informed about NOC
2. # baselined requirements
3. # changes per requirement
4. # changes to requirements approved
5. # changes to requirements incorporated into baseline
6. # changes to requirements open
7. # changes to requirements per unit of time
8. # changes to requirements proposed
9. # changes to requirements rejected
10. # documents affected by a change
11. # final requirements
12. # incomplete requirements
13. # inconsistencies
14. # inconsistent requirements
15. # initial requirements
16. # missing requirements
17. # requirements affected by a change
18. # requirements scheduled for each software build or release
19. # TBDs in requirements specifications
20. # TBDs per unit of time
21. # test cases per requirement
22. Cost of change to requirements
23. Effort expended on Requirements Management activity
24. Kind of documentation
25. Major source of request for a change to requirements
26. Notification of Changes (NOC) shall be documented and distributed as a key communication document
27. Phase when requirements are baselined
28. Phase where change was requested
29. Reason of change to requirements
30. Requirement type for each change to requirements
31. Size of a change to requirements
32. Size of requirements
33. Status of each requirement
34. Status of software plans, work products, and activities
35. The computer software configuration item(s) (CSCI) affected by a change to requirements
36. Time spent in upgrading
37. Total # requirements
38. Type of change to requirements

1. From now on we will use the symbol # to denote "number of".
2. See [10] for more detailed definitions of entity, attribute, measure, internal attributes, external attributes, scale, and representation condition.


Software measurement leads to the following advantages:
• increased understanding and control of the software development process,
• increased capacity to improve the software development process,
• more accurate estimates of software project cost and schedule,
• more objective evaluation of changes in techniques, tools, and methods,
• more accurate estimates of the effects of changes on project cost and schedule,
• decreased development costs due to increased productivity and efficiency,
• improved customer satisfaction and confidence due to higher productivity and quality [29],
• low cost of software measurement activities. The amount of time spent by a software development team on a measurement program is about 2% of their time, which is reduced to 1% when the team gets experienced.

Although there are many advantages, measurement can also lead to problems:
• Honesty of measurement: human behaviour will change. When effort is measured to evaluate personnel productivity, developers tend to apply the metrics vaguely and may report inaccurate data to avoid a reprimand.
• Fear of measurement, resistance to measurement, and ethical use of measures. These are social issues, and software engineers are poorly equipped to deal with them [5].
• Emphasis will be laid on what is being measured; what is not being measured will gradually be ignored.

Besides the disadvantages of software measurement, other factors should be kept in mind if we want to perform a successful measurement program: feedback sessions with the people collecting data and cost-benefit analysis [29].

3. Measures Validation

Several definitions of measure validation are present in the literature. Probably the most recognised is internal-external validation. Fenton and Pfleeger [10] define measure validation as follows: "validating a software measure is the process of ensuring that the measure is a proper numerical characterisation of the claimed attribute by showing that the representation condition is satisfied". This definition is also known as internal validation or validation in the narrow sense. Internal validation of software measures is based on a homomorphism between the empirical world and the numerical world.

Through the representation condition we demonstrate that the relationship between the measures and the internal attributes is valid in the empirical world. Suppose we want to measure the temperature of two glasses of water A and B. In the real or empirical world we feel the temperature and decide that, for example, A is warmer than B. In the mathematical world we choose a numerical relation associated with the empirical relation "is warmer"; the numerical relation would in this case be ">". Then we choose a temperature system (Celsius, Fahrenheit, etc.) and finally we associate numbers with the temperature of each glass of water. Suppose that in the numerical world TC(A) = 25 and TC(B) = 37, where TC(X) is the temperature in Celsius degrees of X. The representation condition would not be satisfied, because it does not correspond to what happens in the real world.

An example for requirements is the following: size(S) < size(T) if and only if #requirements(S) < #requirements(T), where S and T are two Software Requirements Specification (SRS) documents, size is the attribute of the entity, and # requirements is the measure. As explained in [15], the relationship above can be proven only empirically. One way to internally validate a measure is to follow the "key stages of formal measurement" [10], shown in Figure 2.

External validation is done by showing that a measure is 1) internally valid and 2) a component of a prediction system [10]. The definition of external validation given by [30] states that we perform an external validation if we prove that an external attribute (or external variable) is a function of an internal one. We prove that an external attribute X verifies the equation X = f(Y), where Y is an internal attribute; for instance cost = f(LOC), where "cost" is an external attribute of the entity "program" and LOC (Lines Of Code) is an internal attribute of the same entity. In general the connection between external and internal attributes is built through a prediction model. However, the distinction between internal and external attributes, and between direct and indirect measures, is sometimes not clear. According to [32], external attributes are mostly indirect measures and internal attributes are mostly direct measures.

Predictive validity can be determined only by performing an empirical validation. When performing empirical validation, however, the connection between internal attribute values and the resulting external attribute values is seldom sufficiently proven. As stated in [10], there are two reasons for this: 1) it is difficult to perform controlled experiments and confirm relationships between attributes, and 2) there is still little understanding of measurement validation and the proper ways to demonstrate these relationships.

The same concepts of internal and external validation are used in the wider definitions of theoretical and empirical validation [12]. Theoretical validation allows us to say whether a measure is valid with respect to some defined properties (the ones listed in Figure 2). The term "theoretical validation" is used in [12] in a broader sense; it includes the validation of the measurement instruments and of the data collection procedures. In the next section we will apply this theoretical validation, but we will focus only on the properties of the entities, attributes, and measures. When we perform an empirical validation we verify that measured values of attributes are consistent with values predicted by models involving the attribute [12]. For example, measured values of cost are consistent with the values obtained by a function f(LOC). As stated in [15], it is not possible to theoretically validate a measure without performing an empirical study, because the representation condition can only be proven empirically [10].
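As an illustration only (ours, not part of the original study; the documents and data are invented), the sketch below spells out the representation condition for the SRS-size example above: for every pair of specifications, the empirical relation "is bigger than" must be mirrored by ">" on the measured values. In a real validation the empirical judgement would come from people inspecting the documents, not from code.

```python
from itertools import combinations

# Toy SRS documents represented as lists of atomic requirements (hypothetical data).
srs_docs = {
    "S": ["req1", "req2"],
    "T": ["req1", "req2", "req3"],
}

def count_requirements(doc):
    """Measure: # requirements (maps an SRS to a natural number)."""
    return len(doc)

def empirically_bigger(a, b):
    """Stand-in for the empirical judgement 'a is bigger than b'."""
    return len(srs_docs[a]) > len(srs_docs[b])

# Representation condition: the empirical relation holds iff the numerical relation holds.
for a, b in combinations(srs_docs, 2):
    assert empirically_bigger(a, b) == (
        count_requirements(srs_docs[a]) > count_requirements(srs_docs[b])
    )
```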


Briand et al. ([5], [6]) support theoretical and empirical validation. Theoretical validation is based on the construction of an empirical relational system and a formal relational system. The properties of the empirical system should be preserved by the formal system if the measure is valid. Their definition of theoretical validation is taken from the classical definition in measurement theory. Empirical validation is based on the proof that internal attributes are connected to external attributes. In other words, with empirical validation we prove that a measure is useful, i.e., that it is connected to a goal.

We can also empirically validate a direct measure by asking the participants of an empirical study whether a mapping to a value captures their understanding of the attribute. This kind of validation is based on interviews with the people supplying the data; it is an interactive data validation process [2]. Shneidewind [26] defines a measure validation process that integrates quality factors, metrics, and quality functions. The validation process includes six validity criteria that support quality aspects like assessment, control, and prediction. Another approach to validating software product measures is GQM/MEDEA (MEtric DEfinition Approach) [8]. It combines the GQM, the approach in [6] and [7], and some guidelines for performing experiments from [32].

Several attempts to define properties for sound measures are present in the literature (e.g. [6], [12], [17], [31]). Weyuker [31] identified nine properties which are useful to evaluate syntactic software complexity measures. Her work has raised an intense discussion about validation ([12], [30], [7], [18]). We can conclude that there is not yet a generally accepted way of validating software measures. As stated in [8], the field of measure validation is still in a state where terminology and definitions have to become more consolidated.

4. Theoretical Validation of Requirements Management Measures

In this section we describe the measures we will use in the case study (see Table 2). We apply the "key stages of formal measurement" [10] and the theoretical validation [12] shown in Figure 2. By following the first five properties we construct an empirical system and a mathematical system. After that we define a mapping from the empirical system to the mathematical system. A mapping is a function from the empirical world to the mathematical world; therefore it has a domain and a range. The real world is the domain of the mapping and depends on the entity we are measuring. It usually consists of the set of different instances of the entities. The mathematical world is the range of the mapping. To determine the range of the mapping we have several options: we can choose non-mathematical symbols, natural numbers, integers, real numbers, etc. When defining a mapping we also have to choose the rules of the mapping (the way we measure).

These rules are context dependent; they depend on the environment where the data is collected. For each measure in Table 2 we will describe the mapping rules used in the case study.

By analysing the theoretical validation (shown in Figure 2), the attributes of an entity can have different measures associated with them, and each measure can be connected to different units. Suppose we want to measure the attribute "size" of the entity "room". The length or the area can be two measures of the size of the room. The length can be measured with the units centimetre or feet, or with the number of people that can fit in the room side by side. According to property 8, attributes are independent of the unit we choose to measure them, and any definition of an attribute that implies a particular measurement scale is invalid [12]. The attribute determines the scale type. In traditional measurement theory only interval and ratio scale measures can have different units [12]. In order to choose the appropriate statistical method in the empirical study, we need to associate a scale type with each measure. According to [6], knowing the scale type of a measure with absolute certainty is unrealistic in the majority of the cases.
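As a toy illustration of property 8 (ours, with invented figures): length measured in centimetres or in feet differs only by a ratio-scale transformation, so the empirical ordering of room sizes is preserved under either unit.

```python
# Hypothetical room lengths in centimetres.
lengths_cm = {"room_a": 450.0, "room_b": 520.0}

to_feet = lambda cm: cm / 30.48  # admissible transformation on a ratio scale

order_cm = sorted(lengths_cm, key=lengths_cm.get)
order_ft = sorted(lengths_cm, key=lambda r: to_feet(lengths_cm[r]))
assert order_cm == order_ft  # the choice of unit does not affect the ordering
```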

Key stages of formal measurement [10]:
1. Identify attributes for some real world entities.
2. Identify empirical relations for the attribute.
3. Identify numerical relations corresponding to each empirical relation.
4. Define a mapping from real world entities to numbers.
5. Check that numerical relations preserve and are preserved by empirical relations.

Theoretical validation [12]:
6. For an attribute to be measurable, it must allow different entities to be distinguished from one another.
7. A valid measure must obey the representation condition.
8. Each unit of an attribute contributing to a valid measure is equivalent.
9. Different entities can have the same attribute value.

Figure 2: Measures Validation Properties.


Our criticism of the "key stages of formal measurement" is that in some cases it assumes that the values of the measures are numerical, while our measures can assume non-numerical values (like requirements status, type of requirements, etc.). In those cases, the mathematical world will consist of non-numerical symbols and we will be able to perform only a restricted number of operations. Properties 1, 2, and 3 are shown in Table 2; these properties correspond to the definition of attributes, empirical relations, and numerical relations. We also need to determine the domain, range, and scale of each measure to verify property 4. Properties 5 and 7 are equivalent (the representation condition) and will be applied only once, when possible. Properties 6 and 9 are intuitively satisfied for our ten measures. Property 8 will be applied to only a few measures, because the other measures are similar to those validated through property 8. The measures in Table 2 are direct measures.

By applying the first property of Figure 2 we identify real world entities and the properties of the entities we would like to investigate. Our main goal is to measure requirements management.

However, the management of requirements includes several activities, and to define empirical and numerical relations we need to define more refined entities. These entities can be organised hierarchically as shown in Figure 3. At the top of this hierarchy we have Requirements Management, which is a process entity. As we go down in the hierarchy, we find more refined entities, which are the ones measured in practice. By applying the GQM and defining goals and subgoals we would construct a similar hierarchical structure. The structure shown in Figure 3 is not a complete tree; more leaves can be defined, such as RM phases, RM tools, etc. The entities shown in Figure 3 are the ones most closely connected to the goals of the Requirements Management KPA of the SW-CMM.

Among them are "atomic requirement", "requirements specifications", and "changes to requirements". There exist some definitions of atomic requirement (see for example [25]). Most references try to make atomic requirements context-free, i.e., stand-alone. Different people may have different perceptions as to how much to split requirements to get an atomic, indivisible requirement. We are aware of the risks of splitting a requirement: the context of the requirement could be lost, and the specification could become unreadable. However, for our purposes, the atomistic view of requirements eases our discussion. We assume that an atomic requirement is a (numbered) paragraph containing the word "shall" or "must".
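Under this working definition, counting atomic requirements can be automated with a simple text scan. The sketch below is our own minimal illustration of such a mapping rule; the SRS text and the regular expressions are assumptions, not prescribed by the report.

```python
import re

def count_atomic_requirements(srs_text: str) -> int:
    """Count numbered paragraphs that contain 'shall' or 'must'."""
    count = 0
    for paragraph in srs_text.split("\n\n"):
        numbered = re.match(r"\s*\d+(\.\d+)*[.)]?\s", paragraph)
        if numbered and re.search(r"\b(shall|must)\b", paragraph, re.IGNORECASE):
            count += 1
    return count

sample_srs = """1. The system shall authenticate users before granting access.

2. The user interface should be intuitive.

3. The system must log every change to a requirement."""

print(count_atomic_requirements(sample_srs))  # 2 (paragraph 2 has neither 'shall' nor 'must')
```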

During the software development process, each requirement goes through different states, as shown in Figure 4. The process shown in the figure is similar to a change request process (shown in the user manual of the ClearDDTS tool). Some of the states in the figure (subcontracted, reused, and cancelled) were added after the execution of the case study, because we understood the importance of these states during the execution of the study. Among these states there is a partial order. We can count the requirements in each of the phases to get a global view of the status of the SRS.

The entities we have mentioned above can have several attributes associated with them. The ones we would like to measure are stability, traceability, consistency, etc.

[Figure 3: Hierarchical structure among the entities. At the top is Requirements Management; below it are more refined entities such as Requirements Documents (Function Specifications and Requirements Specifications), Change Requests Management, Changes to Requirements, and Atomic Requirements.]

[Figure 4: A simplified requirements life cycle. States: New, Analysed, Approved, Planned, Implemented, Tested, Delivered, Postponed, Cancelled, Subcontracted, and Reused; a requirement falls back to an earlier state when implementation or testing has failed.]


These attributes are relevant to determine whether the general goals of the Requirements Management KPA are reached [19]. They are external attributes, and the measures defined for these attributes can only be validated empirically.

4.1 Size of Software Requirements Specification

One of the entities in Figure 3 is the software requirements specification (SRS). An SRS can be a document or a database that contains requirements. There are different kinds of SRS documents, depending on the degree of formality. Informal SRSs are usually used to communicate with the customer, while formal SRSs are used to communicate with the developers. As pointed out in [13], there is no standard name for requirements documents. For each requirement in the requirements document we suggest including additional information like the state of the requirement, the rationale, the source (the person who requested the requirement), the priority, etc., because this information can help when estimating the cost or stability of requirements.

By applying the first property of the "key stages of formal measurement" (see Figure 2) to the entity SRS we can identify internal attributes, for instance size. The reason for choosing this attribute is that we think there is a cause-effect relationship between the attributes size and stability of the requirements: if the size of a requirements specification changes irregularly over time, then the requirements might be unstable. We intend to prove this hypothesis in future work through an empirical study. The validity of the measures connected to this external attribute can only be established by performing an external validation. The attribute size of an SRS can be measured by counting the requirements. This count can be done at different moments of the software development process, i.e., at the initial, current, and final moments.

The scientific community is not yet in agreement on how to deal with units and scales of a software measure. The attribute size does not directly imply a measurement scale. However, if we identify empirical relations for this attribute, we find binary relations like bigger, smaller, equal to, etc. These relations imply an ordering of the entities being measured. We could identify empirical relations (such as equal to or different) which do not imply an ordering of the entities, but we would lose some properties of the attribute size. Property 8 in Figure 2 says that it is not important which unit we choose if the measure is valid, i.e., if the representation condition is satisfied. According to [5, 6], however, interesting properties of the entity would be lost if we chose a measurement unit belonging to a lower scale. If we choose a unit belonging to the nominal scale (such as "empty requirements", "non-empty requirements") we may lose some properties of the attribute size.

The scale we choose for the attribute size is a ratio scale, because there are different ways to measure this attribute. In fact, the size of an SRS can be measured by counting the # pages, # rows, and similar measures.

4.1.1 Total Number of Requirements

The total # requirements is a count of the functional and non-functional requirements. This count is done disregarding the status of each requirement. This measure can be used in conjunction with the # initial, current, and final requirements, as well as the # changes per requirement, to assess the level of requirements volatility.

The mapping rule we followed in the case study was to count all atomic requirements that were in any of the states shown in Figure 4. The representation condition (properties 5 and 7) was verified during the case study: we compared the size of two SRSs and the total # requirements obtained from the two SRSs. The empirical relations were preserved by the numerical relations.

4.1.2 Number of Initial, Current, and Final Requirements

The # initial, current, and final requirements measures are equivalent to the total # requirements. These measures differ from each other in the time when the requirements are counted and in the reasons for collecting them. The # initial requirements is a count of the requirements at a specific point in time (for instance, one week after the first draft of the SRS has been written). This includes all functional and non-functional requirements originally provided by the customer. The # final requirements is a count of all the requirements implemented by the software product. This includes all functional and non-functional requirements upon which the final software product is produced. The # current requirements is a count of the requirements at the current point in time (day, week, or similar, depending on how often the data is collected).

By comparing the data we get from these three measures we can see how much the total # requirements changes over time, and we can make predictions of how the size of the SRS will change over time. All three measures can be used in conjunction with the number of changes per requirement to provide meaningful analyses, such as requirements stability. The properties in Figure 2 are not applied, because of the similarity with the previous measure. The only difference between these measures and the total # requirements is in property 4: in the case of # initial, current, and final requirements the mapping rule must contain a time reference (i.e., when the requirements are counted).
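As a minimal illustration (ours; the weekly figures are invented), the sketch below records the total # requirements at several points in time and derives the initial and current counts together with the week-to-week differences that the comparison above relies on.

```python
# Weekly snapshots of the total # requirements (hypothetical data).
weekly_counts = {36: 10, 37: 12, 38: 12, 39: 15, 40: 14, 41: 14}

weeks = sorted(weekly_counts)
initial = weekly_counts[weeks[0]]   # # initial requirements (first snapshot)
current = weekly_counts[weeks[-1]]  # # current requirements (latest snapshot)
# The # final requirements is the count taken once the product is delivered.

# Week-to-week changes in size indicate how stable the SRS is.
deltas = [weekly_counts[b] - weekly_counts[a] for a, b in zip(weeks, weeks[1:])]
print(initial, current, deltas)  # 10 14 [2, 0, 3, -1, 0]
```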


4.2 Status of Requirements Specifications

A requirements document (or SRS) can be in many states: it can be a draft where the requirements are only sketched, or a baselined document if the requirements are validated. In general the number of states of an SRS depends on the size of the company. The bigger the company, the more probable it is that the SRSs have several different states. To simplify the discussion here we assume that an SRS can have three different states: initial, draft, and final. To measure the state of an SRS we can check the state of each requirement and count the number of requirements in each state. This gives us an overview of the state of the SRS. This measure (status of SRS) is an indirect measure. Indirect measures are validated by following a different set of properties [12].
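A small sketch of that counting rule (ours, with invented data): tally how many requirements are in each state to obtain an overview of the SRS.

```python
from collections import Counter

# Hypothetical requirement states, taken from the life cycle in Figure 4.
requirement_states = ["approved", "implemented", "approved", "new", "tested", "new"]

status_of_srs = Counter(requirement_states)
print(status_of_srs)  # Counter({'approved': 2, 'new': 2, 'implemented': 1, 'tested': 1})
```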

4.2.1 Status of Requirements

The status of a requirement refers to whether a single requirement has certain properties or not. Examples of possible requirement states are listed in Figure 4. The different states correspond to the phases of a requirement's life cycle. As a requirement moves through the RM process, its status moves through the states listed in Figure 4. The requirement's life cycle can therefore be tracked from the original proposal through to its delivery to the customer. This measure can be charted to provide insight into where requirements are at a certain point in time. This can give an indication of how to allocate resources when there are still requirements in a draft stage with respect to others which are in the development phase. Among the states there is a partial ordering, implying that we cannot define many empirical relations; only the binary relations "equal" and "different" can be defined.

The mapping rule we used during the case study was to check the status of the requirements that had not been subcontracted or reused¹. To verify the representation condition, we compared the states of two requirements empirically and numerically. The empirical properties were preserved. However, the empirical states of the requirements are dependent on the representation of the states; it is very hard to compare requirements disregarding the representation of the states. Regarding property 8, there is only one unit for this measure, therefore the property is satisfied.

4.3 Changes to Requirements

Another entity relevant to our discussion is changes to requirements. A change to a requirement is any modification made to the semantics of the requirement.

A change may be requested to correct a defect, to add a new feature to the system, or to delete or update a feature. Each change to a requirement can be tracked. Internal attributes for the entity changes to requirements can be size of change, rationale for change, and phase of change. A change to a requirement can be considered as a new requirement and can therefore be in any of the states in Figure 4. This makes the measures "status of requirement" and "status of change to requirements" similar to each other. The measures status of change, reason of change, and type of change are similar to status of requirements, which has been validated previously. Therefore we will apply only property 4 to them, to define the rules of mapping.

4.3.1 Number of Changes per Requirement

The # changes made can be counted, and this count can be summed for each requirement. This measure can be used to help determine requirements stability as well as to measure the impact of changes on the software process, on the budget, and on the schedule of the project. As a requirement is reviewed, all changes are recorded. This measure can be used in conjunction with other measures to chart general trends within the requirements management process. The mapping rule we used during the case study was to count all changes to requirements requiring at least 15 minutes.

To verify the representation condition we had to prove that, given two requirements A and B, if the size of change of A is bigger than the size of change of B, then the # changes of A is greater than the # changes of B. However, we had problems verifying the representation condition during the case study, because 1) the changes to requirements were counted but not documented, and 2) it is difficult to compare the size of changes because the concept of "change to requirements" is not formalised. To verify property 8 we could measure the size of change to requirements by counting the # changed words or the # changed sentences. All these units are equivalent if the measure is valid.
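The following sketch (ours; the change log is invented) applies the mapping rule used in the case study: only changes that took at least 15 minutes are counted, summed per requirement.

```python
from collections import defaultdict

# Hypothetical change log: (requirement id, minutes spent on the change).
change_log = [("UC-1", 30), ("UC-1", 10), ("UC-2", 45), ("UC-3", 20), ("UC-2", 5)]

changes_per_requirement = defaultdict(int)
for req_id, minutes in change_log:
    if minutes >= 15:  # mapping rule: ignore changes taking less than 15 minutes
        changes_per_requirement[req_id] += 1

print(dict(changes_per_requirement))  # {'UC-1': 1, 'UC-2': 1, 'UC-3': 1}
```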

4.3.2 Status of Change to Requirements

The status of a change to requirements refers to whether a change to a requirement is in one of the states listed in Figure 4. We can track the changes to requirements from the original proposal through to the test of the change. This measure can be charted to provide insight into where changes to requirements are at a certain point in time. This can give an indication of how stable the requirements are. The mapping rule used during the study was to not consider a "deleted requirement" as a change to that requirement.

1. A “reused” or “subcontracted” requirement can be in one of the states of Figure 4 but we did not consider those requirements in our study, because they were not developed by the participants.


4.3.3 Type of Change to Requirements

This measure identifies the types of changes which are common in the process. It provides insight into why requirements change. The types of change made to requirements include: change in delivery date, change in functionality, and other. The changes in functionality can be: correction (something was wrong), completion (something was missing), improvement (the requirement can be rewritten in a better way), and adaptation (caused, for instance, by new laws or new technology). These types correspond to the classifications of maintenance types. The number of changes of each type can be charted to show general trends in the types of changes. These types constitute the range of the measure. The mapping rule we used during our study was to classify only those changes to requirements that affected the semantics (or content) of the requirements.

4.3.4 Reason for Change to Requirements

The reason for change identifies the kinds of changes which are common in the process. This measure provides insight into why requirements change. We cannot prescribe a fixed set of reasons for change. The ones we used during the case study are: new government/organisation regulations, misunderstanding in the original analysis, ambiguous specification, incomplete specification, wrong specification, new requirement, and other (this is also the range of the measure). These reasons can be charted to give an indication of why requirements are changed. The rules of the mapping are the same as for the measure type of change to requirements.

4.3.5 Cost of Change to Requirements

The cost of change is an indirect measure, usually expressed as a function of variables like size of product, time, etc. The purpose of this measure is to give information about the actual cost spent on changing requirements and to compare this information to the estimated cost. This measure can be used to assess the overall impact of requirements changes on the software project. We can measure the cost in terms of time and people. The time spent on changing requirements and the number of people are direct measures and can therefore be validated through the properties in Figure 2. The internal attributes associated with a requirements change are the duration and the resources necessary to change the requirement.

When measuring time, we had several options during the case study regarding which mapping rule to choose. We could measure only the time necessary to implement the change. Another alternative was to also include the time necessary for discussions and decisions about the type of change to be implemented. We decided to include all the time necessary for discussions and decisions about the change plus the time to implement the change.

To verify the representation condition we had to prove that, given two changes to requirements A and B, if the duration necessary to perform A is bigger than that of B, then the measured time for A is greater than the time for B. We verified this property empirically and it was satisfied. We could measure the entity change to requirements by counting the # changed words or the # changed sentences; all these units are equivalent because the measure is valid.

Regarding the people measure, we had several options for the mapping rule: to consider only the people who perform the change, or to also consider the people who discuss and/or validate the change. We decided to count all the people discussing and implementing the change. To verify the representation condition we have to prove that, given two changes to requirements A and B, if the amount of resources necessary to perform A is bigger than that of B, then the # people needed to perform A is greater than the # people for B. As before, we verified this property empirically and it was satisfied. Regarding property 8, there is only one unit for this measure (only one way of counting people).
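A minimal sketch of the cost measure as applied above (ours; all figures are invented): the duration includes discussion, decision, and implementation time, the people count includes everyone discussing or implementing the change, and person-hours is one possible way to combine the two direct measures into the indirect cost measure.

```python
from dataclasses import dataclass

@dataclass
class ChangeToRequirement:
    requirement_id: str
    hours: float   # discussion + decision + implementation time
    people: int    # everyone who discussed or implemented the change

def cost_in_person_hours(change: ChangeToRequirement) -> float:
    """Indirect cost measure combining the two direct measures."""
    return change.hours * change.people

change = ChangeToRequirement("UC-2", hours=3.5, people=2)
print(cost_in_person_hours(change))  # 7.0 person-hours
```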

5. Case Study in an SE Course

5.1 Overview and Motivation of the Study

Here we describe a case study intended to demonstrate that the measures described previously are useful to quantify the amount of changes to requirements and to predict the cost of a change. The study was performed with the purpose of demonstrating that intuitive cost estimations are worse than estimations based on historical data. We followed the experiment process shown in Figure 5 [32]. However, the empirical work described here is a case study, because there were variables in the study that we could not control, like the developers' experience.

5.2 Informal Description of the Study

The study was conducted in the course Object-Oriented Software Development (OOPU) held at Umeå University. There were about twenty students in this course, divided into four teams. Two teams of students collected the data during the development of their projects. The students wrote their functional requirements following a specific format (see Appendix 10.1). They measured their requirements every week and filled in forms with the results of the measurement activities on their projects. Data collection was based on use cases, since the students followed a use-case-based approach.


Table 2: Sample of Requirements Management Measures [14]
(Columns: Entity | Internal attribute | External attribute | Measure | Domain | Range | Scale | Empirical relations | Numerical relations)

SRS | requirement specification size | stability, change impact | Total # requirements | SRS | natural numbers | ratio | bigger, smaller, equal to, addition, subtraction, etc. | >, <, =, +, -, etc.
SRS | requirement specification size | traceability, consistency, stability, volatility, change impact | # Initial, current, final requirements | SRS | natural numbers | ratio | bigger, smaller, equal to, addition, subtraction, etc. | >, <, =, +, -, etc.
Requirements | size of changes per requirement | stability, volatility, change impact | # Changes per requirement | requirements | natural numbers | ratio | bigger, smaller, equal to, addition, subtraction, etc. | >, <, =, +, -, etc.
Requirements | status of requirements specifications | stability | Status of each requirement | requirements | (see Figure 4) | nominal | equal and different | =, <>
Change to requirement | progress of change requests in life cycle | stability | Status of change to requirements | changes to requirements | (see Figure 4) | ordinal | equal and different | =, <>
Change to requirement | change classification | traceability, consistency, stability, change impact | Type of change to requirements | changes to requirements | (see paragraph 4.3) | nominal | equal and different | =, <>
Change to requirement | rationale for changes | stability, change impact | Reason of change to requirements | changes to requirements | (see paragraph 4.3) | nominal | equal and different | =, <>
Change to requirement | duration | cost, effort of change to requirements | Time | changes to requirements | hours | interval | bigger, smaller, equal to, addition, subtraction, etc. | >, <, =, +, -, etc.
Change to requirement | resource | cost, change impact | People | changes to requirements | natural numbers | absolute | bigger, smaller, equal to, addition, subtraction, etc. | >, <, =, +, -, etc.


The projects were run during a period of 20 weeks, which was divided into two parts (iterations). The software developed in these projects was delivered to the customers. There were two customers: the teacher of the OOPU course and another customer external to the Department of Computing Science, where the study was performed. Each team received a form with a list of measures to be collected. There were two different forms, which differed in the amount of data to be collected. The reason for this difference is that we thought that the cost of a change to a requirement depends on the type of requirement and on the type of change to requirements. The data collection was performed differently by the two teams. One team (Team B) made intuitive predictions of the cost of changes to requirements during the first and second iterations. The plan for the other team (Team A) was to make intuitive predictions in the first iteration.

In the second iteration their predictions should have been based on historical data. However, Team A did not have changing requirements in the second iteration; therefore they were not able to make predictions at all.

5.3 Risks and Limitations of the Study

In performing this case study, three major risks could become negative events:
• Classroom projects are usually stable and well defined. There was therefore the risk that the projects under study would not have changing requirements and historical data. To minimise this risk, we met the customers to be sure that this would not happen.
• The subjects might not be engaged enough to collect data. The consequence of this would be a lack of data from the participants. To minimise this risk we decided to offer an inducement to the participants of the study.
• The participants could become aware of the importance of managing requirements because they know that they are participating in a case study. The students might work harder to get good effort estimations. This problem is known as the Hawthorne (or observer) effect [16]. When project personnel become aware that their work is being monitored, their behaviour will change.

Other small risks were the possibility that the subjects would use a formal model for cost predictions, like COCOMO, affecting the results of the case study. There was also the possibility that the participants would use requirements management tools or consult a requirements management expert. The students could have misunderstood the terminology used in the forms; this risk was low because we kept e-mail contact with the students throughout the duration of the study. The personnel experience was not investigated before the start of the study, and the group constellations were decided by the participants. This could have caused the groups not to have uniform personnel experience in software development (the experience could have been unevenly distributed). Therefore the results of the study could have been affected by the groups' experience. These risks were minimal and were managed by interviewing the students and browsing their course site.

As mentioned in [2], there are other general risks common to all empirical studies: for instance, the danger of interpreting the results without attempting to understand factors in the environment that might affect the data, underestimating the resources needed to validate and analyse the data, or associating measures with the wrong scale type and consequently analysing data with the wrong statistical test [7]. During the validation process, we might not be able to say how much data will never be reported. As before, these risks were minimal.

[Figure 5: Experiment Process [32]. The process comprises Definition; Planning (context selection, hypothesis formulation, variables selection, selection of subjects, experiment design, instrumentation, validity evaluation); Operation (preparation, execution, data validation); Analysis and interpretation (descriptive statistics, data set reduction, hypothesis testing); and Presentation and package, producing the experiment definition, experiment design, experiment data, conclusions, and experiment report.]


5.4 Definition of the Case study

The definition of the case study determines why the case study is conducted. By applying the GQM template [1] for the goal definition we obtained the following:
Analyse: the Requirements Management process area in the Objekt Orienterad Program Utveckling (OOPU) course.
With the purpose of: evaluating the impact of software measurement on the prediction of the cost of changes to requirements.
Quality focus: the effectiveness of using historical data in predictions of the cost of changes to requirements, compared to intuitive predictions.
Perspective: academia.
Context: the study is conducted in an object-oriented software development course. The context of the study is described below in more detail.

5.5 Planning of the Case Study

The planning of a case study determines how the study is conducted. The design of the case study is decided in this phase. We could identify a causal relationship in our case study idea: the use of historical data for predictions of the cost of changes to requirements has the effect of increased precision in cost predictions.

For accuracy reasons, our plan was to perform data collection and data validation concurrently. The validation consisted of checking the forms for correctness, consistency, completeness, omissions, and miscategorisation. In cases where the checks revealed problems, participants were interviewed. As suggested by [2], such validity checks resulted in corrections to the data that could not have been made at a later stage, due to the nature of human memory.

The context of the study was a course (OOPU) focused on object-oriented approaches. Methods, languages, and tools that support these approaches were discussed and applied. The work was documented step by step in workbooks and project reports. The course was project oriented, and the projects were of small to medium size (the total effort consisted of ten thousand LOC developed by 12 persons in a month). The projects under study were run in an off-line environment (non-industrial software development); see [9] for details about the course and the development process used in the course.

The subjects of the case study were students in the 3rd to 5th year of a Computing Science education program at Umeå University, Sweden. The subjects were attending the OOPU course described above. Some of the students had previous background knowledge in software development acquired outside the university. Therefore, the participants were semi-professional and non-professional developers. The students were organised into four teams of about six members.

Each team worked through all phases of the software development process, from project proposal to the implementation and delivery of a working prototype. According to our plan, all four groups should have participated in the study. However, only two groups collected data. The developed applications were "Inredaren" and "UmUportal". The first is a floor planning system that provides the user with a window-based user interface; it has two main functions, floor drawing and floor furnishing. UmUportal is a personal portal for Umeå University. The students used Rational Rose to support analysis and design. The cost of a change was calculated across all the activities and was affected by the tool.

The study was specific, since it was focused on managing requirements. It addressed a real problem, i.e., the difference in predicting the cost of changes to requirements with or without historical data. The objects under study were the software requirements specifications (SRS) and the software process used. The null hypothesis H0, the one to be rejected, was the following: the cost predictions made by using historical data are at most as good as the intuitive predictions. The alternative hypothesis H1 was the following: the cost predictions made by using historical data are better than intuitive predictions.

One independent variable was the personnel experience. The project was another independent variable, with values "Inredaren" and "UmUportal". There was one factor, cost predictions, and two treatments (the values of the factor): intuitive cost predictions and controlled cost predictions. The dependent variable was "precision of cost predictions". All the students participating in the OOPU course were selected. The students were divided into 4 groups. Each group had at least one team manager and one requirements engineer. Each student in the group could assume different roles during the project. The design type chosen was one factor with two treatments.¹

The design principle followed was balancing, i.e., assigning the treatments so that each treatment had an equal number of subjects. In our case we planned that two groups of students should work on the development of a student portal and the other two groups should develop a floor planning system. The forms were assigned such that each project could be evaluated with two different forms. Two teams should have performed intuitive cost predictions, and two teams should have performed cost predictions using historical data. The kind of statistic chosen for the measures # initial, current, final requirements, # changes to requirements, time, and # people was parametric statistics, because, as suggested by [22], these measures reach at least the interval level.

1. The design type chosen is 2 factors-2 treatments if we consider the project to be another variable of the case study.


by [22], these measures reach at least the interval level. Non-parametric statistical tests make less stringent assumptions. For the measures type of change to requirement, reason of change to requirement, status of requirement, and status of change to requirement, we chose non-parametric tests because these measures only reach the nominal or ordinal level.

The instruments of a study are usually of three types: objects, guidelines, and measurement instruments. The only instruments used during this study were e-mail forms in plain text. Plain-text forms were chosen for their simplicity and the flexibility they allow in the answers. Other ways of collecting data were considered, such as Excel forms and HTML forms available on-line, but these were discarded as more complex and less flexible. Team A (the controlled group) used a form containing a detailed list of measures to collect and predicted the cost of changes to requirements based on the collected data and on historical data. Team B used a form containing a simplified list of measures and made intuitive estimations of the cost of changes to requirements. The forms, which were informally reviewed, are shown in Appendices 10.1 and 10.2.
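To illustrate this choice of statistics, the sketch below applies a parametric test to an interval-level measure and a non-parametric test to a nominal-level measure using the SciPy library; all sample values are invented, and the sketch is not the analysis actually performed in the study.

# Sketch: parametric vs. non-parametric tests for the two kinds of measures.
# All sample values are invented for illustration.
from scipy import stats

# Interval-level measure (e.g. time spent per change, in minutes):
# a parametric test such as the independent-samples t-test can be used.
time_team_a = [30, 45, 60, 20, 50]
time_team_b = [90, 120, 60, 150, 100]
t_stat, p_value = stats.ttest_ind(time_team_a, time_team_b)
print(f"t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Nominal-level measure (e.g. type of change to requirement):
# a non-parametric test such as the chi-square test of independence applies.
# Rows: teams; columns: counts of change types (correction, completion, improvement).
contingency = [[5, 3, 2],
               [2, 6, 4]]
chi2, p_value, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square: chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")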

5.6 Case Study Operation

The preparation of the subjects was done during a lecture where we explained the background of the research and the case study goals, and showed the forms to be filled in. During the execution of the case study, the main concern was to ensure that the study was conducted according to plan. The case study was executed over a period of four months (from September 2002 to January 2003). The costs were limited to a small inducement. The measures used during the study are the ones shown in Table 2; we measured requirements at use-case granularity.

5.7 Analysis and Interpretation

The study did not succeed as expected, for two main reasons. First, among the four groups performing a project in the OOPU course, only two teams participated in the case study; even an inducement did not convince the other two teams to collect data. Second, the projects did not have many changing requirements. The original goal was that the controlled group should make cost estimations in the second iteration using historical data. We expected meetings between the customer and the developers to validate the requirements and, as a consequence, a series of change requests from the customer. Unfortunately, the controlled group had little contact with the customer to validate their requirements, and this affected the results of the study.

During the data collection process there was e-mail contact with the students about how to collect the time and people measures and about which changes to use cases were relevant to the study. We decided not to collect data on changes to use cases that took less than fifteen minutes to perform. There were also some misunderstandings about how the data should be collected; in fact, the changes to use cases were collected in two different ways by the two teams. Another discussion with the students concerned the use case states: some use cases were “reused”, others were “subcontracted”. We had not foreseen these states, so they were not present in the forms provided to the students.

Descriptive statistics help us to understand and interpret data informally, and are the first step to apply after collecting the data. Descriptive statistics may be used to describe and graphically present interesting aspects of a data set [32].

By comparing the data collected by the two teams (Figure A.1, Figure A.2, Figure A.6, and Figure A.7), we can observe that the graphs are similar. Figure A.1 shows the measure # use cases per week for both teams, which is connected to the stability of the requirements. Team A has a period during the first five weeks of the project when the requirements change; afterwards, Team A's requirements are very stable. The same situation can be observed in Figure A.6 (Team A use cases per week): Team A has some changes to requirements only during week 41, and during the second iteration the # changes is equal to zero. Observing Team B's use cases in Figure A.1, the two iterations into which the project is divided are clearly visible: the requirements change during the first two weeks of the first iteration and between the two iterations. The same behaviour can be observed in Figure A.2, where the # changes to requirements for Team B is highest at the beginning of the first iteration and decreases to zero, with further changes between the first and second iterations. A similar situation can be observed in Figure A.7, which shows the measure Team B # use cases per week. The mode of the # changes per week is equal to zero for both groups.

In Figures A.3, A.4, and A.5 we can observe the difference between the actual and expected cost (time, people, effort) for Team B. There is no real difference in estimation quality between the first and second iterations; the estimated values are quite similar to the actual values.

Finally, in Figures A.8 and A.9 we can observe the status of the use cases for Team A and Team B. Among the proposed status values, Team A introduced two new states, “deleted” and “reused”, while it never used the states “reviewed” and “tested”. Similarly, Team B used two non-suggested states, “deleted” (as Team A) and “subcontracted”.
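As an illustration of this first step, the following sketch computes simple descriptive statistics for a weekly change count; the counts used here are hypothetical stand-ins for the data plotted in Figures A.1 to A.9.

# Sketch: descriptive statistics for a weekly measure, using hypothetical counts.
import statistics

# Hypothetical "# changes to use cases" per project week for one team.
weeks = [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]
changes_per_week = [0, 11, 4, 2, 0, 0, 0, 0, 9, 2, 0]

print("mean  :", round(statistics.mean(changes_per_week), 2))
print("median:", statistics.median(changes_per_week))
print("mode  :", statistics.mode(changes_per_week))   # zero is the most frequent value
print("stdev :", round(statistics.stdev(changes_per_week), 2))

# A simple textual plot of changes per week, analogous in spirit to Figure A.2.
for week, n in zip(weeks, changes_per_week):
    print(f"week {week:2d}: {'#' * n}")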


6. Discussion and Conclusion

In the previous sections we have validated the requirements management process measures shown in Table 2 and then used these measures in a case study performed in a university environment. The goal of the study was to demonstrate that cost predictions made by means of historical data are better than intuitive predictions.

We encountered some difficulties in performing the validation of the measures. For example, it was sometimes difficult to find internal attributes and to distinguish between internal attributes and direct measures (as pointed out by [32]). It was also difficult to verify the representation condition for the status measures and for some other measures. In the case of changes to requirements, it was difficult to compare the size of the changes made to two requirements because we did not have a formal definition of the size of a change. Furthermore, the list of attributes and of empirical relations for the attributes might not be complete.

As pointed out earlier, the study did not succeed, for several reasons. However, we have learned some lessons, which are listed below.

• When doing measurement it is very important to define the entities carefully. In our case, for instance, it was difficult to decide what a requirement is, what granularity a requirement should have when we measure it, and what format a requirement should have. All these issues must be taken into account when measuring; otherwise it is difficult to compare the results of the measurement activities.

• An inducement is not enough to engage students to participate in a case study. More effort should be spent on committing the participants and obtaining their consent.

• More effort should be spent on pushing students and customers to discuss their requirements. The subjects belonging to the controlled group and their customer had few discussions about requirements; we expected validation of the requirements from the students' side and requests for change from the customer's side.

• No restrictions should be put on data collection, especially in student projects, which are more stable than real (on-line) projects. The decision not to collect data about small modifications of use cases was probably a mistake, because it decreased the number of recorded changes considerably. However, the time spent reporting the data should not exceed the time spent implementing the change; the best solution would be an automatic data collection tool, so that the time to register data does not exceed the actual time of work.

• With regard to the measure definitions, new values for the “status of requirements” measure were introduced (cancelled, reused, and subcontracted). These new values turned the measurement scale of the status of requirements from ordinal into nominal, which can affect the kind of statistics we choose. For other nominal and ordinal measures new values were introduced as well, and some values were not used.

• The measure # changes per requirement was collected in different ways by the two groups. The reason for all these misunderstandings may be a non-strict definition of the mapping rules and a consequent confusion among the participants collecting the data.

Although we were not able to complete our study successfully, this work can be useful as an example of how to perform a case study in a university environment and which risks should be taken into account. Furthermore, much research has been done to define properties of valid software measures but, to our knowledge, few works show examples of applying these properties to concrete measures. Our work tries to fill this gap.

7. Future Work

As future work we plan to improve the present research in several ways.

As stated earlier, each measure should be part of a prediction system, i.e., for each measure we should demonstrate that the external attribute is a function of the internal one. Table 2 can be the starting point for creating such prediction models; a sketch of what calibrating such a model on historical data could look like is given at the end of this section.

Some concepts should be studied more deeply. For instance, is it reasonable to speak about the size of requirements, or are we really considering the size of the software/project instead? Some definitions need to be formalised, such as change to requirement and size of change. The study of the entities and of the relationships among the entities is another topic that can be investigated. We can also validate the remaining measures of the 38 defined in [14]; these measures need to be tailored to the specific organisation.

We plan to perform an empirical study in a medium-size company in Sweden. We will analyse historical projects at this company and, based on this analysis, design a suitable requirements management process for it. After that we will compare the historical data with the actual data obtained by following the new requirements management process. Finally, we will create a baseline which can be used as a reference for comparisons in future projects.

Another empirical study we would like to perform is to demonstrate that the size of software systems is proportional to the size of their requirements. In other words, if R1 and R2 are two requirements with size(R1) < size(R2), then the software systems S1 and S2 generated by these requirements are such that size(S1) < size(S2).
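Returning to the idea of a prediction system mentioned above, the following sketch shows what calibrating a simple prediction model on historical change data could look like; the chosen measures, the linear model, and all data values are hypothetical and only illustrate the idea of replacing intuitive estimates with predictions derived from historical data.

# Sketch: a minimal prediction model calibrated on historical change data.
# Measures and values are hypothetical; a real model would start from Table 2.
import numpy as np

# Historical changes: [# people involved, estimated hours] -> actual hours.
X_hist = np.array([[1, 2.0], [2, 4.0], [1, 1.0], [3, 8.0], [2, 5.0]])
y_hist = np.array([2.5, 5.0, 1.5, 10.0, 6.0])

# Least-squares fit of actual cost as a linear function of the measures.
A = np.hstack([X_hist, np.ones((len(X_hist), 1))])   # add intercept column
coeffs, *_ = np.linalg.lstsq(A, y_hist, rcond=None)

def predict_cost(people: int, estimated_hours: float) -> float:
    """Predict the actual cost (hours) of a new change from historical calibration."""
    return float(coeffs @ np.array([people, estimated_hours, 1.0]))

print(f"predicted cost for a 2-person, 3-hour change: {predict_cost(2, 3.0):.1f} h")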

8. Acknowledgement

Thanks to the students of the OOPU course who participated in the study and helped by collecting data. In particular we would like to thank Magnus Andersson, Marcus Bergner, Martin Englund, Markus Holmberg, Mattias Nordin, Fredrik Åslund, Jan Svensson, Olof Burman, Josef Israelsson, Hans Olofsson, and Rickard Melkersson.

9. References

[1] Basili, V.R., and Rombach, H.D., The TAME Project: Towards Improvement-Oriented Software Environments, IEEE Transactions on Software Engineering, 14(6), pp. 758-773, 1988.

[2] Basili, V.R., and Weiss, D.M., A Methodology for Collecting Valid Software Engineering Data, IEEE Transactions on Software Engineering, 10(6), November 1984.

[3] Basili, V.R., Selby, R.W., and Hutchens, D.H., Experimentation in Software Engineering, IEEE Transactions on Software Engineering, 12(7), July 1986.

[4] Baumert, J.H., and McWhinney, M.S., Software Measures and the Capability Maturity Model, Software Engineering Institute Technical Report, CMU/SEI-92-TR-25, ESC-TR-92-0, 1992.

[5] Berry, M., and Vanderbroek, M.F., A Targeted Assessment of the Software Measurement Process, Proceedings of the Seventh International Software Metrics Symposium, London, UK, April 2001.

[6] Briand, L.C., El Emam, K., and Morasca, S., Theoretical and Empirical Validation of Software Product Measures, International Software Engineering Research Network Technical Report, #ISERN-95-03, 1995.

[7] Briand, L.C., El Emam, K., and Morasca, S., On the Application of Measurement Theory in Software Engineering, Journal of Empirical Software Engineering, 1(1), 1996.

[8] Briand, L.C., Morasca, S., and Basili, V.R., An Operational Process for Goal-Driven Definition of Measures, IEEE Transactions on Software Engineering, 28(12), December 2002.

[9] Börstler, J., Experiences with Work-Product Oriented Software Development Projects, Journal of Computer Science Education, 11(2), pp. 111-133, June 2001.

[10] Fenton, N.E., and Pfleeger, S.L., Software Metrics - A Rigorous & Practical Approach, 2nd Edition, International Thomson Publishing, Boston, MA, 1996.

[11] Hammer, T.F., Huffman, L.L., and Rosenberg, L.H., Doing Requirements Right the First Time, CrossTalk, pp. 20-25, December 1998.

[12] Kitchenham, B., Pfleeger, S.L., and Fenton, N., Towards a Framework for Software Measurement Validation, IEEE Transactions on Software Engineering, 21(12), December 1995.

[13] Kotonya, G., and Sommerville, I., Requirements Engineering Processes and Techniques, John Wiley & Sons, first print 1998, reprint 2002.

[14] Loconsole, A., Measuring the Requirements Management Key Process Area, Proceedings of ESCOM - European Software Control and Metrics Conference, London, UK, April 2001.

[15] Loconsole, A., Non Empirical Validation of Requirements Management Measures, ICSE Workshop on Software Quality, May 2002.

[16] Mayo, E., The Human Problems of an Industrial Civilization, New York: MacMillan, ch. 3, 1933.

[17] Melton, A.C., Gustafson, D.A., Bieman, J.M., and Baker, A.L., A Mathematical Perspective for Software Measures Research, Software Engineering Journal, September 1990.

[18] Morasca, S., Briand, L.C., Basili, V.R., Weyuker, E., and Zelkowitz, M.V., Comment on Towards a Framework for Software Measurement Validation, IEEE Transactions on Software Engineering, 23(3), March 1997.

[19] Paulk, M.C., Weber, C.V., Garcia, S., Chrissis, M.B., and Bush, M., Key Practices of the Capability Maturity Model Version 1.1, Software Engineering Institute Technical Report, CMU/SEI-93-TR-25, ESC-TR-93-178, Pittsburgh, PA, 15213-3890, USA, February 1993.

[20] Pfleeger, S.L., Software Engineering: Theory and Practice, Prentice Hall, Upper Saddle River, New Jersey, 1998.

[21] Pfleeger, S.L., Jeffery, R., Curtis, B., and Kitchenham, B.,Status Report on Software Measurement, IEEE Software,14(2), 1997.

[22] Raynus, J., Software Process Improvement with CMM, Artech House Publishers, Boston, 1999.

[23] Reifer, D.J., Requirements Management: The Search for Nirvana, IEEE Software, May/June 2000, pp. 45-47.

[24] Rosenberg, L.H., and Hyatt, L., Developing an Effective Metrics Program, European Space Agency Software Assurance Symposium, the Netherlands, March 1996.

[25] Salzer, A., Atrs Writing Workshop Handbook, QWE'99, Brussels, Belgium, July 1999.

[26] Shneidewind, N.F., Methodology for Validating Software Metrics, IEEE Transactions on Software Engineering, 18(5), May 1992.

[27] Sommerville, I., and Sawyer, P., Requirements Engineering: A Good Practice Guide, Wiley, June 2000.

[28] The Standish Group, The CHAOS Report, Dennis, MA: The Standish Group, 1994.

[29] van Solingen, R., and Berghout, E., Integrating Goal-Oriented Measurement in Industrial Software Engineering: Industrial Experiences with and Additions to the Goal/Question/Metrics Method (GQM), Proceedings of the Seventh International Software Metrics Symposium, London, UK, April 2001.

[30] Zuse, H., A Framework of Software Measurement, Walter de Gruyter, 1998.

[31] Weyuker, E., Evaluating Software Complexity Measures, IEEE Transactions on Software Engineering, 14(9), pp. 1357-1365, 1988.

[32] Wohlin, C., Runeson, P., Höst, M., Ohlsson, M.C., Regnell, B., and Wesslén, A., Experimentation in Software Engineering: An Introduction, Kluwer Academic Publishers, Boston/Dordrecht/London, 2000.

10. Appendix

10.1 Form 1

The following form was distributed by e-mail to the controlled group, named Team A.

• Project group name: ________
• Week: ______
• Number of current use cases (right now): __________

Please identify the use cases with a number and for each use case write:

• Use case number: __________
• Use case status: ____________
where status can be: started, documented, reviewed, implemented, tested, delivered, other (specify).
• Number of changes to use case: _______

Please identify the changes with a number and for each change to a use case write:

• Change number: _______________
• Status of change: ______________
where status can be: proposed, approved, rejected, designed, implemented, tested, other (specify).
• Type of change: _______________
where type of change can be: change in delivery date, change in functionality: correction (something was wrong), completion (something was missing), improvement (the use case can be rewritten in a better way), adaptation (caused for instance by new laws or new technology), other (specify).
• Reason of change: ______________
where reason can be: new government/organisation regulations, misunderstanding in original analysis, ambiguous specification, incomplete specification, wrong specification, new (customer) requirement, other (specify).

Estimated cost of change, where estimated cost of change is
• total time the team plans to work on the change, in hours: __
• number of people who will work on the change: _______

Actual cost of change, where actual cost of change is
• total time the team has spent working on the change, in hours: __________
• number of people who have worked on the change: _____

Other comments: ____________

10.2 Form 2

The following form was distributed by e-mail to the group named Team B.

• Project group name:________

• Week:______

• Number of current use cases (right now): ___________

Please identify the use cases with a number and for each use case write:

• Use case number: __________

• Use case status: ____________
where status can be: started, documented, reviewed, implemented, tested, delivered, other (specify).

• Number of changes to use case: _______

Please identify the changes with a number and for each change to a use case write:

• Change number:_______________

Estimated cost of change, where estimated cost of change is

• total time the team plans to work on the change, in hours: __

• number of people who will work on the change:_______

Actual cost of change, where actual cost of change is

• total time the team has spent working on the change, in hours: __________

• number of people who have worked on the change: _____

Other comments: ____________
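To make the data validation checks described in the case study planning concrete, the answers collected through the two forms above could be transcribed into records such as the following; the field names mirror the forms, but the record structure and the validation rules are a hypothetical illustration, not the procedure used in the study.

# Sketch: a possible machine-readable record for one change reported on the forms.
# Field names follow the forms above; the validation rules are illustrative only.
from dataclasses import dataclass
from typing import Optional

STATUS_OF_CHANGE = {"proposed", "approved", "rejected", "designed",
                    "implemented", "tested", "other"}

@dataclass
class ChangeRecord:
    team: str                      # project group name
    week: int                      # calendar week the change was reported
    use_case_id: int
    change_id: int
    status_of_change: str
    estimated_hours: Optional[float]
    estimated_people: Optional[int]
    actual_hours: Optional[float]
    actual_people: Optional[int]

def validate(record: ChangeRecord) -> list[str]:
    """Return a list of problems found in a single record (empty if none)."""
    problems = []
    if record.status_of_change not in STATUS_OF_CHANGE:
        problems.append(f"unexpected status: {record.status_of_change}")
    if record.actual_hours is not None and record.actual_hours < 0:
        problems.append("negative actual time")
    if record.estimated_hours is None and record.actual_hours is None:
        problems.append("neither estimated nor actual time reported")
    return problems

example = ChangeRecord("Team A", 41, use_case_id=3, change_id=1,
                       status_of_change="postponed",   # a value outside the suggested list
                       estimated_hours=2.0, estimated_people=1,
                       actual_hours=3.0, actual_people=2)
print(validate(example))   # -> ['unexpected status: postponed']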


10.3 Use Case Template

During the case study, the use cases were written by the participants according to the template shown in Table 3.

Table 3: Use Case Template

Fields                        Description
Use Case                      Memorable name (active verb-goal phrase) and unique ID
Description                   Short description of the goal to be achieved
Actors                        List of actors involved in the use case
Preconditions/assumptions     Conditions that must be satisfied to trigger the use case
Main success scenario         Sequence of interactions between the actor(s) and the system
Post condition                Conditions that are satisfied after successful completion
Non-functional requirements   Non-functional requirements related to the use cases (may be described in detail elsewhere)
Open issues                   List of issues that have to be solved
History                       Modification history

10.4 Descriptive Statistics

Figure A.1: Team A and Team B number of use cases per week (weeks 38-52, first and second iteration; y-axis: # use cases; series: Team A use cases, Team B use cases).


Figure A.2: Team A and Team B number of changes per week (weeks 38-49, first and second iteration; y-axis: # changes; series: Team A total # changes, Team B total # changes).

Figure A.3: Team B time spent on changing use cases (weeks 38-48, first and second iteration, with # changes per week; y-axis: minutes; series: estimated time, actual time).


Figure A.4: Team B number of people who changed use cases (weeks 38-48, first and second iteration, with # changes per week; y-axis: # people; series: estimated people, actual people).

Figure A.5: Team B effort in changing use cases (weeks 38-48, first and second iteration, with # changes per week; y-axis: person-hours; series: estimated effort, actual effort).


Figure A.6: Team A number of use cases stable, changed, and deleted per week (weeks 38-52, first and second iteration).

Figure A.7: Team B number of use cases changed, stable, and deleted per week (weeks 38-53, first and second iteration).


Figure A.8: Team A use cases status per week (weeks 39-54 and 1-3, first and second iteration; x-axis: # use cases; status values: deleted, started, documented, implemented, reused, delivered).

Figure A.9: Team B use cases status per week (weeks 38-54 and 1-3, first and second iteration; x-axis: # use cases; status values: deleted, subcontracted, reviewed, documented, started, implemented, tested, delivered).
