
Lutess: a testing environment for synchronous software


L. du Bousquet†, F. Ouabdesselam†, I. Parissis‡, J.-L. Richier†, N. Zuanon†

† LSR-IMAG, BP 72, 38402 St-Martin-d'Hères, France — {ldubousq, ouabdess, richier, [email protected]
‡ France Telecom - CNET, 28 chemin du Vieux Chene, 38243 Meylan, France — [email protected]

* This work has been partially supported by a contract between CNET-France Telecom and University Joseph Fourier, #957B043.

Abstract. Several studies have shown that automated testing is a promising approach to save significant amounts of time and money in the industry of reactive software. But automated testing requires a formal framework and adequate means to generate test data. In the context of synchronous reactive software, we have built such a framework and its associated tool, Lutess, to integrate various well-founded testing techniques. This tool automatically constructs test harnesses for fully automated test data generation and verdict return. This paper describes the four black-box testing techniques which are coordinated in Lutess' uniform framework.

1 Introduction

Testing receives increasing attention from research teams working on formal techniques for software specification, development and verification, for two reasons. First, testing appears to be the only means to perform the validation of a piece of software when formal verification is impracticable because of lack of memory and/or time. Second, testing brings a practical solution to the specifications themselves. It can help one get confidence in the consistency and relevance of the specifications. It can also reveal discrepancies between the specifications and the specifier's intentions.

So, testing is more and more often used jointly with, and in complement to, formal verification [2]. Besides, to be a significant support to validation, the testing techniques must either provide a basis for reliability analysis [8], or be aimed at revealing errors in the software application to improve its correctness.

In this paper, we present Lutess, a tool for testing synchronous reactive software. Lutess provides a formal framework based on the use of the Lustre language [3]. It embodies several testing techniques: random testing with or without operational profiles [10, 4], specification-based testing [12] and behavioral pattern-oriented testing [11].

Section 2 introduces the issue of testing reactive software and presents Lutess from the tester's viewpoint. Section 3 describes Lutess' functional generation methods, while section 4 presents their formal definitions and section 5 gives some aspects of their implementation. Section 6 briefly considers Lutess' applicability through its actual experimentations.

2 Testing reactive systems

2.1 Specific attributes of reactive systems

An important feature of a reactive system is that it is developed under assumptions about the possible environment behavior. For example, in a steam-boiler, the physical phenomenon evolves at a low speed. This makes it impossible for the temperature to rise, at once, from a low level to an emergency situation. Consider a program which monitors the temperature through three input signals (temperatureOK, BelowDesiredTemperature, AboveDesiredTemperature). This program will observe a gradual evolution of the parameters. Thus the input domain of the reactive program is not equal to the Cartesian product of the input variable domains. Not only is it restricted to subsets of this product but, in addition, the environment properties rule out some sequences of successive values for some input variables. The environment properties constrain the test data which are to be generated.

Besides, testing reactive systems can hardly be based on manually generated data. The software input data depend on the software outputs produced at the previous step of the software's cyclic behavior. Such a process requires an automatic and dynamic generation of input data sequences. An input data vector must be exhibited at every cycle.

2.2 Lutess: operational principle

The operation of Lutess requires three elements: a constrained random generator, a unit under test and an oracle (as shown in Figure 1). Lutess automatically constructs the test harness which links these three components, coordinates their executions and records the sequences of input-output relations and the associated oracle verdicts. The three components are just connected to one another and not linked into a single executable code.

The unit under test and the oracle are both synchronous and reactive programs, with boolean inputs and outputs. Optionally, they can be supplied as Lustre programs. The generator is automatically built by Lutess from Lustre specifications. These specifications are grouped into a specific syntactic unit, called a testnode. This notion of testnode has been introduced as a slight extension to Lustre [13].

Figure 1. Lutess (the constrained random generator, the unit under test and the oracle, linked by the test harness)

The aforecited specifications correspond to environment constraints and possibly properties which serve as test guides (guiding properties). The environment constraints define the valid environment behaviors. Guiding properties define a subset of the environment behaviors which is supposed to be interesting for the test. Both are exhibited by the user. Notice that, from the tester's viewpoint, guiding properties are considered as an environment extension. This allows us to have a uniform framework to define both environment constraints and testing guides. Moreover, all the testing techniques perform a random generation of values from a domain according to constraints. For this reason, they are all implemented as constrained random generation techniques.

The test is operated on a single action-reaction cycle, driven by the generator. The generator randomly selects an input vector for the unit under test and sends it to the latter. The unit under test reacts with an output vector and feeds back the generator with it. The generator proceeds by producing a new input vector and the cycle is repeated. The oracle observes the program inputs and outputs, and determines whether the software specification is violated. The testing process is stopped when the user-defined length of the test sequence is reached.

The construction of the generator is carried out in two steps. First, the environment description (i.e. the environment constraints and the guiding properties) is compiled into a finite state automaton [9]. This automaton recognizes all input and output value sequences satisfying the environment constraints. Then, this automaton is made nondeterministic and transformed into a generator.
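The action-reaction cycle described above can be pictured with a small driver loop. The sketch below is illustrative only: the object interfaces (choose_input, step, check) are assumptions made for this sketch, not the Lutess interface; it merely shows how the generator, the unit under test and the oracle are coordinated at each synchronous step.

    import random  # not used directly here; the generator performs the random choice

    def run_test_sequence(generator, unit_under_test, oracle, length):
        """One test run: at each cycle the generator picks an input vector,
        the unit reacts, and the oracle checks the input-output pair."""
        verdicts = []
        outputs = None                                 # no reaction before the first cycle
        for _ in range(length):
            inputs = generator.choose_input(outputs)   # constrained random choice
            outputs = unit_under_test.step(inputs)     # synchronous reaction
            verdicts.append(oracle.check(inputs, outputs))
        return verdicts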

Lutess has a user-friendly interface, which offers the user an integrated environment:

- to define the testnode, the oracle and the unit to be tested,
- to command the construction of the test harness, to compile Lustre programs, and to build constrained random generators,
- to run the testing process, to set the number and the length of the data sequences, and to replay a given sequence with a different oracle,
- to visualize the progression of the testing process and to format the sequences of inputs, outputs and verdicts,
- to abstract global results from the sequences for both testing efficiency analysis and software reliability evaluation.

Example

Let us consider the validation of a telephony system simulation program (fig. 2).

Figure 2. Telephony system simulation program and its environment interface

The environment of this program is composed of physical telephones. It is characterized by the events issued by the devices (On-Hook, Off-Hook, Dial, ...) and the signals emitted by the program (e.g. the classical tones such as Dialing-tone, Busy-tone, ...).

Examples of environment constraints:
- at most one action can be carried out on a phone at each instant of time,
- one cannot go off (resp. on) the hook twice without going on (resp. off) the hook in between,
- one can dial only after the phone has received a Dialing tone.

Examples of oracle properties:
- outputs are correctly produced,
- a phone never receives two different tones at the same time (tone outputs are mutually exclusive).

In the following section, this example is detailed to illustrate the different testing methods.
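For illustration only, the second environment constraint above could be encoded as an executable predicate over the previous and current events of a single phone. The variable names (was_off_hook, on_hook, off_hook) are assumptions made for this sketch; they are not Lutess or Lustre identifiers.

    def hook_alternation_ok(was_off_hook, on_hook, off_hook):
        """Constraint sketch: a phone may go off hook only if it was on hook,
        and on hook only if it was off hook (no two Off-Hook events in a row)."""
        if off_hook and was_off_hook:
            return False        # already off hook: a second Off-Hook is invalid
        if on_hook and not was_off_hook:
            return False        # already on hook: a second On-Hook is invalid
        return True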

3 Testing methods

This section presents the various testing techniques provided by Lutess.

Basic testing

This method corresponds to an environment simulator. Test data are selected only according to the environment constraints. Therefore, the test data selection criterion is the weakest one can define for synchronous software. The test data generation is performed in such a manner that the data distribution is uniform. However, for complex systems, a uniform distribution is far from reality: realistic environment behaviors may be a small subset of all valid behaviors. The following three methods aim at providing a solution to this problem.

Statistical testing

In [15], Whittaker has shown that, to be useful, the description of the operation modes of a piece of software must contain multiple probability distributions of input variables.

The problem with the operational profile is that the user should define it completely. Usually, building a complete operational profile is a useless effort for two reasons. In industrial contexts, the specifications are most often incomplete. Furthermore, the user has only a partial knowledge of the environment characteristics.

To bypass this drawback, Lutess offers facilities to define a multiple probability distribution in terms of conditional probability values associated with the input variables of the unit under test [10]. The variables which have no associated conditional probabilities are assumed to be uniformly distributed. An algorithm to translate a set of conditional probabilities into an operational profile (and vice versa) is described in [4].

Examples of conditional probabilities:
- the probability that someone dials his own number is low,
- the probability for someone to go on the hook is high when the busy tone has been received.
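As a minimal sketch (not the Lutess implementation), a conditional probability list can be represented as (probability, condition) pairs per input variable and used to bias the draw of each variable; all names below, including the example profile, are assumptions made for the illustration.

    import random

    def draw_input(conditional_probs, state, default_prob=0.5):
        """Draw a boolean value for each input variable.  conditional_probs maps a
        variable name to a list of (probability, condition) pairs; the first pair
        whose condition holds in the current state fixes P(variable = True)."""
        inputs = {}
        for var, rules in conditional_probs.items():
            prob = default_prob
            for p, condition in rules:
                if condition(state):
                    prob = p
                    break
            inputs[var] = random.random() < prob
        return inputs

    # Example: going on hook becomes very likely once the busy tone has been received.
    profile = {"OnHook": [(0.9, lambda s: s.get("BusyTone", False))]}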

Property-oriented testing

A tester may want to test a program against important program properties regardless of any input distribution. In this case, the testing process is directed toward the violation of the properties. Such properties are, for example, safety properties which state that something bad never happens. Random generation is not well adapted when the observation of such properties corresponds to very few data sequences. The actual aim of property-oriented testing is to analyze those properties and to automatically generate relevant input values, i.e. values that are the most liable to cause a failure with respect to these properties.

Let us consider the simple property P: i ⇒ o, where i is an input and o is an output. P states that o must be true every time i is true. P holds whenever i = false. So, when i is false, the program has no chance to violate P. Hence, to have a good chance to reveal an error, the interesting value for i is true [12]. A similar analysis is carried out by an automated analyzer on any safety property, allowing it to characterize the relevant input values for the property to be taken into account.

It must be noticed that property-oriented testing is always applied with test data which satisfy the environment constraints.

This technique is however limited, since it consists in an instantaneous guiding. If we consider a safety property like pre i ⇒ o (where pre i is a Lustre expression which returns the previous value of i), the analysis won't reveal that setting i to true will test the property at the following step.

Example of property:
- if one goes off the hook on a phone which was previously idle, then the dialing tone should be received at this end-point.

Behavioral pattern-based testing

As complexity grows, reasonable behaviors for the environment may reduce to a small part of all possible ones with respect to the constraints. Some interesting features of a system may not be tested efficiently since their observation may require sequences of actions which are too long and complex to be randomly frequent.

The behavioral pattern-based method aims at guiding further input generation so that the most interesting sequences will be produced. A behavioral pattern characterizes those sequences by listing the actions to be produced, as well as the conditions that should hold on the intervals between two successive actions. Regarding input data generation, all sequences matching the pattern are favored and get a higher chance to occur. To that end, desirable actions appearing in the pattern are preferred, while inputs that do not satisfy interval conditions get a lower chance to be chosen. Unlike the constraints, these additional guidelines are not to be strictly enforced. As a result, all valid behaviors are still possible, while the more reasonable ones are more frequent. The model of the environment is thus more "realistic".

Here again, the generation method is always applied with test data which satisfy the environment constraints.

Example of a (simple) behavioral pattern:
- Let us consider the following pattern (t: instant, l: interval), which guides input generation so that user A will call user B when the latter is talking to C:
  (t) B and C should be talking together,
  (l) B and C should not go on the hook,
  (t) A should be previously idle and should go off the hook,
  (l) B and C should not go on the hook,
  (t) A should dial B's number.
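To make the idea concrete, the pattern above could be represented as an alternating list of interval and instant predicates over the environment state. This is an illustrative encoding only; the predicate names (talking, on_hook, was_idle, off_hook, dials) are assumptions made for the sketch, not identifiers used by Lutess.

    # Pattern "A calls B while B is talking to C", encoded as (interval, instant)
    # pairs: each instant condition must occur in turn, and the interval condition
    # must hold continuously until it does.
    pattern = [
        (lambda s: True,
         lambda s: s.talking("B", "C")),
        (lambda s: not s.on_hook("B") and not s.on_hook("C"),
         lambda s: s.was_idle("A") and s.off_hook("A")),
        (lambda s: not s.on_hook("B") and not s.on_hook("C"),
         lambda s: s.dials("A", "B")),
    ]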

4 Foundations

This section presents the foundations of the techniques described above. A constrained random generator is defined formally as a generating machine and a selection algorithm to determine the values which are sent to the unit under test. A generating machine is an I/O machine (definition 1), whose inputs (respectively outputs) are the unit under test outputs (resp. inputs).

Definition 1. An I/O machine is a 5-tuple M = (Q, q_init, A, B, t) where
- Q is a finite set of states,
- q_init ∈ Q is the initial state,
- A is a set of input variables,
- B is a set of output variables,
- t : Q × V_A × V_B → Q is the (possibly partial) transition function.

In the following, for any set X of boolean variables, V_X denotes the set of values of the variables in X; x ∈ V_X is an assignment of values to all variables in X.

4.1 Basic generating machine

The following definition is inspired mainly from [13]. A similar approach can be found in [7] and [14] to deal with the formal verification problem.

A constrained random generator must be a reactive machine. A reactive machine is never blocked: in every state, whatever the input is, a new output can be computed to enable a transition. This means that the generator is always able to compute a new input for the unit under test.

Definition 2. A generating machine, i.e. a machine associated with a constrained random generator, is an I/O machine M_env = (Q, q_init, O, I, t_env, Γ_env) where
- O (resp. I) is the set of the unit under test output (resp. input) variables,
- Q is the set of all possible environment states; a state q is an assignment of values to all variables in L, I and O (L is the set of the testnode local variables),
- Γ_env ⊆ Q × V_I represents the environment constraints,
- t_env : Q × V_O × V_I → Q is the transition function constrained by Γ_env; t_env(q, o, i) is defined if and only if (q, i) ∈ Γ_env.

Behavior outline: A basic generating machine operates in a cyclic way as follows: from its current state, it chooses an input satisfying Γ_env for the unit under test, gets the unit's response and finally uses this response to compute its new state. In each state, the possible inputs have an equally probable chance to be selected.
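A generating machine of this kind can be rendered as the following sketch. It is an illustrative reading of Definition 2 and its behavior outline, not Lutess code; gamma_env, t_env and all_inputs are assumed callables and collections supplied by the caller.

    import random

    class GeneratingMachine:
        """Sketch of a basic generating machine: at each cycle it draws, uniformly,
        an input vector allowed by the environment constraints, then moves to the
        next state once the unit's outputs are known."""

        def __init__(self, q_init, gamma_env, t_env):
            self.state = q_init
            self.gamma_env = gamma_env      # gamma_env(q, i) -> bool
            self.t_env = t_env              # t_env(q, o, i) -> next state

        def choose_input(self, all_inputs):
            valid = [i for i in all_inputs if self.gamma_env(self.state, i)]
            return random.choice(valid)     # equally probable among valid inputs

        def step(self, chosen_input, outputs):
            self.state = self.t_env(self.state, outputs, chosen_input)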

Remark 1: The environment constraints Γ_env may be given by the user as formulas involving output variables. But, at each cycle, these constraints can only depend on the output variables' previous values. That justifies the definition of Γ_env.

Remark 2: The variables from L are intermediate variables created for the environment constraint evaluation. For example, they can be used to store the output variables' previous values.

Remark 3: If the machine constrained by Γ_env is not reactive, we consider that there is an error in the expression of the constraints. The user is always informed of this problem, even if it is sometimes possible to transform Γ_env and make it generating. Indeed, the means to determine whether Γ_env is generating is to compute the set of reachable states, or its complement (i.e. the set of states leading inevitably to the violation of Γ_env). These computations are based on a least fixed point calculation which can be impracticable in some cases. Nevertheless, the constrained random generator can always operate since it detects blocking situations. It is then the responsibility of the tester to rewrite the constraints.

4.2 Statistical-guided machine

Statistical testing enables a Lutess user to specify a multiple probability distribution in terms of conditional probability values associated with the input variables of the unit under test.

Definition 3. A statistical-guided machine is defined as M_stat = (M_env, CPL) where
- M_env = (Q, q_init, O, I, t_env, Γ_env) is a basic generating machine,
- CPL = (cp_0, cp_1, ..., cp_k) is a list of conditional probabilities associated with M_env,
- each cp is a 3-tuple (i, v, f_cp) where i is an input variable (i ∈ I), v is a probability value (v ∈ [0..1]), and f_cp is a condition (f_cp : Q × V_O × V_I → {0, 1}); v denotes the probability that the variable i takes on the value true when the condition f_cp is true.

Behavior outline: A statistical-guided machine has the same behavior as the basic generating machine: it selects an input satisfying Γ_env with the probability specified by the list of conditional probabilities. By default, it uses the equally probable distribution. When the conditional probability list is empty, the machine is equivalent to the basic one.

4.3 Property-guided machine

In property-oriented testing, test data are selected in order to facilitate the detection of property violations. The following definition aims at characterizing such test data.

Definition 4. Let M_env = (Q, q_init, O, I, t_env, Γ_env) be a generating machine and f_P ⊆ Q × V_O × V_I be a predicate representing a property P. The fact that a software input value i ∈ V_I (adequately) tests P in state q ∈ Q is defined as:

    adequate_P(i, q)  iff  ∃o ∈ V_O : f_P(q, o, i) = false

That is, input data for which the properties are true (regardless of the values of the state and output variables) are not able to adequately test these properties.
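Definition 4 can be read operationally as an existential check over the output values. The sketch below is illustrative only; the encoding of the property as a callable f_p and the dictionary representation of i and o are assumptions of the sketch, not the analyzer actually built into Lutess.

    from itertools import product

    def adequately_tests(f_p, q, i, output_vars):
        """adequate_P(i, q): input i can reveal a violation of P in state q iff
        some output assignment o makes the property predicate f_p(q, o, i) false."""
        for values in product([False, True], repeat=len(output_vars)):
            o = dict(zip(output_vars, values))
            if not f_p(q, o, i):
                return True
        return False

    # For P: i => o, only i = True is adequate: with i False, f_p is true for every o.
    f_p = lambda q, o, i: (not i["i"]) or o["o"]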

Definition 5. A property-guided machine is defined as M_P = (M_env, P) where
- M_env = (Q, q_init, O, I, t_env, Γ_env) is a generating machine,
- P is a conjunction of properties.

Behavior outline: Whenever it is possible to produce an input value which adequately tests the properties, a property-guided machine ignores all input values which do not adequately test these properties. Thus, from the current state q, whenever possible, the machine chooses an input i satisfying Γ_env(q, i) ∧ adequate_P(i, q); otherwise it chooses i such that (q, i) ∈ Γ_env. Note that this machine has the same transition function as the basic generating machine.
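A property-guided choice can then be layered on the basic selection, as in the following sketch (illustrative only): valid_inputs is assumed to already satisfy Γ_env in the current state, and is_adequate is assumed to implement adequate_P(i, q) for that state.

    import random

    def choose_property_guided_input(valid_inputs, is_adequate):
        """Prefer inputs that adequately test the properties (Definition 4);
        fall back to an arbitrary valid input when none is adequate."""
        adequate = [i for i in valid_inputs if is_adequate(i)]
        return random.choice(adequate if adequate else valid_inputs)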

4.4 Pattern-guided machine

A behavioral pattern (BP) is made out of alternating instant conditions and interval conditions. The instant conditions must be satisfied one after the other. Each interval condition shall be continually satisfied between the two successive instant conditions which border it. A behavioral pattern characterizes the class of input sequences that match the sequence of conditions.

A behavioral pattern is built with the following syntax rule, where SP is a Lustre boolean expression which does not include the current outputs:

    BP ::= [SP] SP  |  [SP] SP BP

The meaning of a BP is that one wants the sequence of non-bracketed predicates to hold one after the other, and the bracketed predicates to hold continually in between. [true] X [Y] Z means "X should hold, then Z, and not(Y) should not occur in the meantime". This provides a means to describe only significant parts of an event sequence.

A progress variable is associated with each behavioral pattern. It indicates what prefix of the BP has been satisfied so far. To any value of this variable corresponds a pair of predicates (inter, cond): cond is the next-to-appear predicate and inter is the predicate that should continually hold in the meantime.

Definition 6. A pattern-guided machine is defined as M_BP = (M_env, BP) where
- M_env is a generating machine,
- BP is a behavioral pattern, BP = {(inter_k, cond_k)}_k, k ∈ [0, ..., n].

Behavior outline: Let j be an integer variable taking its values in [-1, ..., n]; j represents a progress index on BP and is initialized to 0. Let S_H(q, j), S_L(q, j), S_N(q, j) be three input variable sets defined as:
- S_H(q, j) = {i ∈ V_I | (q, i) ⊢ cond_j}
- S_L(q, j) = {i ∈ V_I | (q, i) ⊬ inter_j ∧ (q, i) ⊬ cond_j}
- S_N(q, j) = {i ∈ V_I | (q, i) ⊢ inter_j ∧ (q, i) ⊬ cond_j}

At the current state, a pattern-guided machine first chooses one non-empty input variable set. Second, it selects an input i in the chosen set. The machine then updates the progress index as follows:
- if S_H(q, j) was chosen, j ← j + 1,
- if S_N(q, j) was chosen, j is unchanged,
- if S_L(q, j) was chosen, j ← -1.

If j becomes equal to -1 or n+1, the pattern is negatively or positively finished, and j is then reset to 0.

Intuitively, the transition function of the basic machine is partitioned into three classes. The partition is motivated by the status of the transitions regarding the progression of the guiding process: S_H identifies all transitions that make the process go forward, S_L identifies those that lead to the process stopping, while S_N identifies all transitions that do not affect the process.
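The progress-index bookkeeping can be sketched as follows. This is a simplified illustration that always prefers S_H when it is non-empty, whereas the actual tool weights the three sets probabilistically (see section 5); pattern is a list of (inter, cond) predicate pairs and valid_inputs is an assumed list of inputs already satisfying Γ_env in state q.

    import random

    def pattern_guided_step(pattern, j, q, valid_inputs):
        """One selection step of a pattern-guided machine: build S_H, S_N and S_L,
        pick an input, then update the progress index j."""
        inter, cond = pattern[j]
        s_h = [i for i in valid_inputs if cond(q, i)]                          # progress
        s_n = [i for i in valid_inputs if inter(q, i) and not cond(q, i)]      # wait
        s_l = [i for i in valid_inputs if not inter(q, i) and not cond(q, i)]  # abort
        if s_h:
            chosen, j = random.choice(s_h), j + 1
        elif s_n:
            chosen = random.choice(s_n)
        else:
            chosen, j = random.choice(s_l), -1
        if j == -1 or j == len(pattern):       # pattern negatively/positively finished
            j = 0
        return chosen, j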

5 Some implementation issues

The automaton obtained by compiling the environment constraints is coded using a symbolic notation in which the states are represented by a set of variables, and the transitions by boolean functions. For a given state and a given value of the inputs and outputs, one can check whether the environment constraints are satisfied.

The boolean functions are implemented as a single Binary Decision Diagram (BDD) [1]. At every instant, the test data generator uses the environment description to randomly choose input values which satisfy the constraints, i.e. which make the associated boolean function take a true value.

Each node of the diagram carries a variable and each of its outgoing branches is labeled with the value taken by that variable. Variables are ordered as follows: the lowest-order variables are the local variables, next come the output variables, while the input variables occur at the bottom of the diagram. As a result, it is easy to locate the sub-diagram corresponding to a given state of the environment since it is fully defined by the local variables.

Basic constrained random generation

The basic random input generation algorithm produces equally probable input values. It consists of two steps. The first one is a recursive traversal which labels the environment BDD nodes with a pair of integers. This pair indicates how many valid input states can be reached from the associated node. The value of a variable e_i is set with respect to the label (v_0, v_1):

    p(e_i = true) = v_1 / (v_0 + v_1)        p(e_i = false) = v_0 / (v_0 + v_1)

The second step of the algorithm produces the input vector at each cycle. To that end, the generator performs the four operations below:
- locate, in the diagram describing the environment constraints, the sub-diagram corresponding to the current values of the state,
- generate a random value for the software inputs satisfying the boolean function associated with that diagram,
- read the new software outputs,
- compute the next state by computing the next value of each state variable.

In other words, the generator searches in the diagram associated with the constraints for a path leading to a true leaf.
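The node labeling and the uniform draw can be sketched as below. For simplicity, the sketch assumes a diagram in which every input variable is tested on every path (so no level-skipping correction is needed); this is an assumption of the illustration, not a property of the Lutess diagrams.

    import random

    class Node:
        """BDD node: 'var' is the tested variable, 'low'/'high' its two children;
        the terminals are the Python booleans True and False."""
        def __init__(self, var, low, high):
            self.var, self.low, self.high = var, low, high

    def label(node, memo=None):
        """Count the satisfying assignments below a node: the pair (v0, v1) of the
        text corresponds to (label(node.low), label(node.high))."""
        memo = {} if memo is None else memo
        if isinstance(node, bool):
            return 1 if node else 0
        if id(node) not in memo:
            memo[id(node)] = label(node.low, memo) + label(node.high, memo)
        return memo[id(node)]

    def draw(node, assignment):
        """Walk from the sub-diagram of the current state down to a true leaf,
        choosing each value with probability v1/(v0+v1) (resp. v0/(v0+v1))."""
        while not isinstance(node, bool):
            v0, v1 = label(node.low), label(node.high)
            value = random.random() < v1 / (v0 + v1)
            assignment[node.var] = value
            node = node.high if value else node.low
        return assignment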

Statistical testing generation

The statistical testing-oriented generation produces input data using both the previous BDD labeling and the conditional probability list. Let CP(e) be the list of conditional probabilities associated with the input variable e: CP(e) = ((p_1, ce_1), (p_2, ce_2), ..., (p_r, ce_r)). In CP(e), p_j denotes the probability that the variable e takes on the value true when the condition ce_j is true. The selection function assigns a value to e according to the following algorithm:

    p(e = true)  = if ce_1 then p_1 else if ce_2 then p_2 else ... if ce_r then p_r
                   else v_1 / (v_0 + v_1)   (with v_1 and v_0 referring to the basic labeling)
    p(e = false) = 1 - p(e = true)

Property-oriented testing

This technique is implemented by building a new BDD from the environment constraints and the properties to be tested. On this BDD, one can check whether a given state and a given value of the inputs both satisfy the environment constraints and are liable to exhibit an error with respect to the properties. The basic algorithm is modified as follows:
- locate, in this new diagram, the sub-diagram corresponding to the current value of the state,
- check whether there exists at least one value for the inputs which can lead to a true leaf in this diagram,
- if so, randomly select one of these values; otherwise, perform the basic algorithm.

Behavioral pattern-based generation

Given the pattern to be matched, the method drives the generator to consider at every cycle the pair of predicates (inter, cond) corresponding to the current value of the progress variable. At each step, the input space is first computed to get all the possible inputs meeting the environment specification. It is then divided into three categories:
- inputs which can satisfy cond, called H;
- inputs falsifying inter, called L;
- inputs in neither of the first two categories, called N.

A probability is assigned to each category so that an input in the first one is favored over an input in the third category, which, itself, is preferred to an input from the second category. These probabilities are determined with respect to the cardinality of each partition and to given weights associated with them: w_H, w_L and w_N. A partition is said to be of higher priority than another if its weight is greater.

The input selection is a two-step process. First, one has to select a category according to the determined probabilities. Each category c in C = {H, L, N} has a probability p_c of being selected:

    p_c = w_c · card(c) / Σ_{j∈C} w_j · card(j)

Then, an input is chosen in an equally probable manner from the selected category. As a result, the probability for any input i in c to be chosen is p_{i,c}:

    p_{i,c} = (1 / card(c)) · p_c = w_c / Σ_{j∈C} w_j · card(j)

The implementation of the algorithm is also based on the environment BDD Env. Each predicate in the pattern is represented by a BDD, called Cond_i for the i-th predicate to be satisfied, and InterCond_i for the i-th interval predicate. With each value i of the progress variable, three BDDs are associated. These BDDs are computed as follows:

    Env_H(i) = Env ∧ Cond_i
    Env_L(i) = Env ∧ ¬InterCond_i
    Env_N(i) = Env ∧ ¬Cond_i ∧ InterCond_i

Each of these BDDs indicates whether a given state and a given value of the inputs satisfy both the environment constraints and the given predicate. These BDDs are labeled in the very same manner as for the basic generation.

Every generation step therefore involves the traversal of the three diagrams corresponding to the current value of the progress variable. The traversal leads to the sub-diagrams corresponding to the current environment state, where the cardinalities of H, L and N can be retrieved, thanks to the labeling. The selection is then performed with respect to the given weights and the obtained cardinalities.
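The weighted two-step selection follows directly from the two formulas above; the sketch below is illustrative, and the default weight values are placeholders chosen for the example, not the values used by Lutess.

    import random

    def select_input(h_inputs, l_inputs, n_inputs, w_h=10.0, w_l=0.1, w_n=1.0):
        """Pick a category with probability w_c * card(c) / sum_j w_j * card(j),
        then pick an input uniformly inside the chosen category."""
        categories = [(h_inputs, w_h), (l_inputs, w_l), (n_inputs, w_n)]
        total = sum(w * len(inputs) for inputs, w in categories)
        r = random.random() * total
        for inputs, w in categories:
            weight = w * len(inputs)
            if r < weight:
                return random.choice(inputs)
            r -= weight
        return random.choice(n_inputs)      # numerical fallback; normally unreachable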

6 Application

Lutess has been used since 1996 [13]. Its latest developments (statistical and behavioral pattern-based testing) have shown their practicality and their efficiency on industrial case studies conducted in partnership with CNET/France Telecom.

The experimentation has consisted in the validation of a synchronous model of a standard telephony system. Given the basic call service and additional service specifications, the goal was to detect possible and undesired interactions between those services [5]. The specifications were drawn from the ITU Recommendations. This experiment has concerned five services.

Another experiment has addressed twelve services in the framework of the "Feature Interaction Detection Tool Contest" held in association with the 5th Feature Interaction Workshop [6]. The goal of this contest was to compare different feature interaction detection tools against a single benchmark collection of features.

Both experiments have shown that the use of Lutess is well adapted to the feature interaction problem. In particular, Lutess has won the "Best Tool Award" of the aforementioned contest, since its use has allowed L. du Bousquet and N. Zuanon to find the largest number of interactions.

The synchronous approach has led to concise validations, thanks to the reduced number of states in the model. Indeed, all transitions are observable. The executable model is of a higher abstraction level and avoids the state space explosion problem. As a consequence, five execution steps are enough to initiate a call, while two steps suffice to terminate a communication. In addition to that, Lustre is able to generate naive C code, instead of a structured automaton, which avoids the state space explosion even further.

On average, a 10 000-step test case takes about one hour, depending on the environment complexity, on a Sparc Ultra-1 station with 128 MB RAM.

Practical results have demonstrated that the guiding techniques were excellent at finding problems involving rare scenarios. The case studies have also confirmed that Lutess can be valuably applied at an earlier stage of software validation, in order to tune specifications.

Experimentation has, however, highlighted the following drawbacks. It has clearly shown that the more detailed the properties, the higher the chances to detect a problem. However, detailing properties is not always possible, and/or would lead us away from the user's view which, so far, we have tried to favor. One therefore has to find a good balance for the property precision level. In addition, specifying the software environment by means of invariant properties is a rather difficult task. Indeed, one should carefully choose a set of properties which do not "overspecify" the environment. Overspecifying may prevent some realistic environment behaviors from being generated. Conversely, a loose specification may cause the generation of invalid environment behaviors.

References

1. S.B. Akers. Binary Decision Diagrams. IEEE Transactions on Computers, C-27:509-516, June 1978.
2. J. Bicarregui, J. Dick, B. Matthews, and E. Woods. Making the most of formal specification through animation, testing and proof. Science of Computer Programming, 29(1-2), July 1997.
3. P. Caspi, N. Halbwachs, D. Pilaud, and J. Plaice. LUSTRE, a declarative language for programming synchronous systems. In 14th Symposium on Principles of Programming Languages (POPL 87), Munich, pages 178-188. ACM, 1987.
4. L. du Bousquet, F. Ouabdesselam, and J.-L. Richier. Expressing and implementing operational profiles for reactive software validation. In 9th International Symposium on Software Reliability Engineering, Paderborn, Germany, November 1998.
5. L. du Bousquet, F. Ouabdesselam, J.-L. Richier, and N. Zuanon. Incremental feature validation: a synchronous point of view. In Feature Interactions in Telecommunications Systems V. IOS Press, 1998.
6. N.D. Griffeth, R. Blumenthal, J.-C. Gregoire, and T. Otha. Feature interaction detection contest. In K. Kimble and L.G. Bouma, editors, Feature Interactions in Telecommunications Systems V, pages 327-359. IOS Press, 1998.
7. N. Halbwachs, F. Lagnier, and P. Raymond. Synchronous Observers and the Verification of Reactive Systems. In M. Nivat, C. Rattray, T. Rus, and G. Scollo, editors, Third Int. Conf. on Algebraic Methodology and Software Technology, AMAST'93, Twente. Workshops in Computing, Springer Verlag, June 1993.
8. D. Hamlet and R. Taylor. Partition Analysis Does Not Inspire Confidence. IEEE Transactions on Software Engineering, pages 1402-1411, December 1990.
9. F. Ouabdesselam and I. Parissis. Testing Synchronous Critical Software. In 5th International Symposium on Software Reliability Engineering, Monterey, USA, November 1994.
10. F. Ouabdesselam and I. Parissis. Constructing operational profiles for synchronous critical software. In 6th International Symposium on Software Reliability Engineering, pages 286-293, Toulouse, France, October 1995.
11. F. Ouabdesselam, J.-L. Richier, and N. Zuanon. Using behavioral patterns for guiding the test of service specification. Technical report PFL, IMAG - LSR, Grenoble, France, 1998.
12. I. Parissis and F. Ouabdesselam. Specification-based Testing of Synchronous Software. In 4th ACM SIGSOFT Symposium on the Foundations of Software Engineering, San Francisco, USA, October 1996.
13. I. Parissis. Test de logiciels synchrones spécifiés en Lustre. PhD thesis, Grenoble, France, 1996.
14. P. Ramadge and W. Wonham. Supervisory Control of a Class of Discrete Event Processes. SIAM Journal on Control and Optimization, 25(1):206-230, January 1987.
15. J. Whittaker. Markov chain techniques for software testing and reliability analysis. Thesis, University of Tennessee, May 1992.