



Computer Supported Cooperative Work: The Journal of Collaborative Computing 5: 323-336, 1996. © 1996 Kluwer Academic Publishers. Printed in the Netherlands.

Constructive Difference and Disagreement: A SuprA-Cooperation among Agents

CATHERINE TESSIER and LAURENT CHAUDRON
Cert-Onera, 2 Avenue Edouard-Belin, 31055 Toulouse Cedex, France
E-mail: {Catherine.Tessier, Laurent.Chaudron}@cert.fr

Abstract. Differences among agents may be constructive in so far as they can bring solution enhancements or conflicts, the second case leading to solution modifications. What is dealt with in this paper is a cooperation involving different rational agents resulting in more than a mere addition of the agents' individual skills, thanks to a process of approval and refutation of the current solution. Lakatos' work is taken as a basis and adapted to a set of cooperating agents, so as to define the concept of suprA-cooperation and the corresponding interaction model. Three case studies are given, involving suprA-cooperating human or artificial agents.

Key words: Multi-agent systems, rational agents, cooperation, philosophical foundations, Lakatos, interaction model

1. Introduction: a typology of cooperating rational agents

Implementing cooperating agents to solve problems may be contemplated either with fine grain agents or with coarse grain or rational agents* (Erceau et al., 1994). In the first case, the number of agents is high, they all have the same capabilities, which are quite limited, and the result is obtained through energy pooling. In the second case, the number of agents is low, their skills are extensive and the result is obtained through skill pooling.

Rational agents may be classified according to two criteria: (1) they are all of the same type, or they are different; (2) their cooperation gain is additive or non-additive (see Figure 1).

What different (or heterogeneous (Decker et al., 1988)) means is that the knowledge involved in each agent refers to quite different aspects of the problem to be solved. If we consider fault diagnosis for example, the human being most often has a teleological or a high-level functional view of the system to be monitored, whereas artificial agents may describe detailed functional or behavioural knowledge. Consequently, knowledge representation and structure may not be common among the different agents: what is just necessary is that agents that are likely to interact should understand one another and therefore share a common language (Decker et al., 1988).

* Rational agents are capable of representing objects, properties on these objects, sentences combining objects and properties, and of communicating them to other agents (Sallantin et al., 1991).



Figure 1. A typology of rational agents.

Figure 2. Partition of the search space by agents A1 and A2.

Another point is that the agents' individual goals need not be the same. For instance, if a person's purpose is to detect and localize the failures in a physical system, the goal of the agent having a behavioural view is most probably to maintain the consistency among the values of the variables, according to physical laws.

The notion of additive or non-additive gain is proposed to qualify how the agents' cooperation brings a better solution to the problem to be solved.

An additive gain is related to a cooperation enhancing the performance of problem solving, that is to say the solution is obtained more quickly or within a larger bandwidth than it would be with a single agent. The gain may be measured in terms of computation time or number of solutions found. In the case of agents of the same type, the initial problem to be solved is decomposed into several identical and quasi-independent subproblems, which are solved in parallel, each agent having one subproblem to solve. In that case, the agents' cooperation allows a partition of the search space to be made (see Figure 2).
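To make the additive case concrete, here is a minimal Python sketch (an illustration of ours, not from the paper): identical agents each exhaustively explore one partition of a search range in parallel, and the gain is measurable in computation time; the goal test inside `explore` is arbitrary.

```python
from concurrent.futures import ProcessPoolExecutor

def explore(subrange):
    """One agent exhaustively explores its own partition of the search space."""
    return [x for x in subrange if x % 9999 == 0]  # arbitrary goal test

def partitioned_search(space, n_agents):
    """Additive gain: identical agents, quasi-independent subproblems solved
    in parallel; the global result is the mere union of individual results."""
    chunk = len(space) // n_agents
    parts = [space[i * chunk:(i + 1) * chunk] for i in range(n_agents)]
    with ProcessPoolExecutor(max_workers=n_agents) as pool:
        return [x for part in pool.map(explore, parts) for x in part]

if __name__ == "__main__":
    print(partitioned_search(range(100_000), n_agents=4))
```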

When the agents are different, some of them can focus the attention of the others on relevant parts of the search space, thus allowing a solution to be reached more rapidly. This is generally the case for blackboard systems. Here, the agents' cooperation allows a restriction of the search space to be made (see Figure 3).

Figure 3. Restriction of the search space of agent A1 by agent A2.

A non-additive gain is related to a cooperation enhancing the quality of the solution, or indeed allowing a solution to be found at all. A solution that would not be found, or that would be rather poor when searched for by a single agent, can be found or refined through cooperation: this allows a real construction of the search space. Contrary to the additive gain, a non-additive gain cannot be numerically measured, in so far as it is more a matter of constructing a supremum of several initial theories, the result of which cannot be compared to those initial theories (just as a linear combination of two free vectors cannot be numerically compared to them). Different means of non-numerical "measures" may be defined on a non-additive model of cooperation: validity, relevance... (Sallantin et al., 1991a). A non-additive gain cooperation can only take place between different agents since, by definition, similar agents can only find identical solutions to a given problem.

Figure 4. Construction of the search space by agents A1 and A2.

In this framework of construction of a solution by rational agents, we are now going to define the suprA-cooperation concept (see Figure 4).

2. SuprA-cooperation

2.1. DEFINITIONS AND PROPERTIES

DEFINITION. A suprA-cooperation (suprA in the sequel) is characterized by suprA = rational + difference + non-additivity.

Difference allows the complementarity of the rational agents to be exploited: a solution is obtained thanks to toings and froings among agents, the result of which is a real enrichment (each agent can start up again from the others' results or criticisms, and improve its own point of view). This contributes to a large extent to avoiding incomplete or partially erroneous results, or simply makes it possible to perform a level of analysis which would be impossible using a single point of view (Hunt and Price, 1991), for instance to be able to reason on many different conceptual levels (Sticklen and Chandrasekaran, 1990).


Non-additivity allows something more than a mere concatenation of the agents' skills to be obtained. It is performed thanks to an approval-refutation process defined as follows:

DEFINITION. Approval = acceptance + example(s); refutation = rejection + counter-example(s).

Since we consider agents that do not necessarily have the same goal, each of them must have the capability to communicate its agreements or disagreements about any question, as is the case for natural cognitive communication. One particular consequence is that the suprA agents must have a common negation symbol. Furthermore, groundless acceptance or rejection is not sufficient in a cognitive cooperation context: each agent has to justify every element of its discourse (Grice, 1975). Hence, a suprA agent must be able to explain its (positive or negative (Brandt, 1990)) standpoints through elements of knowledge dedicated to refining the cognitive matter of cooperation; these refinement items may be called: examples (or counter-examples), explanations, justifications, proofs, demonstrations... Consequently, the suprA agents must have a common implication symbol with a common semantics.

REMARK. Those properties represent the minimal set of capabilities required for a rational agent to be efficient within a semi-empirical theory (Sallantin et al., 1991a, 1991b).
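As an illustration of this minimal set of capabilities, here is a small Python sketch under assumptions of ours: the shared NOT and IMPLIES symbols stand for the common negation and implication the definition requires, and the Standpoint class (a name of ours, not the paper's) enforces that every approval or refutation carries refinement items.

```python
from dataclasses import dataclass, field

# Shared vocabulary: every suprA agent must understand the same negation
# symbol and the same implication symbol (with a common semantics).
NOT, IMPLIES = "~", "=>"

@dataclass
class Standpoint:
    """Approval = acceptance + example(s); refutation = rejection + counter-example(s)."""
    accepts: bool
    conjecture: str
    refinements: list = field(default_factory=list)  # examples, proofs, justifications...

    def __post_init__(self):
        # Groundless acceptance or rejection is not sufficient (Grice, 1975):
        # each agent has to justify every element of its discourse.
        if not self.refinements:
            raise ValueError("a suprA standpoint must carry refinement items")

# Example usage: a refutation justified by a counter-example.
refutation = Standpoint(
    accepts=False,
    conjecture=f"bird(x) {IMPLIES} fly(x)",
    refinements=[f"bird(apteryx) and {NOT}fly(apteryx)"],
)
```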

Those definitions lead to the following general properties of suprA:

PROPERTIES. (approval/refutation) implies different agents (App/Ref ⇒ Diff). (approval/refutation) is equivalent to suprA (App/Ref ⇔ suprA).

Some conceptual justifications are the following:

(App/Ref ⇒ Diff): if agent A2 is able to discover the same result as agent A1 with a different performance, or to find counter-examples to A1's result, then agent A2 is necessarily different from A1.

(App/Ref ⇒ suprA): an App/Ref interaction supposes the existence of a negation operator; as a logical operator of implication is necessary to allow the agents to link their information, the agents are rational; furthermore, the cooperation gain cannot be numerical in so far as it is expressed through refinements and counter-examples. Hence App/Ref ⇒ rational + non-add; as (according to the first property) App/Ref ⇒ Diff, we have App/Ref ⇒ suprA.


(suprA ⇒ App/Ref): suprA supposes the existence of a characteristic declaring the agents to be different; this characteristic may be chosen as a negation operator. In the same way, the non-additivity may be used as a refinement operator. Negation + refinement should induce a complete App/Ref model.

REMARK. These rough but strong arguments are also used by Spencer-Brown to justify his minimal algebra defining the autonomy of an agent (Varela, 1979) in which the negation symbol is the core.

Disagreement situations are usually sought to be avoided in multi-agent systems (Lander and Lesser, 1993); as we postulate that disagreement can be constructive, we will now describe the philosophical foundations of our approach.

2.2. A CONCEPTUAL FRAMEWORK: "LAKATOS"

Lakatos Imre (1922-1973) was a mathematician and a philosopher whose work was devoted to empiricism in mathematics (Lakatos, 1986). From a classical philosophical point of view, mathematical knowledge is a priori and infallible; on the contrary, in natural sciences, knowledge is a posteriori and fallible. Lakatos tried to bridge the gap between both conceptions of science, arguing for a quasi-empiricism of mathematics which can be roughly described as follows: any consistent mathematical result is to be considered as a conjecture whose proof must be submitted to a sort of practical experience: the demonstration.* The experimental setup is the mathematical community. Just as nature provides "hard facts" (also called "potential falsifiers" (Lakatos, 1986, p. 39) or "rocky sea-bed" (Davis and Hersch, 1982)) to experimental sciences, the mathematical community provides potential falsifiers to mathematics. If the demonstration is accepted, the experience succeeds, and the conjecture is considered as a theorem which is refined by many corollaries and examples. If a counter-example appears, the experience fails and the conjecture (or its proof) has to be refined. Lakatos' major contribution rests on his original conceptualization** of such refinements (Lakatos, 1976): when a conjecture and its proof (say: a theory T) seem to be falsified by a counter-example δ, three solutions arise: rectification (1), exception (2), search for a guilty-lemma (3).

(1) If the counter-example δ is considered as an error, it simply has to be rectified into δ': it is the common treatment of all numerical miscalculations; the new theory is still T.

* In mathematics, proving may be considered either as a formal object, the proof, or as a pragmatic action, the demonstration. Formal theory or social process, the mathematical practice of demonstration is the main part of the epistemology of mathematics (Tymoczko, 1986; Davis and Hersch, 1982; Balacheff, 1987; Chaudron, 1990). See more precisely Lakatos again: "What does a mathematical proof prove?", in (Tymoczko, 1986, pp. 153-162).

** Which we will denote "Lakatos".


(2) In many cases, the counter-example has to be considered as an exception: "Birds fly" is the conjecture, "The apteryx does not fly" is a counter-example because the apteryx is a ratite bird. As the conjecture and the counter-example have the same level of validity, a suitable way is to declare an exception:* "Birds fly (except the apteryx)": the new theory is T ∧ except(δ).

(3) Lakatos' third way is a sort of cognitive surgery in the body of the conjecture, in order to find and rectify a so-called "guilty-lemma" λ responsible for the contradiction with the counter-example. The conjecture is modified and the ex-counter-example is integrated to form a new theory: [T − {λ}] ∪ {λ'}. New examples appear. Sometimes a new concept appears during the rectification of the proof: it is called a proof-concept.
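These three moves lend themselves to a small operational sketch. The Python below is an illustrative rendering of ours, assuming a theory is represented as a set of lemmas together with a set of declared exceptions; none of these names come from the paper.

```python
def rectify(theory, delta, delta_prime):
    """(1) The counter-example delta was a mere error, rectified into delta_prime:
    the common treatment of numerical miscalculations; the theory T is unchanged."""
    return theory  # still T

def declare_exception(theory, delta):
    """(2) Relegation of exceptions: the new theory is T AND except(delta)."""
    lemmas, exceptions = theory
    return (lemmas, exceptions | {delta})

def repair_guilty_lemma(theory, guilty, rectified):
    """(3) Cognitive surgery: find the guilty-lemma and rectify it; the new
    theory is [T - {guilty}] U {rectified}, and the ex-counter-example is
    integrated (sometimes a proof-concept appears along the way)."""
    lemmas, exceptions = theory
    return ((lemmas - {guilty}) | {rectified}, exceptions)

# Example: "Birds fly", refuted by the apteryx, handled as an exception (2).
T = ({"bird(x) => fly(x)"}, set())
T2 = declare_exception(T, "apteryx")
```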

Lakatos' refinement process is therefore taken as a basis for suprA-cooperation, and allows an interaction model to be proposed.

2.3. THE SUPRA INTERACTION MODEL

The interaction model is proposed thanks to a state-based representation as in (Lander and Lesser, 1993), the difference being that instead of the point of view of the solution states, it is the dual point of view of the agents' actions that is considered (see Figure 5).

The interaction model is given for two agents, but can be extended to a higher number of agents (an example with three agents is given in the sequel).

When a problem arises, an agent that is able to deal with it (say agent 1) performs a reasoning and proposes the result to the other agent (agent 2). Agent 2 reasons in turn, i.e. tries to understand the conjecture, to experiment with it, to prove it with its own means or to refute it partially or totally by testing possible counter-examples, in order to see whether agent 1's result is actually true within its own cognitive model. Agent 2 may then approve agent 1's result and refine it by producing examples, or refute the result and falsify it with counter-examples, or simply approve or refute without being able to do more. In the first two cases, agent 2's results are again submitted to agent 1, following the same process (the particular case of agent 1 approving agent 2's counter-examples corresponds to Lakatos' third way). If one of the agents approves and cannot do more, the process ends on a common theory; on the contrary, if one of them refutes and cannot do more, both the result and the interaction itself are rejected and the process ends with no common result.
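A minimal Python sketch of this loop, read as a sequential rendering of the Petri net of Figure 5; the agent interface (`reason`, `examine`) and the toy agents are assumptions of ours.

```python
class Agent:
    """Illustrative rational agent: proposes results and examines the other's."""
    def __init__(self, name, refinement_items):
        self.name, self.items = name, refinement_items

    def reason(self, problem):
        return f"{self.name}'s conjecture about {problem}"

    def examine(self, result):
        # Approve with a refinement while this agent still has items to offer;
        # approve without being able to do more once they are exhausted.
        return ("approve", [self.items.pop()]) if self.items else ("approve", [])

def suprA_interaction(agent1, agent2, problem):
    """Alternate propose / approve-refine / refute-falsify until one agent
    approves or refutes without being able to do more."""
    proposer, examiner = agent1, agent2
    result = proposer.reason(problem)
    while True:
        verdict, refinements = examiner.examine(result)
        if not refinements:
            # End of the interaction: a common theory, or no common result.
            return result if verdict == "approve" else None
        result = (result, verdict, refinements)   # refined or falsified result
        proposer, examiner = examiner, proposer   # the result goes back

print(suprA_interaction(Agent("A1", ["example 1"]), Agent("A2", ["example 2"]), "P"))
```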

2.4. TOWARDS AN IMPLEMENTATION: MULTI-MODEL BASED PROBLEM SOLVING AS A BASIS

A question that has to be asked is how suprA-cooperation can be implemented. A cooperation involving various different agents supposes that several models are available; therefore, one of the key problems is actually to design the models: how are relevant agents' models, that is to say models that will allow a true suprA-cooperation to be achieved, to be designed?

* Some authors give more precise definitions of counter-examples (Sallantin et al., 1991a, p. 37).

Figure 5. A Petri net representation of the suprA interaction model.

An answer to this question can be found in multi-model based reasoning about physical systems. For instance, Chittaro et al. (1993), noticing that no methodology exists for designing different models of the same system, propose a general framework to study the critical points, among which are the specific reasoning tasks that can be performed within each model and the relationships among the different models: how the different representations are linked, how the set of models can be a consistent representation of the physical system, how partial results can be exported and imported. In the same way, Oliviera et al. (1993) propose a two-layer agent: the self-model (or model of itself) represents the agent's own knowledge and skills; the cooperation layer represents the knowledge that is "interesting" for any other agent involved in solving the global problem, allowing the agent's activity to be controlled and coordinated with the others' activities.
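A two-layer agent in the sense of (Oliviera et al., 1993) could be sketched as follows; the field and method names are illustrative assumptions of ours, not the authors' API.

```python
from dataclasses import dataclass

@dataclass
class TwoLayerAgent:
    """Two-layer agent: a self-model plus a cooperation layer."""
    self_model: dict         # the agent's own knowledge and skills
    cooperation_layer: dict  # knowledge "interesting" for the other agents

    def export_state(self):
        # Only the cooperation layer is made available to the other agents,
        # not the whole self-model (cf. Hunt and Price, 1991).
        return dict(self.cooperation_layer)

# Example usage, loosely anticipating case 3 below: a functional agent exports
# its current marking while keeping its grafcets to itself.
F = TwoLayerAgent(self_model={"grafcets": "..."},
                  cooperation_layer={"marking": ["switch closed", "fuse fault"]})
print(F.export_state())
```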

Though they generally belong more to the class of additive-gain cooperation, many examples of multi-modeling for problem solving in various domains may be quoted: (Lefèvre and Pollet, 1993) for object recognition in multi-source images, (Gams et al., 1993) for multi-classification, (Perret, 1988) for knowledge associations in biology, (Duffaut, 1994) for diagnosis test-tree generation...


Three case studies are now going to be given to illustrate the interaction model. The last one shows how a suprA-cooperation has been implemented in the field of fault diagnosis of physical systems.

3. SuprA case studies

3.1. SUPRA CASE 1: THE BIRTH OF UNIFORM CONVERGENCE

This first case is an historical case involving three famous mathematicians who performed a Person/Person suprA-cooperation.

In 1821, Cauchy enunciated a general theorem: "The limit of a convergent series of continuous functions is continuous." Nevertheless, in 1822, Fourier discovered* a convergent series of continuous functions whose limit was not continuous. Many years later, a third mathematician, Seidel, found a "guilty-lemma" in Cauchy's proof. More precisely:

the curve of the limit of Fourier's series $\sum_{n \geq 0} \frac{(-1)^n}{2n+1}\cos\big((2n+1)x\big)$ is the following:

Figure 6. Fourier's curve G.

Even if it was possible to draw G with one continuous pencil line (which is not acceptable from a modern analysis point of view), it was impossible to say that Fourier's function was strictly continuous.

Thus Cauchy was refuted by Fourier. On the one side, Cauchy (who was the first to give a definition of the continuity concept) had proved his theorem. On the other side, Fourier (who even came to doubt his results) had actually proved his series to be (slowly but truly) convergent. Cauchy was right and Fourier was right too. The situation was no small event. The classical "relegation of exceptions" method was the one and only solution: "Cauchy's theorem is true except for Fourier series". The consistency of mathematics was safe, but the problem was left unsolved: why was Fourier's limit not continuous?

Considered as rational agents, the antagonistic models of Cauchy and Fourier gave the opportunity to perform a suprA-cooperation thanks to a third agent: Seidel. In 1847, Seidel detected a minor inaccurate part in Cauchy's proof. By making that condition precise, he allowed Cauchy's theorem to be refined** and at the same time created the new "proof-born" concept of uniform convergence (see Figure 7). This explained why Fourier's limit was not continuous though the series converged.*

* In fact Fourier won the Grand Prix de Mathématiques with this result in 1812, but his memoir was published later in 1822.

** The hypothesis must be: "... the series is uniformly convergent ..."


Figure 7. The uniform convergence Petri net.

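The refutation can even be replayed numerically. Assuming the alternating series reconstructed above, every partial sum is a continuous function, yet a short Python check shows the limit jumping from +π/4 to −π/4 across x = π/2:

```python
import math

def partial_sum(x, N):
    """N-th partial sum of sum_{n>=0} (-1)^n cos((2n+1)x) / (2n+1):
    a finite sum of cosines, hence a continuous function of x."""
    return sum((-1) ** n * math.cos((2 * n + 1) * x) / (2 * n + 1) for n in range(N))

# Just below and just above x = pi/2 the sums settle near +pi/4 and -pi/4:
# the pointwise (simple) limit of continuous functions is discontinuous.
for x in (math.pi / 2 - 0.01, math.pi / 2 + 0.01):
    print(f"x = {x:.4f}  S(x) = {partial_sum(x, 20000):+.4f}  pi/4 = {math.pi / 4:.4f}")
```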

3.2. SUPRA CASE 2: THE P-PENTAGONS

This second case is a Persons/Machine suprA-cooperation.

Let us call P the conjecture: "In an arbitrary convex pentagon, if four diagonals are parallel to their opposite side then the fifth diagonal is also parallel to its opposite side." P is consistent, as P is true for the regular pentagon. Are there other pentagons verifying P (let us call them P-pentagons) and how can P be proved? Since Euler, nobody had studied or solved this problem. In 1992 though, three rational agents** C1, C2 and C3 managed to prove P (Cuppens and Carral, 1993).++ They gave several proofs and many corollaries and completely characterized the P-pentagons: P-pentagons are regular pentagons seen within a projective space. C1, C2 and C3 then generalized their results to arbitrary polygons; the work is still going on.

A question is: who found the solutions? C3 is not a real knowledge-based system, so C3 did not find any solution. Conversely, even if the final results were written by the two human beings C1 and C2, they claimed that without C3 they would not have been able to discover the secrets of the P-pentagons.

* The series is said to be simply convergent.

** C1 and C2 are both mathematicians; C1 is keen on geometry, C2 is keen on intelligent tutoring systems (ITS) for mathematics. The third agent C3 is a software assistant for geometry called Cabri-géomètre (geometrician rough-book). Cabri defines a "geometrical micro-world" (without any reasoning capability) allowing figures with constraints to be drawn (Laborde and Capponi, 1994).

++ These authors discovered in fact that a solution had been given in 1971 by Coxeter in one of his exercise books.


For instance, at the very beginning of the P-pentagon story, C1 and C2 drew a very simple figure on which two segments seemed to be parallel (this would have ensured a proof of P); but C3 refuted C1,2 (Cuppens and Carral, 1993, p. 54): "The property is apparently true on the figure but is false in general. Do you want a counter-example?" C2 agreed and C3 constructed a counter-example, which led C1 and C2 to refine their first naive proof.

A symmetrical but more sophisticated interaction occurred when another proof needed two other segments to be parallel: this time, the property submitted by C1 was accepted by C3 (and C2): "The property is true on the figure", but... was rejected by C1 himself for a lack of justification in C3's acceptance! This interaction led again to a refinement of the tested proofs and, more generally, of the methods of demonstration in geometry. Thus, several different kinds of proving gave more strength to the P-pentagon results. For example: as proofs based on motion are rejected in geometry, a "visible" solution proposed by C3 had to be rejected by C1,2. In the same way, surface-based proofs are accepted by the Anglo-Saxon community but suspiciously contemplated by the Cartesian school; this obliged C1,2 to find, for each surface-based proof, an equivalent distance-based demonstration.*

3.3. SUPRA CASE 3: AUTOMOTIVE DEVICE DIAGNOSIS

The last case is an example of how suprA-cooperation can be implemented on artificial agents.

Automatic diagnosis aims at identifying faulty components or functions in a physical device when something wrong is detected, either for the device to perform an autonomous reconfiguration or to help maintenance technicians. The problem is quite complex since different kinds of knowledge have to be taken into account so that diagnoses should be as accurate as possible. The example deals with an automotive device, a screen-wash subsystem.

Let us consider a screen-wash subsystem, which is composed of a fused line from battery positive powering a pump when the switch is closed. If the fuse blows, a warning lamp comes on. The symptoms that are considered are water not being squirted at the screen despite the switch being closed, and the warning lamp on. What is going to be shown is that the interesting failure (for maintenance) cannot be found but through a suprA-cooperation of two agents: a functional agent F and an electrical behavioural agent B. This has been studied in detail in (Tessier-Badie and Castel, 1993) and (Fiorino and Tessier, 1994).

The functional agent F implements the possible sequences of states and operation modes for each function and component of the device, the evolution from one to another depending on events (control orders, faults, maintenance actions) or on external or internal conditions of the device. The knowledge of F is described by a set of grafcets (David and Alla, 1989), whose stages correspond to the states and operation modes, and whose transition receptivities correspond to the events and conditions.

* For the cognitive role of the figure in the geometrical demonstration see (Meilhac, 1993).


Figure 8. Functional agent F.

The goal of F is to find a marking of its stages which is as complete and accurate as possible, according to an initial partial observed state (symptoms) (see Figure 8).

The behavioural agent B implements the normal electrical behaviour of the device on the one hand, and the electrical behavioural context of fault occurrence on the other. The knowledge of B is expressed thanks to a formalism of specifications - qualitative confluences (de Kleer and Brown, 1984), the specifications corresponding to states of the device, and the confluences describing the qualitative behaviour of electrical variables within those states. The goal of B is to consistently instantiate as many variables as possible, starting from an initial subset of instantiated variables.

Partial behavioural model:

switch open: { i = 0; [i] = 0 }.

switch closed: { [u] = [0]; [i] = + }.

Context of switch failure: Δswitch = { [u] = +, i = 0 }.

In this particular case, the cooperation layer is naturally included in both agents' models, thanks to the formalisms that are used: as far as F is concerned, the receptivities associated with the transitions of its grafcet model (i.e. the expression of events and conditions) are logical functions that depend on the behavioural variables of the device. Conversely, the relevant specifications of the qualitative equations of the model of B are given by the markings of the grafcets implementing the model of F. In fact, only the current state of each agent is available to the other, and not the whole of their models (Hunt and Price, 1991).


Figure 9. The screen-wash diagnosis Petri net.

Fault detection is performed at the functional level (by the user of the device); hence, the functional agent F first tries to reach its goal, i.e. to mark its stages consistently, starting from the initial marking (symptoms). With symptoms water not being squirted at screen despite switch being closed, and warning lamp on, the possible consistent markings all correspond to a switch-closed context and a unique fault, which is the fault of the fuse (see Figure 9).

The behavioural agent B then knows which specifications it can work on: switch closed ([I2]) and the context of fuse failure (Δfuse). It can in turn try to reach its own goal, which is to instantiate its variables consistently, i.e. to solve the system [I2] ∪ Δfuse. This leads to Δpump becoming verified.
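B's step can be caricatured in a few lines of Python; this is a toy rendering of ours of the fixed-point instantiation, not the actual confluence machinery of (de Kleer and Brown, 1984), and the variable and context encodings are hypothetical.

```python
# Qualitative states and fault contexts as dictionaries of assignments.
I2 = {"switch": "closed"}        # specification [I2]: switch closed
delta_fuse = {"fuse": "blown"}   # hypothetical encoding of the fuse fault context

def solve(specs, rules):
    """Instantiate as many variables as possible, starting from an initial
    subset of instantiated variables, by forward chaining to a fixed point."""
    state = dict(specs)
    changed = True
    while changed:
        changed = False
        for premise, var, value in rules:
            if premise.items() <= state.items() and state.get(var) != value:
                state[var] = value
                changed = True
    return state

# One hypothetical rule standing for the confluences: in a switch-closed
# context, a blown fuse entails an overcurrent, so the pump fault context
# Delta_pump becomes verified.
rules = [({"switch": "closed", "fuse": "blown"}, "delta_pump", "verified")]
print(solve({**I2, **delta_fuse}, rules))
```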

Consequently, F has to update the truth values of its receptivities, which is consistent with its previous state, and then to update its marking: its resulting state corresponds to a switch-closed context and a double fault, pump fault and fuse fault.

B cannot do more than approve this result, and the process ends on that result.


REMARK. Both agents need the other one to deliver a correct result; pump fault and fuse fault could not be linked at the functional level, and the fact that the context of fuse fault included the pump fault could not be found but at the behavioural level.

4. Conclusion

What we have described in this paper is a particular cooperation among different rational agents based on a non-additive cooperation gain. With reference to the typology we have proposed, this suprA-cooperation can be further characterized by the fact that each agent needs at least one of the others for a correct result to be created. In that sense, suprA is more dedicated to non-routine problem solving, for which "extra-ordinary" solutions are required.

What we intended to show was that the constructive design of conflict in cooperation is intellectually valid. We are now going on with the design of tools for implementing those concepts: Petri nets for the interaction model, logical lattices for the cognitive model of the suprA-agent. This work is carried out in the framework of distributed systems for perception and information fusion.

References

Balacheff, N. (1987): Proof Process and Validation Situation. Educational Studies in Mathematics, vol. 18, pp. 147-176.

Brandt, R-Y. (1990): Stratégie de contre-argumentation et de logique. Proceedings of the 4th ARC Symposium, Paris, March 1990.

Chaudron, L. (1990): Continuous Inference. Proceedings of the 4th ARC Symposium, Paris, March 1990.

Chaudron, L. and C. Tessier (1995): SuprA-cooperation: When Difference and Disagreement Are Constructive. Proceedings of the International Workshop on the Design of Cooperative Systems, Antibes, France, January 1995.

Chittaro, L., G. Guida, C. Tasso, and E. Toppano (1993): Functional and Teleological Knowledge in the Multimodeling Approach for Reasoning about Physical Systems: A Case Study in Diagnosis. IEEE Transactions on SMC, vol. 23.6, December 1993.

Cuppens, R. and M. Carral (1993): Les tribulations d'un pentagone (ou comment mener une recherche avec Cabri-Géomètre). Revue Repères, no. 12, July 1993, pp. 51-73.

David, R. and H. Alla (1989): Du Grafcet au réseau de Petri. Paris: Hermès.

Davis, Ph. and R. Hersch (1982): The Mathematical Experience. Boston: Birkhäuser.

Decker, K. et al. (1988): Evaluating Research in Cooperative Distributed Problem Solving. In Readings in Distributed AI, A. Bond and L. Gasser (eds.). Los Angeles: Morgan Kaufmann, pp. 485-519.

Duffaut, O. (1994): Problématique multi-modèle pour la génération d'arbres de tests: application aux systèmes de commande automobiles. Thèse de doctorat, Ensae.

Erceau, J., L. Chaudron, J. Ferber, and T. Bouron (1994): Systèmes Personne(s)-Machine(s): patrimoines cognitifs distribués et mondes multi-agents, coopération et prises de décision collectives. In Systèmes coopératifs: de la modélisation à la conception. Toulouse: Octares Ed.

Fiorino, H. and C. Tessier (1994): A Functional and a Behavioural Models Working Together to Diagnose Failures More Accurately. In Proceedings IA'94, Paris, France, May 1994, pp. 301-310.

Gams, M., N. Karba, M. Drobnic, and V. Krizman (1993): Combined Systems Offer Better Performance. In Proceedings Avignon'93 Scientific Conference, Avignon, France, May 1993, pp. 221-230.

Grice, H.P. (1975): Logic and Conversation. In Syntax and Semantics 3: Speech Acts, P. Cole and J. Morgan (eds.). New York: Academic Press, pp. 41-58.

Hunt, J. and C. Price (1991): Diagnosis of Electromechanical Subsystems Using Multiple Models. In Second International Workshop on Principles of Diagnosis, Milano, October 1991.

de Kleer, J. and J.S. Brown (1984): A Qualitative Physics Based on Confluences. Artificial Intelligence, vol. 24, pp. 7-83.

Laborde, C. and B. Capponi (1994): Cabri-géomètre constituant d'un milieu pour l'apprentissage de la notion de figure géométrique. In Recherches en didactique des mathématiques, vol. 14/1.2. La pensée sauvage Ed., pp. 245-256.

Lakatos, Imre (1976): Proofs and Refutations. Cambridge: Cambridge University Press.

Lakatos, Imre (1986): A Renaissance of Empiricism in the Philosophy of Mathematics. In New Directions in the Philosophy of Mathematics, T. Tymoczko (ed.). Birkhäuser, pp. 29-48.

Lander, S. and V. Lesser (1993): Understanding the Role of Negotiation in Distributed Search among Heterogeneous Agents. In Proceedings IJCAI'93, pp. 438-444.

Lefèvre, V. and Y. Pollet (1993): BBI: un système multi-agent d'aide à la photo-interprétation. In Actes Avignon'93, Conférence Scientifique, Avignon, France, May 1993, pp. 473-482.

Meilhac, C. (1993): Etude d'une pragmatique du dessin géométrique. Mémoire de DEA Sciences cognitives, Crea Ecole Polytechnique, 1993.

Oliviera, E., E. Monta, and A.R. Rocha (1993): Cooperation in a Multi-Agent Community. In Proceedings Avignon'93 Scientific Conference, Avignon, France, May 1993, pp. 601-610.

Perret, Ch. (1988): Structuration des connaissances et raisonnement à l'aide d'objets. Rapport de recherche Inria no. 847, May 1988.

Sallantin, J., J.-J. Szczeciniarz, C. Barboux, M.-S. Lagrange, and M. Renaud (1991a): Semi Empirical Theory: Conception and Illustration. Revue d'Intelligence Artificielle, vol. 5, no. 1, pp. 9-67.

Sallantin, J., J. Quinqueton, C. Barboux, and J.-P. Aubert (1991b): Semi Empirical Theory: Elements of Formalization. Revue d'Intelligence Artificielle, vol. 5, no. 1, pp. 69-92.

Sticklen, J. and B. Chandrasekaran (1990): Integrating Classification Based Reasoning with Function Based Deep Level Reasoning. In Causal AI Models, Werner Horn (ed.). Hemisphere Publishing Corporation.

Tessier-Badie, C. and Ch. Castel (1993): A Grafcet Based Model for Diagnosis. In Proceedings Avignon'93 Scientific Conference, Avignon, France, May 1993, pp. 343-353.

Tymoczko, T. (1986): New Directions in the Philosophy of Mathematics. Birkhäuser.

Varela, F. (1979): Principles of Biological Autonomy. New York: Elsevier/North Holland Publishers.