
Future Generation Computer Systems 17 (2000) 15–26

Towards a visualization of arguing agents

Michael Schroeder*, Department of Computing, City University, London EC1V 0HB, UK

* Tel.: +44-171-477-8918; fax: +44-171-477-8587. E-mail address: [email protected] (M. Schroeder).

Abstract

In this paper, we show how to visualize arguing agents. We describe a visualization of single-agent and multi-agent argumentation. In the former, we show how to visualize the argumentation process and space within one agent; in the latter, we develop an animation of communicating agents and map the relations between agents to spatial distance. We discuss the advantages and disadvantages of our solutions and compare them to alternatives. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: VRML; Visualization; Agents; Multi-agent systems; Argumentation

1. Introduction

Although argumentation is used in everyday life, it is nonetheless an art. Its formal treatment is often left to mathematicians and philosophers, and the average layman is deterred by the need to know the foundations of logic to fully understand argumentation. In this paper, we describe a more intuitive approach to argumentation, which is based on its visualization.

So, how can we visualize argumentation? There are two main aspects to the problem:
1. How to visualize reasoning internal to an agent?
2. How to visualize argumentation between several agents?

For visualization in general, we can distinguish two approaches: extrinsic and intrinsic [1]. In the former, we map agent properties to size, shape, texture, and color; in the latter, we map inter-agent relations to spatial distance. Consider, for example, the picture in Fig. 1 showing a courtroom with the accused guarded by military police. The extrinsic approach allows us to identify two types of people in this picture by their clothing: police and civilians. The intrinsic approach allows us to identify a crowd of people — the audience — and the accused framed by the police. Without a word spoken, the visualization already reveals valuable information for understanding the argumentation about to take place.

Thus, we aim to use intrinsic and extrinsic visualization for agent-internal and inter-agent argumentation. There has been some related work on the visualization of logic programs [3,7,16]. It focuses mainly on the visualization of proof trees in general, and on the control flow in an AND/OR tree, by displaying the success or failure of rules and the associated unification process. While these approaches are well suited for debugging or analyzing the execution of a logic program, they are static and do not allow the user to navigate in an argumentation process and space. Such navigation is the main objective of Plewe, Raab, and Schroeder's Ultima Ratio project [22], which creates a 3D world with an animation of the argumentation process and space. Such a visualization is by no means straightforward and canonical. Goodman [12] argues that there is no fixed meaning for pictures and that therefore any visual language for argumentation is arbitrary, even though it should, of course, be in itself consistent and intuitive. Section 3 describes Ultima Ratio's visual language for agent-internal argumentation and discusses its intuitiveness.

Fig. 1. Intrinsic and extrinsic visualization.

The next step is to consider multiple agents. There has been some work developing extrinsic approaches for believable interface agents [2,13,15,19,24] and another, intrinsic approach [26]. We will go beyond these by combining intrinsic and extrinsic approaches in a visualization of avatars, representing the agents, engaged in a conversation, with their spatial distance reflecting their relation.

The paper is organized as follows. First, we review the formal framework of argumentation underlying our work. In Section 3, we describe single-agent argumentation, in Section 4 the extrinsic approach for multiple agents, and in Section 5 the intrinsic approach for multiple agents.

2. Formalization of argumentation

Since Leibniz's 1679 calculus ratiocinator, researchers have been investigating how to automate argumentation. A problem is that many figures of argument cannot be described formally. The Encyclopedia Britannica lists, for example, the following figures:
1. Semantical figures:
• arguing by example,
• authority,
• or analogy.
2. Syntactical figures:
• arguing from the consequences,
• a pari (arguing from similar propositions),
• a fortiori (arguing from an accepted conclusion to an even more evident one),
• a contrario (arguing from an accepted conclusion to the rejection of its contrary),
• undercut (attacking premises),
• or rebut (attacking conclusions).

The syntactical figures can be formally described by their form, i.e. syntax, and can therefore be easily automated. Although semantical figures such as arguing by authority may be formalized for particular domains, they are, in general, not formalizable. However, undercut and rebut are already sufficient to define the semantics of extended logic programs [8,9,17]. An extended logic program is defined as follows.

Definition 2.1 (Extended logic program). An extended logic program is a (possibly infinite) set of rules of the form

L_0 ← L_1, . . . , L_m, not L_{m+1}, . . . , not L_n  (0 ≤ m ≤ n),

where each L_i is an objective literal (0 ≤ i ≤ n). An objective literal is either an atom A or its explicit negation ¬A. Literals of the form not L are called default literals. Literals are either objective or default ones.
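For concreteness, the following Python sketch shows one way such rules could be represented as data; the class and field names are illustrative only and are not part of the systems described in this paper.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Literal:
    """An objective literal (atom or explicitly negated atom), possibly default-negated."""
    predicate: str
    args: Tuple[str, ...] = ()
    explicitly_negated: bool = False   # the literal ¬A
    default_negated: bool = False      # the default literal not L

@dataclass(frozen=True)
class Rule:
    """A rule L0 <- L1, ..., Lm, not Lm+1, ..., not Ln; a fact has an empty body."""
    head: Literal
    body: Tuple[Literal, ...] = ()

# Rule (b) of the Hamlet example below: in_heaven(X) <- kills(Y, X), praying(X).
rule_b = Rule(
    head=Literal("in_heaven", ("X",)),
    body=(Literal("kills", ("Y", "X")), Literal("praying", ("X",))),
)
```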

We will use extended logic programs to model arguments. As an example, consider the excerpt of the third scene in the third act of Shakespeare's Hamlet below.

Hamlet. [Approaches the entry to the lobby] Now might I do it pat, now a' is a-praying —
And now I'll do't, [he draws his sword] and so a' goes to heaven,
And so am I revenged. That would be scanned:
A villain kills my father, and for that
I his sole son do this same villain send
To heaven. . .
Why, this is hire and salary, not revenge.

Hamlet is caught in a conflict. On the one hand, he wants revenge for his father's murder. On the other, he knows that taking revenge by killing Claudius, the murderer, is not possible, since Claudius is praying at that very moment and would go to heaven, which contradicts the goal of having revenge. The text can be formalized as follows:


(a) praying(claudius).
(b) in_heaven(X) ← kills(Y, X), praying(X).
(c) took_revenge_on(X, Y) ← kills(X, Y).
(d) killed(claudius, king).
(e) ¬took_revenge_on(X, Y) ← in_heaven(Y).
(f) goal_revenge(X, Y) ← close(X, Z), killed(Y, Z), not justified(killed(Y, Z)).
(g) close(hamlet, king).
(h) ⊥ ← goal_revenge(X, Y), not took_revenge_on(X, Y).
(i) revisable(kills(hamlet, claudius), false).

In line 1, Hamlet realizes that Claudius is praying. This is represented as fact (a). In line 2, Hamlet continues that Claudius would go to heaven if killed while praying. Formally, this is an instantiation of the general rule (b). In line 3, Hamlet states that killing Claudius satisfies Hamlet's desire for revenge; or, more generally, (c). In line 4, Hamlet starts another line of thinking by mentioning the fact that Claudius killed Hamlet's father, the king (d). In line 7, Hamlet finds that he does not have revenge if he sends Claudius to heaven (e). Besides this direct translation of Hamlet's monologue into logic, we have to add further facts and rules which are mentioned throughout the preceding scenes or which are given implicitly. First of all, we need a rule expressing when someone wants revenge (f). That is, X wants to take revenge on Y if Y killed a person Z close to X, and the killing is not justified. Left implicit in the piece is the fact that Hamlet and his father are close to each other (g). To specify conflicting goals we use, besides facts and rules, integrity constraints, whose head is denoted by the bottom symbol ⊥. For this scene, we state formally that it is contradictory to want revenge and not have it (h). Finally, we have to specify the assumptions Hamlet is willing to change to resolve conflicts. For this scene, Hamlet adopts the default assumption of not killing Claudius. That is, (i) states that Hamlet killing Claudius is assumed false initially, but may be changed in the course of the argumentation.

To formalize human argumentation as in the previous example, one first has to detect the assumptions the protagonist is willing to change. These assumptions are made revisable and are assigned a default value. Secondly, the problem domain has to be modeled in terms of facts and rules. The two negations (not and ¬) are important for this modeling task. For example, not justified(killed(X, Y)) expresses that a murder is not justified as long as there is no explicit proof of the contrary. In contrast, ¬took_revenge_on(X, Y) ← in_heaven(Y) states that there is explicit evidence that X did not take revenge on Y if Y ends up in heaven. Besides the three ingredients of revisable assumptions, facts, and rules, we have to define which conclusions are contradictory. Naturally, we say that, for example, took_revenge_on(X, Y) and its explicit negation ¬took_revenge_on(X, Y) are contradictory, i.e. ⊥ ← took_revenge_on(X, Y), ¬took_revenge_on(X, Y), but for convenience we are at liberty to define further conflicts such as, for example, ⊥ ← goal_revenge(X, Y), not took_revenge_on(X, Y).

The following definitions for argumentation are based on [8,17].

Definition 2.2 (Argument). Let P be an extended logic program. An argument for a conclusion L is a finite sequence A = [r_m, . . . , r_n] of ground instances of rules r_i ∈ P such that:
1. for every n ≤ i ≤ m, for every objective literal L_j in the antecedent of r_i there is a k < i such that L_j is the consequent of r_k,
2. L is the consequent of some rule of A,
3. no two distinct rules in the sequence have the same consequent.
A sequence of a subset of rules in A that is itself an argument is called a subargument.

Example 2.1. The sequence below is an argument for the conclusion goal_revenge(hamlet, claudius):

goal_revenge(hamlet, claudius) ← close(hamlet, king), killed(claudius, king), not justified(killed(claudius, king));
close(hamlet, king) ← true;
killed(claudius, king) ← true.

To detect conflicting arguments and to resolve conflicts by changing revisable assumptions, we use an extension of the REVISE system [6]. We augmented the derivation procedure and contradiction removal algorithm of the original system with proof trace generation. These proof traces contain tags indicating whether a literal is an assumption that may be revised, whether it is a fact, which rule was applied, etc. Formally, a trace is defined as follows.


Definition 2.3 (Trace). The traces are composed of the following speech-act-like tags [23]:
1. The derivation of a default negated literal not L is accompanied by the tag propose_not(L) on entering the definition of not L, and by accept_not(L) if the proof for L fails and reject_not(L) otherwise.
2. If a revisable L is encountered during the proof, the tags revisable_assumed(L) and revisable_not_assumed(L) are generated, depending on the current truth value of L.
3. If a literal L is part of a loop, the tag loop(L) is generated.
4. If L is a fact, then fact(L) is generated.
5. If the proof involves rules for a goal L, then rule(L, Body) indicates the rule used in the proof and no_rule(L) that there is no rule. If a rule's body is proven, rule_succeeds(L, Body) is generated, and otherwise rule_fails(L, Body).
6. If a partial revision R is assumed, then assume(R) is generated; if R turns out to be a solution, assume_solution(R) is generated, and otherwise assume_closed(R).
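These tags can be read as a small event vocabulary emitted during a proof. The Python sketch below lists them as an enumeration and gives a fragment of a hypothetical trace for the Hamlet example; both the representation and the sample events are illustrative and do not reflect the actual output format of the extended REVISE system.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List

class Tag(Enum):
    """Speech-act-like trace tags of Definition 2.3 (names are illustrative)."""
    PROPOSE_NOT = auto()
    ACCEPT_NOT = auto()
    REJECT_NOT = auto()
    REVISABLE_ASSUMED = auto()
    REVISABLE_NOT_ASSUMED = auto()
    LOOP = auto()
    FACT = auto()
    RULE = auto()
    NO_RULE = auto()
    RULE_SUCCEEDS = auto()
    RULE_FAILS = auto()
    ASSUME = auto()
    ASSUME_SOLUTION = auto()
    ASSUME_CLOSED = auto()

@dataclass
class TraceEvent:
    tag: Tag
    literal: str       # the literal or revision the tag refers to
    body: str = ""     # rule body, for the rule-related tags

# A fragment of a hypothetical trace for the Hamlet example:
trace: List[TraceEvent] = [
    TraceEvent(Tag.ASSUME, "kills(hamlet, claudius) = false"),
    TraceEvent(Tag.RULE, "in_heaven(claudius)", "kills(hamlet, claudius), praying(claudius)"),
    TraceEvent(Tag.FACT, "praying(claudius)"),
]
```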

3. Visualizing single agent argumentation

Given the above formalization of argumentation, we can now turn to its visualization.

3.1. Visualizing the dynamic argumentation process

Visualization of proof trees, in general, has primarily focused on visualizing the control flow in an AND/OR tree by displaying the success or failure of rules and the associated unification process [3,7,16]. While these approaches are well suited for debugging or analyzing the execution of a logic program, they are not very visually pleasing and require knowledge about the execution of logic programs. Also, the detection and removal of contradictions, a central point of extended logic programs, cannot easily be visualized by the techniques used so far.

In contrast, we aim as part of the Ultima Ratio project [22] at visualizing the process of argumentation in a form understandable by humans even without basic knowledge of the foundations of formal logic. To achieve this goal, we use the dynamic construction of proof trees as a metaphor for the process of argumentation. The tree construction is a representation of both the argument (the logic) and the execution events (the control flow) in the logic program. While visualizing the current state of execution of a logic program has been done with tree structures before, our approach differs mainly in the dynamics of the presentation. Rather than only changing attributes of the visualization (such as the color of an argument) in a more or less static view, we generate an interactive animation of the whole process. The goal of this animation is to enable users to navigate through the argumentation space as well as the argumentation process.

The dynamics of an argumentation process is visualized using a forest of trees for the different argumentation processes based on different assumptions, with colored nodes representing the arguments' conclusions. The tags of Definition 2.3 are visualized as follows:
• The tag assume indicates the generation of a new argumentation tree.
• Once the tree is fully processed, assume_closed and assume_solution indicate the derivation of a conflict and of its contrary, respectively. This event is represented by two rotating, orange ellipses. If a conflict is derived (assume_closed) the ellipses intersect, otherwise they do not.
• The structure of the tree is determined by the tag rule, creating internal nodes in the tree, and the tag fact, creating the leaves of the argumentation tree. Initially, the tree is constructed rapidly, moving from the root to the leaves. All nodes are left open and rotate, indicating that it is not yet known whether they are valid or not.
• In the next phase, the tree is traversed bottom-up. The current, rotating, open node is filled by a yellow or blue node, which moves to the open node. The filling color is yellow if the node forms a valid conclusion (rule_succeeds) and blue otherwise (rule_fails, no_rule).

Interestingly, users favored the visualization combining a fast top-down traversal followed by a slow and detailed bottom-up traversal over pure top-down or bottom-up traversal. This is remarkable, since it corresponds technically exactly to the magic-set algorithm used in deductive databases. This algorithm combines the advantages of top-down and bottom-up evaluation to improve performance and appears to capture human reasoning closely.

The argumentation process is visualized as a 3D tree, which allows the user to navigate in the argumentation space — unguided or using an auto-pilot. In the implementation of Ultima Ratio [22], which has been exhibited at Ars Electronica 1998 and at the Canon Art Lab, Tokyo, we adorned nodes with suitable video sequences where possible. As an example, consider Hamlet. Fig. 2 shows a screenshot of Ultima Ratio where Claudius is praying. In the foreground, the orange node labeled praying(claudius) indicates the successful proof of this fact. The background shows the rest of the proof tree.

Fig. 2. Ultima Ratio's [22] visualization of an argumentation process. Should Hamlet take revenge on Claudius who is praying?

Fig. 3. Crossovers — tracing Motifs: a 2D dependency graph of all arguments. The advantage of the 3D representation in Fig. 4 is obvious.

3.2. Visualizing the static argumentation space

Besides the dynamic presentation of the argumentation process, it is of value for the user to navigate through the static space of all possible arguments. This space does not only contain a single piece, but a variety of pieces, each being a cluster of its arguments. In the Ultima Ratio project, the argumentation space comprises, for example, Shakespeare's Hamlet, Ilsa and Rick in Casablanca, Siegfried, Krimhild, Brunhild, Hagen and Etzel of the German saga Nibelungenlied, Euripides' Medea, Molière's Don Juan, the artist Duchamp and his readymades, Robocop, and Machiavelli. As it turns out, many arguments are fundamental and therefore occur in many pieces. Hamlet's revenge argument occurs, for instance, also in the Nibelungenlied. Similarly, the topic of offence connects Don Juan to Medea. Initially, we visualized these dependencies between arguments in 2D. Fig. 3 clearly shows the limitations of this effort. The structure is too complex to fit on a screen. Furthermore, it is not visible which arguments belong to which pieces and which occur in several other pieces. In the 3D visualization implemented in VRML, one clearly sees the clusters and how they are connected. Every cluster is additionally covered by a semi-transparent cloud, which contains the name of the piece. The arguments occurring in several pieces are connected by a different arc than those arguments within a cluster. Fig. 4 shows three clusters and arcs connecting clusters.

Fig. 4. Crossovers — tracing Motifs: Hamlet's rules for revenge are also part of Krimhild and Etzel, agents of the Nibelungen saga. Clouds, such as Duchamp, Hamlet, Don Juan, etc., are connected if they contain the same arguments.

4. Visualizing arguing agents

The previous section explained how to visualize an argumentation process internal to a single agent; in this section we consider multiple agents, which argue and cooperate with each other. These two types of interaction — argumentation and cooperation — are fundamental. An agent which does not know anything about a certain literal cooperates with others, which help out and possibly provide the knowledge. As for argumentation, an agent believes in something and argues with other agents to determine whether this belief is valid or has to be revised. When arguing, we can distinguish skeptical and credulous agents. The former are more critical towards their own beliefs than the latter. Technically, skeptical agents accept undercuts and rebuts to their own arguments, whereas credulous agents accept only undercuts. For argumentation, the agents use the speech acts [23] propose, oppose, and agree, and for cooperation ask and reply.

To reduce the number of messages exchanged, an agent will ask only its cooperation partners for help, and only those whose domain of expertise covers the issue in question. Similarly, an agent proposes its beliefs only to its argumentation partners with the corresponding domain of expertise. All in all, an agent is defined by
1. an avatar to represent the agent,
2. a set of arguments,
3. a set of predicate names defining the agent's domain of expertise,
4. a flag indicating whether the agent is credulous or skeptical,
5. a set of cooperation partners,
6. a set of argumentation partners.
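These six ingredients map naturally onto a small record. The Python sketch below is one possible rendering; the field names and types are assumptions for illustration and do not mirror the HTML form of Fig. 5.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Set, Tuple

class Attitude(Enum):
    SKEPTICAL = "skeptical"   # accepts undercuts and rebuts against its own arguments
    CREDULOUS = "credulous"   # accepts only undercuts

@dataclass
class Agent:
    """The six ingredients of an agent definition listed above."""
    avatar: str                               # e.g. a VRML avatar file
    arguments: Tuple[str, ...]                # the agent's rules/arguments (kept abstract here)
    expertise: Set[str]                       # predicate names defining the domain of expertise
    attitude: Attitude
    cooperation_partners: Set[str] = field(default_factory=set)
    argumentation_partners: Set[str] = field(default_factory=set)

# Speech acts used for argumentation and cooperation (Section 4).
SPEECH_ACTS = {"propose", "oppose", "agree", "ask", "reply"}

hamlet = Agent(
    avatar="hamlet.wrl",
    arguments=("goal_revenge(X,Y) <- close(X,Z), killed(Y,Z), not justified(killed(Y,Z))",),
    expertise={"goal_revenge", "took_revenge_on"},
    attitude=Attitude.SKEPTICAL,
)
```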

Fig. 5 shows a form to specify such an agent. The user can query an agent about its beliefs. The query results in a conversation among the agents to establish the answer to the query. The main idea of the visualization of multi-agent argumentation is to represent the agents by avatars and the conversation by animating the message exchange between the avatars. To follow the conversation the user may choose between two options: either he/she takes a static, fixed viewpoint in the space, or a dynamic one which follows the current message as it flies from sender to receiver. Since the messages are part of different speech acts, we may code this by the color or shape of the message. In a broader context, a message is sent only at a certain stage of the overall argumentation protocol. Therefore, we can visualize the state of the negotiation within the avatar. Again, we can simply code the state into the avatar's color or, more elaborately, into its facial expression. Chernoff faces [4] (see Fig. 6), which were invented in the 1970s to visualize multivariate data, are an early approach in this direction. Although they are not accurate in capturing exact values of variables, they are useful for capturing a limited number of qualitative values. Using facial expressions is also studied extensively for believable agents [2,13,24]. Other agent properties, such as the number of messages exchanged or the agent's credulousness, can be reflected in the size and color of the agent.

Fig. 5. An HTML form to define agents.

Fig. 6. A set of Chernoff faces [4].

Fig. 7. A screenshot of an animated argumentation trace for multiple agents.

In our current implementation, which is part of ACA, the arguing and cooperating agents framework [20], the user selects an avatar to represent the agent (see Fig. 5). The conversation is then animated as described above. Fig. 7 shows a screenshot of such a conversation for a BT business process to provide customer quotes [20].

5. Visualizing agent relations

In the previous section, we focused on visualizing the conversation of multiple agents using the extrinsic approach, coding agent and conversation properties into avatars, color, shape, and size. This section is devoted to the intrinsic approach, where we show how to map the relations of agents to spatial dimensions.

5.1. A distance metric for communicating agents

Those agents heavily engaged in a conversation should be close to each other, whereas agents not talking to each other should be far away from each other. In [21], we define metrics so that the agents' characteristics are mapped to a distance. It is crucial that the mapping leads to a mathematically well-defined distance, such that agents have distance 0 to themselves and that the distance is positive otherwise. Furthermore, the distance has to be symmetric and satisfy the triangle inequality, which states that the direct distance between two agents is the shortest.


Definition 5.1 (Distance table). A matrix D = (d_ij) ∈ R^{n×n} is a distance table
1. if it is non-negative, i.e. d_ij > 0 if i ≠ j and d_ii = 0, and
2. if it is symmetric, i.e. d_ij = d_ji, and
3. if it satisfies the triangle inequality, i.e. d_ij ≤ d_ik + d_kj.

Given the messages exchanged between agents, we want to compute a distance between them. We want the distance function to be decreasing, i.e. the more messages exchanged, the closer the agents; the fewer, the farther apart. The following metric serves this purpose.

Definition 5.2 (Distance of communicating agents). Let m_ij ≥ 0 denote the number of messages exchanged between i and j for i ≠ j. For i = j, let m_ii be the overall number of messages sent and received by agent i, i.e. m_ii = Σ_{j=1, j≠i}^n m_ij.

Then the distance of communicating agents is defined as

d_ij = m_ii + m_jj − m_ij  if i ≠ j,  and  d_ij = 0  otherwise.

The above definition is actually a distance.

Theorem 5.1. The distances of communicating agents form a distance table.

Proof. The distance of communicating agents satisfies the first two properties of a distance table since, by definition, the number of messages exchanged is non-negative, m_ii ≥ m_ij, and it is symmetric, respectively. To prove the triangle inequality, d_ij ≤ d_ik + d_kj, let us assume without loss of generality i ≠ j and k ≠ i, j. From m_ij ≥ 0 and m_kk ≥ m_ik, m_kk ≥ m_kj we conclude m_ij ≥ m_ik + m_kj − 2m_kk, which is equivalent to m_ii + m_jj − m_ij ≤ m_ii + m_kk − m_ik + m_kk + m_jj − m_kj and thus d_ij ≤ d_ik + d_kj. □
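Definition 5.2 translates directly into code. The following Python sketch computes the distance table from a symmetric matrix of pairwise message counts; the message counts below are made-up example data.

```python
import numpy as np

def message_distance(messages: np.ndarray) -> np.ndarray:
    """Distance of communicating agents (Definition 5.2):
    d_ij = m_ii + m_jj - m_ij for i != j, and d_ii = 0,
    where m_ii is the total number of messages agent i sent and received."""
    m = messages.astype(float)
    totals = m.sum(axis=1)                       # m_ii = sum over j != i of m_ij (diagonal is 0)
    d = totals[:, None] + totals[None, :] - m    # m_ii + m_jj - m_ij
    np.fill_diagonal(d, 0.0)                     # d_ii = 0
    return d

# Made-up message counts for four agents (symmetric, zero diagonal).
msgs = np.array([[0, 10, 2, 0],
                 [10, 0, 2, 0],
                 [2, 2, 0, 1],
                 [0, 0, 1, 0]])
print(message_distance(msgs))
```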

5.2. A methodology to design distance metrics

Once we come up with a definition d_ij of a distance, we can derive a variety of distance functions by applying any monotonic, sub-additive function which is positive on R+ and vanishes at zero. A function f is monotonic if x ≤ y implies f(x) ≤ f(y); it is sub-additive if it satisfies f(x + y) ≤ f(x) + f(y).

Theorem 5.2. Let f : R+ → R+ be a monotonic, sub-additive function such that f(0) = 0, and let (d_ij) be a distance table. Then (f(d_ij)) is a distance table.

Proof. By Definition 5.1 we have to show that (f(d_ij)) is positive, symmetric and satisfies the triangle inequality. The first property holds since f(d_ii) = f(0) = 0 by definition of d_ii and f, and f(d_ij) > 0 since d_ij > 0 and f is positive on R+. Second, symmetry, f(d_ij) = f(d_ji), is given since d_ij = d_ji. Third, to prove the triangle inequality, we apply f to d_ij ≤ d_ik + d_kj and obtain, by monotonicity and sub-additivity of f, f(d_ij) ≤ f(d_ik + d_kj) ≤ f(d_ik) + f(d_kj). □

The requirement of sub-additivity in the above theorem can be guaranteed if one considers concave and monotonic functions. Concavity is easy to check, since by definition a twice differentiable function f is concave if and only if f″ ≤ 0. In particular, scaling by a logarithm is very useful if clusters are too crowded, since the logarithm increases the distance of close nodes more than that of nodes far away from each other.

Example 5.1. Let d_ij be the distance of communicating agents; then applying the logarithm still yields a valid distance definition, i.e. (log(d_ij + 1)) is a distance table.
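The sketch below checks the three conditions of Definition 5.1 numerically and confirms, for a small example table, that the logarithmic rescaling of Example 5.1 preserves them; the helper function is illustrative and not part of [21].

```python
import numpy as np

def is_distance_table(d: np.ndarray, tol: float = 1e-9) -> bool:
    """Check the conditions of Definition 5.1: non-negativity with a zero
    diagonal, symmetry, and the triangle inequality."""
    nonneg = bool(np.all(d >= -tol) and np.allclose(np.diag(d), 0.0))
    symmetric = bool(np.allclose(d, d.T))
    # d_ij <= d_ik + d_kj for all k: no detour over a third point may be shorter.
    detour = (d[:, :, None] + d[None, :, :]).min(axis=1)
    triangle = bool(np.all(d <= detour + tol))
    return nonneg and symmetric and triangle

d = np.array([[0.0, 2.0, 3.0],
              [2.0, 0.0, 4.0],
              [3.0, 4.0, 0.0]])
assert is_distance_table(d)
assert is_distance_table(np.log1p(d))   # Example 5.1: log(d_ij + 1) is again a distance table
```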

5.3. From distances to points in space

So, given the distance table D = (d_ij) ∈ R^{n×n}, we want to position the agents. Using the Euclidean distance as defined below, we can formulate our problem formally.

Definition 5.3 (Euclidean distance ‖·‖_2). Let v, w ∈ R^m; then ‖v, w‖_2 = √(Σ_{h=1}^m (v_h − w_h)²) is called the (Euclidean) distance.

Problem. We have to define an algorithm which computes a matrix X = (x_1, . . . , x_n) ∈ R^{m×n} for a distance table D = (d_ij) ∈ R^{n×n} such that ‖x_i, x_j‖ = d_ij, i.e. the distance between x_i and x_j is d_ij.

In [21], we experimented with two approaches to solve this problem. The first one is based on matrix theory and singular value decomposition [11]. It comes up with an exact solution if such a solution exists, and otherwise with an approximation. The second approach is based on spring embedding [18], a heuristic which cannot guarantee to find the best solution but which is dynamic and flexible.

Fig. 8. Constructing a geometrical solution using a compass.

Intuitively, it is easy to construct a geometric solution. Consider Fig. 8. We place a compass at a random position on the paper and start drawing circles with radii d_1i. Then we position the compass somewhere on circle d_12 and start drawing circles with radii d_2i. For the third point we position the compass on the intersection of d_13 and d_23 and draw all circles with radii d_3i, and so on. This method determines if there is a solution to the problem and constructs it. If we abstract from this geometric construction, it turns out that we actually solve quadratic equations. For example, the third point in the figure is the intersection (x, y) of the two circles d_13 and d_23. This means we solve the system of quadratic equations:

(x − x_01)² + (y − y_01)² = d_13²,
(x − x_02)² + (y − y_02)² = d_23².
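The compass construction thus reduces to intersecting two circles. The following Python helper solves the pair of quadratic equations above for a third point, given its distances to two already placed points; it is a sketch with made-up coordinates, not the general algorithm of [21].

```python
import math
from typing import List, Tuple

def circle_intersections(c1: Tuple[float, float], r1: float,
                         c2: Tuple[float, float], r2: float) -> List[Tuple[float, float]]:
    """Solve (x - x01)^2 + (y - y01)^2 = r1^2 and (x - x02)^2 + (y - y02)^2 = r2^2."""
    (x1, y1), (x2, y2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                   # no placement consistent with both distances
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d)       # distance from c1 to the chord
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))        # half-length of the chord
    px, py = x1 + a * dx / d, y1 + a * dy / d       # foot of the perpendicular on the center line
    return [(px + h * dy / d, py - h * dx / d),
            (px - h * dy / d, py + h * dx / d)]

# Place a third agent at distance 5 from both already placed agents (0,0) and (6,0).
print(circle_intersections((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))   # -> [(3.0, -4.0), (3.0, 4.0)]
```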

In [21], we generalized this approach and devel-oped an algorithm based on singular value decom-position [11] to solve such equations. The algorithmhas the property that it finds always an exact solu-tion provided the input is a distance table. However,this solution may be higher dimensional; it has theproperty that the lower the dimension the more itcontributes to the distance we try to achieve. There-fore using the first three dimensions yields the bestapproximation. There are methods to visualize suchhigher-dimensional solutions, such as starplots [10], as

Fig. 9. A starplot [10].

shown in Fig. 9 for example. They are, however, unin-tuitive and difficult to perceive. An alternative is to usemore than the three spatial dimension to visualize thehigher-dimensional solution by mapping the fourth,fifth, and sixth dimension to the colors red, yellow, andblue.
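The details of the SVD-based algorithm are given in [21]. As an indication of the idea, the sketch below uses the standard classical multidimensional scaling construction, which recovers coordinates from a distance table via the eigendecomposition of the double-centered squared distances; it may differ in detail from the algorithm of [21].

```python
import numpy as np

def embed_from_distances(d: np.ndarray, dims: int = 3) -> np.ndarray:
    """Classical multidimensional scaling: positions whose pairwise Euclidean
    distances reproduce `d` exactly if such an embedding exists, and
    approximately (via the largest eigenvalues) otherwise."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # Gram matrix of the centered coordinates
    eigvals, eigvecs = np.linalg.eigh(b)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:dims]   # keep the `dims` largest
    lam = np.clip(eigvals[order], 0.0, None)   # negative eigenvalues: no exact Euclidean solution
    return eigvecs[:, order] * np.sqrt(lam)    # one row of coordinates per agent

d = np.array([[0.0, 4.0, 5.0],
              [4.0, 0.0, 3.0],
              [5.0, 3.0, 0.0]])                # a 3-4-5 right triangle
print(embed_from_distances(d, dims=2))         # recovers the triangle up to rotation/reflection
```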

The second approach described in [21] is spring embedding [18]. For the layout of the agents with the right distances, it uses the physical metaphor of springs connecting the agents, leading to attractive forces based on the desired distance between two agents. Additionally, there are repulsive forces between any pair of agents. The algorithm starts with a random layout and then, for a number of iterations, computes the forces for each agent and moves it a bit in the direction of the overall force.

An advantage of spring embedding is that it is a flexible any-time algorithm which works incrementally and thus allows new agents to be added to a generated solution. This feature is not available in the singular value decomposition algorithm, although it could be added by using the power method to compute eigenvalues incrementally. Besides spring embedding's efficiency and flexibility, it has two drawbacks: it is not possible to assess whether the overall optimal solution has been computed, and it is difficult to tune, because the modeling of the forces is critical for the quality of the solution computed.
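As an illustration of the idea, the following sketch implements a simple force-directed layout: springs pull each pair of agents towards its target distance, and a small repulsive term keeps agents apart. The concrete force model, step size, and iteration count are assumptions for illustration, not the parameters used in [21] or [18].

```python
import numpy as np

def spring_layout(target: np.ndarray, dims: int = 3, iters: int = 500,
                  step: float = 0.01, repulsion: float = 0.5, seed: int = 0) -> np.ndarray:
    """Heuristic force-directed layout for a distance table `target`."""
    rng = np.random.default_rng(seed)
    n = target.shape[0]
    x = rng.standard_normal((n, dims))                    # random initial layout
    for _ in range(iters):
        diff = x[:, None, :] - x[None, :, :]              # x_i - x_j for all pairs
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)                       # avoid division by zero; self terms vanish
        unit = diff / dist[:, :, None]
        spring = (target - dist)[:, :, None] * unit       # pull together if too far, push if too close
        repel = repulsion * unit / (dist ** 2)[:, :, None]  # keep agents from colliding
        x += step * (spring + repel).sum(axis=1)          # move each agent along its net force
    return x

target = np.array([[0.0, 4.0, 5.0],
                   [4.0, 0.0, 3.0],
                   [5.0, 3.0, 0.0]])   # a 3-4-5 triangle, so an exact 2D layout exists
print(spring_layout(target, dims=2))
```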


Both algorithms still work if some of the properties of distance tables are violated. For example, the singular value decomposition algorithm can cope with asymmetric distances, for which it takes a value in between. Both algorithms also admit negative distances, which are either taken as 0 (spring embedding) or as their absolute value (SVD). In case the triangle inequality is violated, both algorithms "repair" the violation, spring embedding with less error.

5.4. Examples

Fig. 10 shows an example of communicating agents that are part of a BT business process to provide a customer quote [14,20,25]. In the figure, most agents stick together, but there is one in the distance which did not participate in the negotiations.

For this particular application, spring embedding turned out to be a very useful technique to visualize the clustering of agents over time. Rather than considering the overall number of messages exchanged, the spring embedding algorithm, being an any-time algorithm, allows the relations of agents to be visualized dynamically by decreasing the distance of a sender and a receiver and increasing the distance of all other pairs of agents [5].

Fig. 10. A snapshot of the communicating agents. Bright spheres are generated by the matrix theory algorithm, dark ones by spring embedding.

Another experiment for agent communities visualizes personal agents with possibly shared interests. From a data set of 16 000 accessed web pages, we defined a matrix of shared interests and hence a distance matrix. The snapshot in Fig. 11 shows a top view of some 20 agents. The bright and dark spheres are the solutions based on matrix theory and spring embedding, respectively. The lower right arm of the star of bright spheres is, for example, a group of students. The two algorithms come up with different layouts, partly because their solutions are approximations. The matrix theory algorithm tends to map close agents onto each other, since their most significant coordinates are the same; the spring embedding algorithm takes care that no agents collide. While this is often desirable, it can prevent the clustering of agents where necessary, as visible in Fig. 11.

Fig. 11. A snapshot of the personal agents. Bright spheres are generated by the matrix theory algorithm, dark ones by spring embedding.

6. Conclusion

Visualization of argumentation is a promising approach to make the complex reasoning of single and multiple agents more intuitively understandable without knowledge of the foundations of logic. Although there has been some work on the visualization of logic programs [3,7,16], these approaches do not use the full power of visualization by dynamically constructing proof trees and using both extrinsic and intrinsic [1] approaches. In contrast, Ultima Ratio [22], described in Section 3, includes these aspects in its visualization of argumentation.


It turned out that users prefer a combination of top-down and bottom-up evaluation, which corresponds to the magic-set technique used in deductive databases. Furthermore, we showed that arguments appearing in different contexts can be visualized very well in a 3D argumentation space, where arguments are clustered according to their pieces and where bridges between them show common patterns. The clusters were additionally visualized using semi-transparent clouds to emphasize the hierarchical nature.

Concerning the visualization of multi-agent argumentation, an animated trace of the argumentation process was introduced in Section 4. It adds value if both intrinsic and extrinsic visualization approaches are used. We achieved the latter through the use of avatars, which can code some of the agent properties into their shapes, sizes, and facial expressions. For the former, we showed in Section 5 how to map the agents' communicative behavior to spatial dimensions. In particular, we defined a metric for communicating agents and proved that it defines a distance in the mathematical sense. In Theorem 5.2, we also gave a criterion for deriving other suitable distance functions. Furthermore, we reviewed and discussed an algorithm based on singular value decomposition and one based on spring embedding to position agents in space given their distances.

To summarize, the main contributions of this paper are a visualization of single-agent and multi-agent argumentation. To the best of our knowledge, there is no other similar project. The reasoning systems and more information are available online at www.soi.city.ac.uk/homes/msch.

Acknowledgements

First and foremost, I would like to thank Daniela Plewe, who initiated and led the Ultima Ratio project. I am indebted to Andreas Raab for some of the Ultima Ratio screenshots. I would also like to thank Iara Mora and Gerd Wagner for their cooperation on argumentation.

References

[1] M. Benedikt, Cyberspace: some proposals, in: M. Benedikt (Ed.), Cyberspace: First Steps, MIT Press, Cambridge, MA, 1991, pp. 273–302.

[2] T.W. Bickmore, L. Cook, E.F. Churchill, J.W. Sullivan, Animated autonomous personal representatives, in: Proceedings of the Second International Conference on Autonomous Agents, AA98, ACM, New York, 1998, pp. 8–15.

[3] M. Brayshaw, M. Eisenstadt, A practical graphical tracer for Prolog, Int. J. Man–Machine Studies 35 (5) (1991) 597–631.

[4] H. Chernoff, Using faces to represent points in k-dimensional space graphically, J. Am. Statist. Assoc. 68 (1973) 361–368.

[5] Z. Cui, B. Odgers, M. Schroeder, Distributed system visualisation and monitoring using software agents, Technical Report, BT Labs, Ipswich, UK, 1998, Filed Patent.

[6] C.V. Damásio, L.M. Pereira, M. Schroeder, REVISE: logic programming and diagnosis, in: Proceedings of the Conference on Logic Programming and Non-monotonic Reasoning LPNMR97, LNAI 1265, Springer, Berlin, 1997.

[7] A.D. Dewar, J.G. Cleary, Graphical display of complex information within a Prolog debugger, Int. J. Man–Machine Studies 25 (5) (1986) 503–521.

[8] P.M. Dung, An argumentation semantics for logic programming with explicit negation, in: Proceedings of the 10th International Conference on Logic Programming, MIT Press, Cambridge, MA, 1993, pp. 616–630.

[9] P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77 (2) (1995) 321–357.

[10] S.E. Fienberg, Graphical methods in statistics, Am. Statist. 33 (1979) 165–178.

[11] J.L. Goldberg, Matrix Theory with Applications, McGraw-Hill, New York, 1991.

[12] N. Goodman, Languages of Art: An Approach to a Theory of Symbols, Hackett, Indianapolis, 1976.

[13] B. Hayes-Roth, R. van Gent, Story-making with improvisational puppets, in: Proceedings of the First International Conference on Autonomous Agents, AA97, ACM, New York, 1997, pp. 1–7.

[14] N.R. Jennings, P. Faratin, M.J. Johnson, P. O'Brien, M.E. Wiegand, Using intelligent agents to manage business processes, in: Proceedings of the First Conference on Practical Applications of Intelligent Agents and Multi-Agents: PAAM'96, London, UK, 1996.

[15] J. Lester, B. Stone, Increasing believability in animated pedagogical agents, in: Proceedings of the First International Conference on Autonomous Agents, AA97, ACM, New York, 1997.

[16] E. Neufeld, A. Kusalik, M. Dobrohoczki, Visual metaphors for understanding logic program execution, in: W. Davis, M. Mantei, V. Klassen (Eds.), Graphics Interface, 1997, pp. 114–120.

[17] H. Prakken, G. Sartor, Argument-based extended logic programming with defeasible priorities, J. Appl. Non-Classical Logics 7 (1) (1997).

[18] N.R. Quinn, M.A. Breuer, A force directed component placement procedure for printed circuit boards, IEEE Trans. Circuits and Systems CAS-26 (6) (1979) 377–388.

[19] J. Rickel, L. Johnson, Integrating pedagogical capabilities in a virtual environment agent, in: Proceedings of the First International Conference on Autonomous Agents, AA97, ACM, New York, 1997.

[20] M. Schroeder, An efficient argumentation framework for negotiating autonomous agents, in: Proceedings of Modelling Autonomous Agents in a Multi-Agent World MAAMAW99, LNAI 1647, Springer, Berlin, 1999.

[21] M. Schroeder, Using singular value decomposition to visualise relations within multi-agent systems, in: Proceedings of the Third Conference on Autonomous Agents, Seattle, USA, 1999, ACM, New York.

[22] M. Schroeder, D.A. Plewe, A. Raab, Ultima Ratio — should Hamlet kill Claudius? in: Proceedings of the Second Conference on Autonomous Agents, Minneapolis, USA, 1998, ACM, New York.

[23] J.R. Searle, Speech Acts, Cambridge University Press, Cambridge, UK, 1969.

[24] K.R. Thorisson, Real-time decision making in multi-modal face-to-face communication, in: Proceedings of the Second International Conference on Autonomous Agents, AA98, ACM, New York, 1998, pp. 16–23.

[25] M. Wiegand, P. O'Brien, Adept: an application viewpoint, in: Proceedings of Intelligent Systems Integration Programme Symposium, 1996.

[26] S. Yoshida, K. Kamei, M. Yokoo, T. Ohguro, K. Funakoshi, F. Hattori, Visualizing potential communities: a MA approach, in: Proceedings of the Second International Conference on Multi-Agent Systems ICMAS98, Paris, France, 1998.

Michael Schroeder is a lecturer at the City University, London, UK. He has studied Computer Science at the Universities of Karlsruhe, Aachen, Lisbon, and Hanover, where he received his Ph.D. He is the author of over 35 articles on agents, logic, distributed systems, and visualization. As an employee of British Telecom he co-authored a patent on visualization of distributed systems.