

Privacy, deontic epistemic action logic and software agents

An executable approach to modeling moral constraints in complex informational relationships

V. Wiegel*, M. J. Van den Hoven**, and G. J. C. Lokhorst***
Technical University Delft, Faculty of Policy, Technology and Management, P. O. Box 5, 2600 AA Delft, The Netherlands
E-mails: [email protected]; [email protected]; [email protected]

Abstract. In this paper we present an executable approach to modeling interactions between agents that involve sensitive, privacy-related information. The approach is formal and based on deontic, epistemic and action logic. It is conceptually related to the Belief-Desire-Intention model of Bratman. Our approach uses the concept of sphere as developed by Walzer to capture the notion that information is mostly provided with restrictions regarding its application. We use software agent technology to create an executable approach. Our agents hold beliefs about the world, have goals and commitments to those goals. They have the capacity to reason about different courses of action, and communicate with one another. The main new ingredient of our approach is the idea of modeling information itself as an intentional agent whose main goal is to preserve the integrity of the information and regulate its dissemination. We demonstrate our approach by applying it to an important process in the insurance industry: applying for life insurance.

In this paper we will: (1) describe the challenge organizational complexity poses in moral reasoning about informational relationships; (2) propose an executable approach, using software agents with reasoning capacities grounded in modal logic, in which moral constraints on informational relationships can be modeled and investigated; (3) describe the details of our approach, in which information itself is modeled as an intentional agent in its own right; (4) test and validate it by applying it to a concrete 'hard case' from the insurance industry; and (5) conclude that our approach holds up and offers potential for both research and practical application.

Key words: action logic, deontic, epistemic, insurance, privacy, software agents

Problem description

Some of the most pressing ethical issues involving the handling of sensitive information are to be found in complex organizations in which many people are engaged in different roles dealing with distributed information. Each has his particular set of rights, obligations, sources of information and misinformation, and so forth. The whole complex of informational relationships and interests becomes very hard to oversee. As Van den Hoven and Lokhorst (2002) note:

"...manual reasoning quickly gets overwhelmed. How should one delegate responsibilities, safeguard the flow of sensitive information, protect privacy, and so on, in today's complex organizational environments? Reasoning about such issues may be trivial so long as one is looking at the level of the individual agents, but the totality may be of mind-boggling complexity." (p. 287)

Information pervades every corner of life. Life is unthinkable without the technologies that have been developed to deal with all the data that we produce. Companies, private citizens and governmental organizations all deal with the use, application and distribution of data, but they do so from different perspectives. When the multitudes of roles that are involved are also taken into account, the complexity is very daunting indeed.

* Vincent Wiegel works for a multi-national company in the financial services industry. This article has been written in a private capacity. He has a Master's degree in both economics and philosophy, and is attached as an extra-facultary researcher to the Delft University of Technology.

** Jeroen van den Hoven is part-time full professor (Socrates Chair) at the Department of Philosophy of the Faculty of Technology, Policy and Management at Delft University of Technology. He is also Professorial Fellow at the Centre for Applied Philosophy and Public Ethics at The Australian National University, Canberra.

*** Gert-Jan Lokhorst M.Sc., MA, PhD, is an assistant professor at the Department of Philosophy of the Faculty of Technology, Policy and Management at Delft University of Technology. He has published numerous papers on philosophical logic and is currently working on a research project entitled "Medical Images in the Health Care Process".

Ethics and Information Technology (2005) 7:251–264. © Springer 2006. DOI 10.1007/s10676-006-0011-5

The complexity arises from the numbers and the fragmentation. The challenge it poses to moral reasoning arises from the fact that we cannot just extrapolate our moral reasoning from the individual-to-individual level to the organizational level. We do not know for sure that our moral reasoning still applies there. Moreover, we have at least an intuitive notion that it might not be applied without modification. In large organizations we separate tasks and the obligations and rights associated with them, while we distribute subsets of information that were originally provided as wholes, governed by specific sets of conditions and moral restrictions that were not designed with a view to the later partitioning.

With the rising intensity and complexity of data exchange, instruments to control the use of data, to ensure their proper use and to prevent misuse become more and more important. Legislative measures are being developed and implemented to this purpose. But given the scale (unimaginable numbers of data transactions are carried out each second) and scope (many transactions cross borders) it is unlikely that this will suffice.

The challenge for practitioners and academic researchers alike is to find tools that abstract from the overwhelming detail while at the same time remaining sufficiently rich to reflect the enormous complexity. In this paper, we propose to bring together several threads of research from various fields in an attempt to provide an approach that is (a) formal yet practicable, (b) able to deal with complexity, and (c) executable. This approach can be used by researchers to set up experiments and to investigate, for example, emergent behavior in large organizations; it can also be followed by practitioners to set up environments in which the handling of information is more secure than when it is governed only by paper-based rules.

Solution

In this paper we present an executable approach to model interactions between agents that involve sensitive, privacy-related information. The approach is formal, based on deontic, epistemic and action logic. It is conceptually related to the Belief-Desire-Intention model (BDI model) of Bratman (1987). We add to our approach the concept of a sphere as developed by Walzer (1983) to capture the notion that information is mostly provided with restrictions regarding its application. We use software agent technology to create our executable approach. This serves two purposes. One, it enables academic research on a scale that cannot be achieved through armchair philosophy, simply because the numbers are too big. In addition, preparing a theory for implementation is a real challenge, because it forces one to think of all elements that have been subsumed under the ceteris paribus clause, have been forgotten, etc. Two, it provides avenues for operational application outside the academic realm.

Our agents hold beliefs about the world, have goals and commitments, and form intentions. They have the capacity to reason about different courses of action, and can communicate with one another across any number of network domains. The key element of our approach is the modeling of information itself as an (intentional) agent in its own right, whose main goal is to preserve the integrity of the information and regulate its dissemination.

Modeling and implementation

We use different forms of modal logic to formalize informational relationships. These forms have been brought together in DEAL, deontic epistemic action logic (Van den Hoven and Lokhorst 2002). DEAL draws upon several well-known and widely accepted modal logics. We extend it here with the notion of spheres, which captures the fact that information does not have the same status in different situations. Information acquired in one situation cannot simply be re-used in or distributed to different situations. However sophisticated, such a logical framework alone cannot deal with complexity, nor is it executable. The key to dealing with both complexity and execution is the same: providing a computer-based implementation of the logic. Using the computer to execute the logic potentially provides us with a means to handle real-life complexity. If and when proven satisfactory, the same technique can be used for implementation and execution in practice. All this is easier said than done, though.

We propose to use agent technology (Russell 2003; Wooldridge 2000, 2002) as the paradigm for our executable framework. It provides concepts that fit nicely with the situations we would like to investigate: individuals in a particular capacity dealing with information, sharing it with other individuals who may or may not apply, re-use, or distribute that information, and so forth. This solution is scalable and executable, as it runs as software on computers, the very environments where information is produced and stored. The implementation is done using a particular software package for constructing software agents, JACK (AOS 2004). In addition to being based on the BDI model, it provides a good fit with the modalities of DEAL. The fit, however, is not complete, obligations not being an explicit part of JACK. The addition of obligations is not problematic since they can be seen as an extension to the BDI model (Broersen et al. 2001; Dastani et al. 2001a, b).


Conceptual aspects

In addition to the above generic implementation aspects, we add three aspects of a conceptual nature. One, we maintain that in informational relationships all events that set moral reasoning and activities in motion can be categorized as being of two types: (a) requests for information, and (b) changes in data. Two, information does not have the same status in different situations, and its meaning might not even be the same in different situations; moreover, at the time information is provided, this is generally done with implicit restrictions regarding its use. Three, information can be modeled as an (intentional) agent in its own right.

Triggers

Nothing happens without some trigger. In the case of informational relationships there are two, and no more, categories of triggers: (a) requests for information, and (b) a change in data.

The first occurs when someone needs particular information to achieve his goal. He will look for sources that can provide him with the information and request that information. This triggers a reasoning process about who might have access to which data for which purpose, and may eventually result in actually obtaining the information. The second trigger sets in motion a consideration about who has the right to be informed about this change (a change can also mean the instantiation of new data), and who has an obligation to inform them. Possibly, a third event might be the discovery of a wrong belief held by someone. If there is an obligation to prevent falsehood from persisting or spreading, this requires action. This event can be seen as a change in data, namely in my set of beliefs: the belief that someone holds only true beliefs. Thus it can be categorized as a trigger of the second kind.

Spheres

Walzer defines a structure in which people can organize themselves freely and still achieve some sort of equality: a way to achieve equality and fairness without falling back into some sort of tyranny. The key to his approach is his division of society into spheres. Spheres can be defined according to need and custom. Within a sphere a particular good is distributed. People are free to organize this distribution, on the condition that this particular good cannot be used outside its sphere to gain domination in other spheres. For example, domination in the political sphere may not be used in the sphere of trade. Moreover, in each sphere symbols, words and acts have their own particular meaning, which can very well differ from the meanings they have in other spheres.

This very notion of sphere can be applied to the context of informational relationships. Information gained in a particular sphere cannot be used to gain advantage in another sphere, at least not without particular, explicit conditions. Information provided to a physician cannot be used to evaluate health status with respect to insurance, at least not without explicit consent. Rights to information and obligations (not) to provide information are related to the sphere in which the information originated and the sphere of its intended use.

Information as an intentional agent

One of the key elements in our approach is to model information itself as an agent: an agent with desires, intentions and beliefs about its environment that is able to act. Its main aims are to maintain the integrity of the data, to provide access to all who have a right to request access, and to inform all whom it is obliged to inform. Along with its desires, goals, and so on, it has the capacity to act and interact with its environment. This is very different from current approaches, in which there are one or more owners of the information. The owner guards the information and is charged with the task of informing those who have a right to be informed, and of granting or denying access to those who ask access to the data. In short, information is always tied to a (human) agent; it is passive.

The structure, content and context of the data determine the dissemination of the data. There is a clear analogy to the genetic code, which contains both information and the means to replicate itself. Such an informational agent is by no means a trivial thing to accomplish. It requires a clear conceptual distinction between the information itself and its replication, even though the one depends directly and completely on the other. From the technical point of view this approach also poses some challenges. For one thing, the data as such should not be separable from the mechanisms that determine their dissemination. Another interesting challenge is to create self-aware data: since the data are no longer a passive but an active entity, they need some degree of self-awareness.

Modeling information as an intentional agent has several benefits. First, it ensures a consistent implementation and execution of informational rights and obligations, since all information is modeled and executed according to the same 'master template'. Second, as the execution of the actions that ensure the fulfillment of obligations and the safeguarding of rights is delegated to an entity that is no longer ridden with the potential conflicts of interest that beset its originator, it is more likely that those rights and obligations are effectuated.


A human agent has several interests to cater for apart from guarding privacy and preserving data integrity. Thirdly, as information becomes a self-enacting agent, the effort of maintaining rights and obligations decreases. Once instantiated, informational agents take over many of the tasks of administration. As information becomes more distributed, administration by a (human) agent becomes more time-consuming and is likely to be forgotten, or impossible due to loss of physical access (think of network failures, distributed networks with different access rights, etc.).

An objection to this approach is that it is too mechanistic, that it does not allow for the subtleties that characterize human interaction. For example, sometimes it is better not to provide all information and to tell a white lie. Although an automated approach might run into this problem, this is not a fundamental flaw, but rather the result of our inability to express these subtleties. Once we have expressed them, there is no reason why they cannot be implemented. On the other hand, one can also reason that a lot of harm comes from white lies that fail to serve their purpose and from actions that are downright malicious.

The Implementation

The experimental setting has been constructed using JACK, a Java extension with a development environment. It is based on the BDI model and offers a natural, conceptual equivalent for DEAL.1 In this section we show how the conceptual elements, such as obligations, intentions and actions, are implemented. Table 1 provides an overview of all elements. Each of them is discussed in detail in the subsequent subsections. Following the BDI model, JACK has agents that hold beliefs and data (tuples in a BeliefSet), have intentions (Plans) that prescribe how to achieve a goal (BDIGoalEvent) and interact with other agents (via MessageEvents). A plan consists of steps (atomic actions) an agent can take in an attempt to achieve his goal. When several options are available the agent can reason about which course of action to take (meta-level plan).

Agent and predicate/data

Software agents serve as the actors in the theory. An agent is an autonomous entity with some (basic) reasoning capacity, the ability to form intentions, and the ability to interact with its environment. At the current stage of development such an entity is still very far removed from anything like a human agent. This is not a problem for our purpose. In our implementation the agent is the key element: a container of knowledge and plans, the generator of desires and the instantiator of communication. The attributes of the agent are the predicates in logic. These are forms of basic data. Information is represented as a complex data structure, that is, a combination of data elements.

An information element is, for example, an application file for a life insurance. It consists of three subsets of data which are related to each other: personal data, insurance data and medical data. This might complicate moral reasoning considerably, because reasoning about the application file requires taking into consideration different rights and obligations regarding the subsets that make up the application file, which might have conflicting implications about what is (not) allowed, obligated, etc.
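By way of illustration, such a composite information element could be sketched as follows (our own sketch; the type names ApplicationFile, PersonalData, InsuranceData and MedicalData are hypothetical, not part of the experiment's code):

// Hypothetical sketch: an application file grouping three related data
// subsets; each subset can carry its own sphere, so reasoning about the
// file as a whole must combine possibly conflicting rights and obligations.
public class ApplicationFile {
    PersonalData personal;     // e.g. name, address, marital status
    InsuranceData insurance;   // e.g. requested product, coverage
    MedicalData medical;       // e.g. health statements
}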

From a logical point of view, data are represented by predicates. The computational equivalent in JACK is an attribute of an object or a member of a class. Young(Andrew) in predicate logic reads as 'Andrew is Young', where Andrew is, for example, a human. This would be implemented as

public agent Human extends Agent {
    // two forward slashes mean a comment in the code
    // the variable 'name' has the value "Andrew",
    // 'seniority' the value "Young"
    String seniority;

    public Human(String name, String seniority) {
        super(name);
        this.seniority = seniority;
    }
}

This code would be executed in the experiment by calling the constructor to create a new instance of the species Human:

new Human("Andrew", "Young");

1 In setting up the experimental environment we had several requirements. First of all, it must be formal, having a precisely defined syntax and semantics whose properties are well known. The implementation has to support the modeling elements. It has to support reasoning at a high level of abstraction; this means it has to provide implementations of reasoning concepts on a par with the modeling constructs (such as intentions, action logic, etc.) – in short, it has to be conceptually suitable. In addition, support for meta-level reasoning is required. It has to be not just executable, but executable across different computers (networks), i.e. it must support distributed computing. New reasoning methods and support for new roles using new agent types should be integrable without having to adjust major parts of the implementation: its setup must be modular. On top of that it must be scalable. Since the core of the problem we are investigating is complexity arising from the large scale of operations, scalability is of the utmost importance. Support for industry standards for programming and computing is desirable when it comes to implementing the approach in real-life situations.


In this example a particular 'human', an extension of the class2 Agent, is created and assigned the name "Andrew" and, additionally, the seniority "Young". The main difference with predicate logic is that an attribute has an explicit label, whereas in predicate logic the label is implicit.

Sphere

The notion of 'sphere' is conceptualized as follows. Each data element has an attribute 'sphere' that is set when the data are instantiated and can, under certain conditions, be changed during the lifetime of the data. When access is requested to the data, the intended application domain accompanies the request. This is then checked against the 'sphere' attribute of the data element.

The actual modeling is in part generic and in part specific to the domain. As a general rule, the data-subject always has the right to access the data and the right to disseminate them. The right to change them, however, is already specific. For example, an agent's health information is maintained by a physician who is responsible for the correctness of the data. The patient is not allowed to change the information about his health. He is, however, free to tell about his health to whomever is interested in it, which is something the physician is not allowed to do. We distinguish three basic roles regarding informational relationships: data-subject, data-administrator, and stakeholder. With regard to the role of data-administrator there are some complex issues involving delegation of the role and the associated rights and obligations. For example, a physician may have an assistant to execute several of her obligations. Such an assistant holds the rights and obligations by proxy. The delegation is only partial and creates some additional obligations on the part of the delegator. Each agent with a particular role has a sphere attached to it in which it operates. The sphere defines the domains in which the agent operates and the restrictions it has in applying the information it receives.

In the implementation, the beliefsets, which are discussed in the section Beliefs, contain a standard additional field 'sphere' in which the sphere in which the data have been provided, acquired, and so on, is defined.3 There are also fields to store information about the structure of the data, i.e., who is the data-subject, the data-administrator, etc.

Data and roles come together in plans. Plans are executed by agents in a particular role and are aimed at acquiring, providing and withholding information. The execution of a plan is subject to two conditions: relevance and context. Relevance determines in general whether the plan is suited to handle a particular type of event, for example, whether a physician can handle a request for information from a patient. The context next finds out whether execution in the particular setting is allowed, for example, whether Dr. Elby is the physician of patient Drostel. Both operators are logical propositions that are evaluated before executing a plan and return either true or false. The operators are implemented as methods in the plans that try to unify logical variables with the available knowledge. If the unification succeeds, the methods return the logical value true. Both the relevance() and the context() operators must return true before a plan can be executed.
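As an illustration of these two guards, a plan handling a request for health information might be sketched as follows (a sketch only; the plan and event names are ours, and the exact JACK declaration of the relevance condition differs in detail):

// Hypothetical sketch: a plan whose execution is guarded by the
// relevance() and context() conditions described above.
public plan GuardedProvideInfo extends Plan {
    #handles event RequestInfo req;
    #uses data PatientHealth patientHealth;

    // relevance: is this plan suited to this type of request at all?
    // (assumed form of the declaration)
    static boolean relevance(RequestInfo req) {
        return req.subject.equals("healthStatus");
    }

    // context: is execution allowed in this particular setting, e.g.
    // does the sphere of the request match the sphere of the data?
    context() {
        req.sphere.equals("medical");
    }

    #reasoning method
    body() {
        ...
    }
}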

Table 1. Implementation of logic elements

Logic element                                   Implementation
Agent                                           Agent
Predicate, Data, Information                    Class members / Object attributes
Sphere                                          As in Predicate, with extension of context() and relevance() operators
Epistemic / Knows / Beliefs                     BeliefSet with closedWorld and openWorld semantics
Trigger (change in data,                        modfact() operator plus sending of an event; RequestThat{}
  request for information)
Action – STIT ('see to it that' operator):      Plan with @achieve and/or @insist and/or @send statements plus
  Inform{}, RequestThat{}                         BDIGoalEvent intending to change a beliefSet or to get an answer
Obligation, Permission, Forbidden               Plan with action statements plus context() and relevance() operators
Desire                                          BDIGoalEvent
Intention                                       Plan

2 A class is a construct in object-oriented programming that represents a particular type of entity (or data type). Its instances are objects of that particular type, which are generated in the execution of the software code.

3 For the reader interested in programming, this is implemented using a class sl_beliefset that extends the beliefset class and contains additional fields and methods to handle the sphere operations.



Beliefs

An agent holds beliefs about the world. Reasoning about what one knows or believes is captured by epistemic logic, which has two operators: Ba (agent a believes that) and Ka (agent a knows that) (Van den Hoven and Lokhorst 2002, p. 284). KaA states that agent a knows that A is the case.

In our implementation knowledge and beliefs are captured in one component: beliefsets. We make no distinction between knowing and believing. The distinction is gradual. And although it is relevant in real life, implementing the distinction between knowledge and belief is beyond the scope of this article.

Beliefsets are modeled as first-order tuple-based relations. The beliefs can be true or false, stating that the agents believe the statements to be true or false. As a refinement, we have two options: a closed-world semantics and an open-world semantics. In the first, every possible statement has a truth-value. All statements that are believed to be true are listed. All statements not in this list are, by definition, supposed to be false. In the open-world semantics, both the false and the true statements are listed explicitly. Statements that are in neither list are assumed to be unknown. The beliefsets can be queried using pre-defined queries. Beliefsets are implemented as follows:

public beliefset PatientHealth extends ClosedWorld {
    #key field String patientName;
    #key field String patientID;
    #value field String liverCondition;
    #value field String heartCondition;
    #value field String lungCondition;
    #indexed query get(String patientName, String patientID, logical String lungCondition);
    ...
}

This example is about the health status of a patient. The beliefset contains information about the patient (his name and ID) and the health status of his heart, lungs and liver. The key fields indicate which part of the information can be used to query the beliefset. In the example we can ask after the health status of the lungs by using patient name and ID.4
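Such a query might be used as follows (a sketch; the exact JACK syntax for logical variables differs in detail, and the patient ID is invented):

// Hypothetical usage: ask after the lung condition of patient Drostel.
logical String lungCondition = new logical String();
if (patientHealth.get("Drostel", "ID-123", lungCondition)) {
    // unification succeeded: lungCondition is now bound, e.g. to "good"
}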

Triggers

To model the two triggers that set moral reasoning regarding information in motion we need (a) two sorts of actions, and (b) a notion of a 'change in data'. Both actions are derived from the primitive operator of action logic. The 'change in data' is captured by a combination of predicate logic and temporal logic.

Ad A) Action logic is a branch of modal logic. Its operator is STIT, "see to it that". [a STIT: A] means agent a sees to it that A is done. In the context of informational relationships the STIT operator can be supplanted by two more specific, more expressive operators: Inform and RequestThat.5 These operators can be reduced to primitive modal operators in DEAL.

{Inform i g α φ}: i informing group g of φ through action α

This is a specific expression of the more generic form

[i STITα : A], where A =def Kgφ, informing being a specific instance of STIT.6

A request to someone else is expressed as

{RequestThat i g α φ}: i performing α in order to get g to intend φ

{Inform} is used to model actions of informing other people about something. The request to receive information is modeled using the {RequestThat} operator. E.g., a request for information x is modeled as

{RequestThat i g α φ} where φ =def {Inform g i β x}

In our programming environment these operators are implemented by the execution of plans that contain atomic actions sending events, which carry information, to other agents. This is illustrated by the code below. An agent of type Patient sends a particular event HealthInfo, using plan ProvideInfo to do so. It sends a message to Insurer, another agent. The message contains an attribute 'Value' that contains the actual information, in the example the value 'Good'.

4 Many forms of queries are possible, from very simple to complicated, nested queries. An exposition of the possibilities is beyond the scope of this article. It is, however, a very powerful instrument.

5 See Wooldridge (2000) for a detailed presentation of these operators as they are defined in LORA (logic of rational agents). Note that these operators are not modal operators or predicates but complex action constructs.

6 The subscript on the STIT operator indicates that it is a particular type of STIT operator.


public agent Patient extends Agent {
    #handles event RequestInfo;
    #sends event HealthInfo;
    #uses plan NewFact;
    #uses plan MoreInfoRequired;
    #uses plan ProvideInfo;
    #private data Health myHealth();

    ...
    try {
        myHealth.add("heart", "good");  // initiate values
        myHealth.add("lungs", "good");
        myHealth.add("liver", "bad");
    } catch (Exception e) {}
    ...
}

public plan ProvideInfo extends Plan {
    ...
    #reasoning method
    body() {
        // get health of lungs
        HealthInfo.Value = getHealthStatus("lungs", healthStatus);
        // send to the insurer agent
        @send("Insurer", HealthInfo);
    }
}

Ad B) Change involves the concept of time. Concepts of time are introduced through temporal logic.7 In logical terms, change means that a predicate holds true at some moment and not at the next. 't' indicates a time point: t1 is time point 1, and t1 ... tn are time points 1 through n.

A change in data is defined as

GoodCondition(Liver)t1 ∧ ¬GoodCondition(Liver)t2

which can be expressed using the temporal path connective ○ψ, where ○ means 'next', so ○ψ is true now if ψ is true next (Wooldridge 2000, p. 57). A change is thus defined alternatively as

¬ψ ∧ ○ψ

The computational implementation of this notion is as follows. A beliefset (a dataset) posts an event when an attempt is made to change data: after data have been added to the beliefset or after they have been removed, where the removal can occur because of inconsistencies with the current data or because of key constraints.

public beliefset Health extends OpenWorld {
    #posts event HealthChange evHealthChange;
    #key field String bodyPart;
    #value field boolean healthStatus;
    #indexed query getHealthStatus(String bodyPart, logical boolean healthStatus);

    ...
    public void modfact(Tuple t, BeliefState is, Tuple knocked, Tuple negated) {
        postEvent(evHealthChange.Unhealthy());
    }
}

This example shows how a change in the health status automatically triggers an event, evHealthChange. This event in turn can trigger the execution of particular actions, such as informing interested parties about the change.

Obligations

Deontic logic has one basic operator, O, "it is obligatory that". Its argument is a sentence saying that an agent brings about a certain state of affairs. E.g., 'it is obligatory to stop for a red traffic light' means that each agent who finds himself approaching a red traffic light has to perform the action of bringing his car to a stop. Two other operators can be derived from this primitive operator: P (PA, it is permissible that A, defined as ¬O¬A) and F (FA, it is forbidden that A, defined as O¬A) (Van den Hoven and Lokhorst 2002, p. 284).

In the context of informational relationships we have argued that there are two triggers that cover all relevant instantiators of morally relevant behavior: (1) a change in data and (2) a request for information. These trigger obligatory, permissible or forbidden actions, e.g. informing an agent about the new information.

Obligations are modeled as conditional modalities (conditional on the triggers) that lead to an Inform{} action. From the implementation point of view, obligations are plans of action (an obligation to do something) that are triggered by an event. The events in turn are triggered either by a change in data or by an agent who wants to be informed. The execution of a plan is conditional on the truth evaluation of two propositions: context() and relevance(). By constructing these propositions in such a way that they are true in all required situations, with the result that the plan is executed, we have enforced the execution of the obligation. This is illustrated as follows.

7 We adhere to the notational standard and definitions given by Wooldridge (2000, 136 ff.).


public plan InformMedical extends Plan {
    #handles event HealthChange evHC;

    context() {
        evHC.Value > 20;
    }

    #reasoning method
    body() {
        // this method is called to inform the patient
        InformPatient();
    }
}

In this example the obligation to inform a patient when his health deteriorates – blood sugar is increasing – is implemented. A change in data triggers an event (HealthChange) that is handled by a plan (InformMedical) if the value (evHC.Value) exceeds a certain threshold. The actual action of informing is defined in the reasoning method. Likewise, modalities such as 'permission' and 'forbidden', which are definable in terms of obligation, can be implemented.

In the above set-up a software agent cannot choose to ignore the obligation. This is a very limited implementation, which is perhaps not very interesting from a moral point of view: the inability to cheat takes away some of the most interesting questions. The up-side is that this mechanism provides a means "to protect privacy through technology rather than legislation". Our implementation has been extended to include different, conflicting interests. Meta-level reasoning is required to resolve these. This brings back the full range of morally interesting dilemmas.

We implement several plans that are all able to deal with a specific event, as illustrated in Figure 1. Each plan represents a specific moral obligation, a chance to achieve a particular goal, and so on. Determining which plan takes precedence is a meta-level reasoning process. For this purpose a specific type of plan is implemented which collects all relevant plans and reasons about which plan to execute – or, in BDI terms, which desire prevails and what intention is formed.

At the meta-level three different mechanisms are available to decide which one of the conflicting desires gains the upper hand: prominence, precedence, and explicit reasoning. Prominence is defined by the order in which the plans are available to the agent: the first one is executed, and if it fails, the next, etc. In precedence, plans are explicitly given a ranking that decides which one is chosen. These two mechanisms provide the equivalent of what Pollock (1995) calls the Q&I modules, the quick and inflexible modules that have certain rules quasi hard-wired into our system. Just as we do not need explicit reasoning in order to catch a ball that is thrown at us, most of us do not require any thinking to know that torturing a helpless animal is morally wrong. At times an explicit consideration of conflicting interests is required that takes into account all current knowledge of the specific situation and weighs the (dis)advantages of each option. To this purpose meta-level plans are constructed that consist of atomic actions (e.g. getting additional information, using first-order and modal logic propositions).
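By way of illustration only (the event PlanConflict and its members are our assumptions, not verbatim JACK), a meta-level plan implementing the precedence mechanism could be sketched as follows:

// Hypothetical sketch: a meta-level plan that receives the set of
// applicable plans via a special event and commits to the highest-ranked.
public plan ResolveConflictingDesires extends Plan {
    #handles event PlanConflict conflict;  // assumed event carrying candidates

    #reasoning method
    body() {
        Plan best = null;
        for (Plan candidate : conflict.applicablePlans) {
            // precedence: each plan carries an explicit ranking
            if (best == null || candidate.rank > best.rank) {
                best = candidate;
            }
        }
        conflict.choose(best);  // assumed method committing to one plan
    }
}

Explicit reasoning would replace the ranking comparison with atomic actions that gather additional information and evaluate first-order and modal logic propositions over it.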

The way obligations are treated might seem different from the usual way of treating moral obligations. This can be explained by the fact that obligations and norms in the BDI context can be seen as possible extensions of goals rather than as fundamental constituting elements (Dastani et al. 2001b, p. 8). This does not, however, diminish the functional equivalence of the proposed software implementation. With the addition of the meta-level reasoning capability, the full moral reasoning spectrum can be supported.

Role-rights matrix

Rights and obligations need to be defined, even though their origin is not discussed in this article. They will be assumed according to need and custom. We use a 'role-rights' matrix, which matches roles with the rights and obligations they have.

If data XYZ are the medical data of the client, Table 2 specifies that, for example, in addition to the client himself, a physician employed by the insurer and the attending physician are allowed read access to the data. Only the client can grant certain rights/permissions to other roles; he is not allowed to change his medical record, but his attending physician is; etc.

The actions defined in the columns are the specific instances of the STIT operator and the deontic modalities. The operators can be subject to specific epistemic conditions: the obligation to inform someone about something implies knowledge of that something. Further conditions are derived from the sphere in which the data are being used and the sphere in which they have been provided.
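One way to make such a matrix executable (our sketch, not code from the experiment) is to store it as a beliefset that the context() propositions of plans can query:

// Hypothetical sketch: the role-rights matrix as a closed-world beliefset.
public beliefset RoleRights extends ClosedWorld {
    #key field String role;       // e.g. "R3", the medical underwriter
    #key field String action;     // e.g. "useData", "provideData"
    #value field boolean allowed;
    #indexed query isAllowed(String role, String action, logical boolean allowed);
}

A plan handling, say, a read request on medical data would then require isAllowed(requesterRole, "useData", allowed) to unify with allowed bound to true before its context() evaluates to true.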

Desire and intention

An agent has desires it aims to fulfill. The desires are the motivating element that brings an agent to action. Desires are varied, and can be conflicting. Norms can be seen as a particular form of desire, or at least as conceptually on a par with desires. By resolving the relative importance of the various desires, the agent commits to realizing one of them.


In doing so the agent forms an intention: it devises a plan to achieve a particular goal, i.e. to satisfy a desire.

To implement these concepts we have a particular type of event: the BDIGoalEvent. This event type represents a desire. Posting it means the agent will try to find one or more plans that can handle the event, that is, plans that have the potential to achieve the goal. Selecting a plan to handle the BDIGoalEvent means forming an intention. BDIGoalEvents are special in the sense that, in case there are several ways (plans) to fulfill the desire (achieve the goal), they can trigger another event that sets off a meta-level reasoning process.
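Schematically (assumed declarations, mirroring the constructs listed in Table 1; the event name and exact syntax are hypothetical), a desire could be expressed as follows:

// Hypothetical sketch: a BDIGoalEvent represents a desire; posting it
// makes the agent search for plans that can achieve the goal, and
// selecting one of them constitutes forming an intention.
public event ObtainHealthInfo extends BDIGoalEvent {
    String patientName;
}

// Inside an agent: expressing the desire to learn a patient's health.
postEvent(obtainHealthInfo);   // candidate plans are collected; if several
                               // apply, meta-level reasoning decides which
                               // intention is formed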

Illustration

With the above equipment we can logically model informational relationships. To illustrate the expressive power of DEAL we now discuss briefly some examples (the numbers refer to the numbers in the role-rights matrix of the preceding section).

(1) 'everyone (x) has the right to provide (α) information he has about himself (A(x)) to anyone (y)':

∀x ∀y P([x STITα : KyA(x)])

or alternatively

∀x ∀y P({Inform x y α A(x)})

'every physician (y) must inform his patient (x) of any fact (A) about the patient', or 'every patient has a right to know any fact about his health':

∀x ∀y (Physician(y, x) → O[y STITα : KxA(x)])

or alternatively, 'a physician (y) must inform his patient (x) of any fact (A) about the patient by phoning/telling/writing (T) the patient':

∀x ∀y (Physician(y, x) → O({Inform y x T A(x)}))

[Figure 1. Obligation to inform on data change. The diagram shows medical information modeled as an informational agent that posts a Changed Data event; the moral agent's plans Obligation To Inform and Protect Privacy both handle it. All relevant plans are selected; this triggers a special event which is handled by the meta-level plan Meta-level Moral Reasoning, which decides which of the plans is executed, sends Inform Stakeholder, and modifies the beliefset My Health (health data).]

(4) 'no physician (y) is allowed to provide health information (A) about a patient (x) to anyone (z) without consent from the patient (x)':

∀x ∀y ∀z ((Physician(y, x) ∧ ¬Consent(x)) → F([y STITα : KzA(x)]))

(3) & (6) 'x has the right to do A': P([x STITα : A])

(2) 'Granting permission: if x has the right to do A, then x has the right to grant y the right to do A':

∀x ∀y (P([x STITα : A]) → P([x STITβ : P([y STITα : A])]))

Insurance and privacy

To test and validate our approach we apply it to a concrete case: we implement the process of applying for a life insurance. This involves modeling the obligations, permissions and actions that can(not) be taken by the different actors depending on their role, i.e. the broker, the insurer, the applicant, his attending physician and the insurer's physician. Life insurance is an ideal setting for experimenting with and testing models dealing with information. It is complex because there are many roles; it is relevant in real life; it deals with privacy-sensitive data; it involves potential conflicts of interest; and it has a long time span, which makes it apt to change. It serves to show the relevance of the approach to practice.

The test case

The insurance industry is a particularly information-intensive industry. The information is, moreover, sensitive and, at least in the case of life insurance, bound to strict rules as to who has access to which information. In the processing and administration of policies a number of different people are involved. The applications and policies themselves contain different subsets of information, such as client data (e.g. address, marital status), insurance information (e.g. which policy, which coverage) and client health statements. This information is used in several different settings: to decide on acceptance of the client's application; in generic analyses of risk profiles (e.g. which body-mass index has a higher risk profile regarding certain diseases and should therefore possibly have a higher premium attached?); in administrating the policy; etc.

We will now describe a concrete instance of sensitive information handling in the insurance industry. It captures the core elements, but abstracts away from many details. A (potential) client (R1)8 applies for a life insurance. To decide whether or not to accept the application, the insurance company requires particular information. It uses an application form that the client has to use to submit the required information (data). The application form consists of three separate sub-forms: one containing generic client data (D1), one containing information regarding the requested product (D2), and one containing medical information about the client (D3). The client provides this information for use in deciding on his application (S1), with the understanding that this information will be treated confidentially: the information will not be used outside the insurance company, and its use is also restricted within the insurance company. It can be used in anonymized form for policy-making purposes (S2). The client sends his personal data and the requested product details to the new business application department of the insurer, where they are processed by an administrative employee (R2). The medical information is sent separately to the medical department, where it is processed by a medical underwriter (R3). The medical underwriter consults, if necessary, the insurer's physician (R4) to evaluate the client's medical history and current status with regard to the insurance company's acceptance policy (D4). If the provided information is insufficient to come to a conclusion, the underwriter informs the client that additional information is required from the attending physician (R5). If the client wants to proceed with the application he is required to give his written consent (D5), stating that the attending physician is granted permission to discuss the client's medical data as far as relevant for the application, and in reply to a concrete question that concerns a specific section(s) of the medical form/questionnaire (S3). When the consent is given and received, the insurer's physician writes a request to the attending physician asking him to provide information about the nature of a specific ailment/treatment (D6). The attending physician is bound to tell the truth and to protect his patient's privacy. Now in this context several situations can arise in which interests conflict or unintentional errors are made.

1. The insurer asks the attending physician for information about aspects that are not covered by the consent.

2. The attending physician sees that the information provided by the client/patient about aspects other than those covered by the consent is false.

8 R denotes a role, D data and S a sphere.


3. The insurer or an individual employee sells data to a pharmaceutical firm for direct mailing or for generic, postal-code-based marketing.

4. The applicant sends medical information in the wrong envelope, as a result of which it ends up with the administrative employee instead of the medical underwriter.

5. Marketers request information about policyholders for policy-making purposes.

This is only a small list of the many things that pose potential conflicts of interest and decision points for actors. In the above rudimentary example, with six data elements, five roles and three spheres, complexity is already considerable. If one considers in addition the role-rights matrix, Table 2, which shows only a limited set of six actions, it is easy to understand how complexity grows exponentially. The potential for misinformation, misuse, and so on, is enormous.

Results from the experiments

Setting up the experiment

Setting up the experimental setting meant designing and implementing the software agents. We distinguish two basic types: informational agents (information) and moral agents (representing human actors). The latter are further extended to represent the different actors, such as physicians and applicants. Agents have plans at their disposal to inform other agents, to grant access rights to others, to request information, to withhold information, and so forth. These plans deal with events, use data that the agent has, and so on (see Figure 2). These elements have been set up generically. They are parameterized so that for experimental purposes plans, actions, and so on, can be changed without having to rebuild the basic constructs. Parameters are read via configuration files at run-time. The advantage is that in this way the scale of the experiment can be increased without effort.

Is it possible?

Running the experiments shows that it is possible to capture all the moral concepts relevant to informational relationships in an automated environment. Actors, obligations, rights, acts, and of course information itself can all be modeled and implemented in executable code. In addition, the particularly challenging aspect of meta-level reasoning is captured using (meta-)plans. Software agents represent moral agents. They have several, different and sometimes conflicting courses of action available to them. Meta-level reasoning facilitates the choice between these courses of action, and is itself facilitated at three different levels of sophistication.

Defining and implementing the agents, the plans and the events proved to be substantial work. A surprising finding was the relatively low level of complexity of the individual plans and events. The context() propositions usually contain one to four logical variables. The plans themselves are not very complex in a programming and logical sense. They send events, read data, update their knowledge base, and so on. The challenging part is in the chains of plans and events and the complex web they constitute. After adding several agents with their plans and events, the number of potential relationships soon became daunting. Keeping track of all the elements proved very hard even though we had an automated environment available to assist us. We found that having parameterized plans and events helped in dealing with the complexity. Nonetheless, even with the technical support it proved hard to keep an overview.

Table 2. Role-rights matrix, relating to data XYZ (health-related data)

Role                            (1)  (2)  (3)  (4)  (5)  (6)
Client/Data-subject (R1)         Y    Y    Y    Y    N    N
Administrative personnel (R2)    N    N    N    Y    N    Y
Medical underwriter (R3)         N    N    Y    Y    Y    N
Insurer physician (R4)           N    Y    Y    Y    Y    N
Attending physician (R5)         Y    Y    Y    Y    Y    N

Actions: (1) right to provide data; (2) right to consent/delegate right; (3) right to use data; (4) obligation to tell the truth; (5) obligation to protect patient privacy; (6) right to decide on acceptance.


We believe that without the technical support we would soon have been lost, and unable to oversee the consequences of our modeling decisions. Interesting in this context is also that the complexity of administrating the experiment decreases after the initial increase. Once all basic mechanisms are put in place, extending the experiment with new agents proves fairly easy.

Information as an intentional agent clearly gives more control over the spreading and availability of information. The price paid is an increased communication overhead. Rather than mental or technical internal reasoning within a human's mind or a software agent's own classes, communication across domains is required. In technical terms this means longer processing times for sending triggers and data across. These processes are also more CPU-intensive. Although this is not a technical problem on the present scale, it might become one when the scale is enlarged.

Two types of triggers for moral reasoning

One of the benefits that come from the implementation of a theory is that it requires a complete specification of the theory. There is no room for ceteris paribus clauses: nothing can be assumed given and nothing can be overlooked. One of the challenges in implementing the informational relationships was the distinction between the two types of events that give rise to moral deliberation: a change in data and a request for information. Two questions arose: (a) can these two different types be handled by the same reasoning, and (b) do they have the same moral status?

In our approach both types can be handled using the same techniques and elements. Functionally and technically they are identical. The difference is not in the execution (the act) itself but in the conditions under which it is triggered. This showed itself when we implemented the required constructs. Although the plans implementing the obligations have the same functionality, at first we could not use just one of them, because of the restriction that a plan can only handle one type of event, while these are different events. This pointed to the insight that they somehow share some basic feature while differing in some other respect: the situation in which the obligation arises.

The change in data might occur without any of the stakeholders being aware of it. They might not even be aware of the data's existence. This makes the stakeholders more dependent on the proper execution of the obligation to inform: there exists an asymmetry regarding the information. This would be even more pronounced in case only some of the stakeholders are aware that the data exist.

New notions

During implementation we were confronted with the challenge of providing simple overviews of what was going on in our experiments. Agents were communicating, exchanging information, updating their knowledge bases, and so on. In order to keep track of all these interactions and this data exchange we developed, applied and tuned some notions we believe to be of theoretical and practical use: (a) moral Chinese walls and (b) deontic isographs with communication tracking.

Ad A) Using the notion of sphere we can define domains in which knowledge should (not) become available. Based on his role, an agent can be assigned to a domain, and has or does not have the right to obtain a particular piece of information. The domains are separated by a kind of 'Chinese wall' that should not allow information to filter through. Using this notion we can check whether the set-up of rights, obligations and spheres is functioning as it should. If information is obtained by an agent in another domain (on the other side of the wall), it becomes clear that something is working out differently than intended.

Figure 2. Software constructs in the experiment.


Ad B) Continuing the approach from A, and its visual presentation, one sees a pattern develop. Agents that have access to particular information together form a deontic and epistemic isograph, that is, a series of epistemically and deontically equally endowed agents. Using visual means to track the communication between agents, we introduce a new tool for analysis.

Figure 3 illustrates the notions introduced in this section.9 The different spheres, as described in the section 'The test case', are presented with the functionaries in their roles. Highlighted areas indicate which functionaries have knowledge of a particular datum. These elements form, as it were, a deontic and epistemic isograph. The separation of the spheres provides an analogue of a Chinese wall, i.e., a separation of spheres across which information should not be exchanged. In this case personalized data can be exchanged between the administration and the medical departments of the insurer, but not with the policy-making department.
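Continuing the hypothetical WallMonitor sketch above, the insurer case might be configured as follows (all sphere and agent names are our own shorthand):

```java
// Hypothetical configuration of the insurer case; continues the WallMonitor sketch.
WallMonitor monitor = new WallMonitor();
monitor.assignAgent("medicalAdvisor", "medical");
monitor.assignAgent("administrator", "administration");
monitor.assignAgent("policyMaker", "policy");
// Personalized data may circulate in the medical and administration spheres only.
monitor.allowData("applicantHealthRecord", "medical");
monitor.allowData("applicantHealthRecord", "administration");

monitor.recordAcquisition("administrator", "applicantHealthRecord"); // permitted
monitor.recordAcquisition("policyMaker", "applicantHealthRecord");   // flagged as a breach
```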

Conclusion and next steps10

Our experiments indicate that implementing a DEAL logic is possible. All morally interesting concepts can be implemented in executable code that reflects the theory and operates at a level that is sufficiently detailed for both research and application purposes. The expressiveness is rich enough to capture all relevant moral and informational notions. The challenge is in explicating these notions and putting them into a coherent whole, which involves much work. But rather than holding this against our approach, we think the approach helps in identifying and articulating what we need to express in order to have a complete theory of moral constraints in complex informational relationships.

There are two noticeable potential constraints or limitations of our approach which will require further research. On the one hand, the original BDI model as Bratman presented it seems ill-suited to capture some kinds of behaviour, in particular learning and adapting. Yet embedding learning and adapting in systems such as ours is crucial in any ''ethical'' context. It would be interesting to investigate how the specific model provided in this paper could be extended to incorporate adaptive behaviour.

On the other hand, the BDI model embeds ''anthropomorphic'' assumptions, such as intentions, that might seem inappropriate in systems with artificial agents. We have not touched on this issue in our article. Future work will need to provide a proper understanding of these assumptions in an artificial context, or replace them by more suitable notions, such as goal-directedness.

We discussed how our approach might be applied to the insurance industry, where private medical data are processed and privacy issues can arise. In addition, we think it suitable for the exchange of medical data between providers of medical services, such as general practitioners, apothecaries and surgeons. There is a lot to be said in favour of a person's medical data being accessible online (an electronic patient file) to many different providers of medical services, especially in cases of emergency. On the other hand, such availability would pose serious risks to privacy. A third potential application can be seen in the financial and business services industry. Some companies act as both accountant and investment manager for their clients. Information obtained in one capacity might influence behaviour in the other. For example, helping a company to issue new shares (and being paid a large fee for this service) might influence the opinion formed in the accounting role ('the company is not doing so well', information which would influence the share price and hence the fee).

Although our findings are not yet final or conclusive, and the work is still very much in progress, our understanding of the relevant themes has increased. Some new issues have come up and others are better understood. We do not want to argue that these results could only have been obtained through this approach. We do argue, however, that the approach has contributed to achieving the results and is particularly appropriate. Our approach stands in a new tradition that merges computational facilities with philosophy, a tradition that was started by Thagard (1992), Bynum and Moor (1998, 2002), Castelfranchi and Conte (1995) and Danielson (1992, 1998), and it shows ample opportunity for further extension.

Figure 3. Visual tracking: Chinese walls and deontic isographs.

9 We present here a conceptual prototype that is not yet integrated into our experimental environment.

10 We would like to thank Philip Brey for some insightful comments and suggestions for further research.

Our approach shares some commonalities with other (non-)moral research. Governatori (2002), for example, uses software agents to implement negotiating strategies for electronic commerce. The basic agent design is similar to ours: shared finite-state-machine mechanisms for defining the plans and strategies; the modular components; the use of executable, distributed software; and formal logic to express the reasoning schemes. The subject matter, however, is very different. The subject matter that we studied has also been studied by Broersen et al. (2001) and Dastani et al. (2001a, b), who investigate decision making in a BDI context extended with obligations. Our emphasis is on execution and experimentation, whereas theirs was more formal and oriented towards decision theory.

In conclusion, we think there is both a sufficiently rich theoretical basis and sufficient implementational evidence to warrant proceeding along the lines of the present approach.

References

Agent Oriented Software (AOS) Pty. Ltd. JACK, 2004. http://www.agent-software.com.au.

M.E. Bratman, Intention, Plans and Practical Reasoning. Harvard University Press, Cambridge, 1987.

J. Broersen, M. Dastani, Z. Huang, J. Hulstijn and L. van der Torre. The BOID Architecture. In Proceedings of the Fifth International Conference on Autonomous Agents (Agents 2001), Montreal, 2001.

T.W. Bynum and J.H. Moor, The Digital Phoenix. Blackwell Publishing, Oxford, 1998.

T.W. Bynum and J.H. Moor, Cyberphilosophy. Blackwell Publishing, Oxford, 2002.

C. Castelfranchi and R. Conte. Understanding the Functions of Norms in Social Groups through Simulation. In N. Gilbert and R. Conte, editors, Artificial Societies. UCL Press, London, 1995.

P. Danielson, Artificial Morality. Routledge, London, 1992.

P. Danielson, Modeling Rationality, Morality and Evolution. Oxford University Press, New York, 1998.

M. Dastani, J. Hulstijn and L. van der Torre. The BOID Architecture: Conflicts between Beliefs, Obligations, Intentions and Desires. In Proceedings of the International Conference on Autonomous Agents, 2001a.

M. Dastani, J. Hulstijn and L. van der Torre. BDI and QDT: A Comparison Based on Classical Decision Theory. In Proceedings of GTDT 2001, Stanford, 2001b.

G. Governatori. Formal Approach to Negotiating Agents Development. Electronic Commerce Research and Applications, 1(2), 2002.

J.L. Pollock, Cognitive Carpentry. MIT Press, Cambridge, 1995.

S. Russell and P. Norvig, Artificial Intelligence, 2nd edition. Prentice Hall, 2003.

P. Thagard, Conceptual Revolutions. Princeton University Press, Princeton, 1992.

J. Van den Hoven and G.-J. Lokhorst. Deontic Logic and Computer Supported Computer Ethics. In T.W. Bynum and J.H. Moor, editors, Cyberphilosophy. Blackwell Publishing, Oxford, 2002.

M. Walzer, Spheres of Justice. Basic Books, New York, 1983.

M. Wooldridge, Reasoning about Rational Agents. MIT Press, Cambridge, 2000.

M. Wooldridge, An Introduction to MultiAgent Systems. John Wiley & Sons, Chichester, 2002.
