
Through the agents' minds: Cognitive mediators of social action


Mind & Society, 1, 2000, Vol. 1, pp. 109-140 © 2000, Fondazione Rosselli, Rosenberg & Sellier

Through the Agents' Minds: Cognitive Mediators of Social Action

Cristiano Castelfranchi

National Research Council - Institute of Psychology, Group of "Artificial Intelligence, Cognitive Modelling and Social Simulation",

viale Marx, 15 - 00137 Roma, Italia, e-mail: [email protected]

(Received July 1998, accepted May 1999)

Abstract. Thesis: Macro-level social phenomena are implemented through the (social) actions and minds of the individuals. Without an explicit theory of the agents' minds that founds agents' behavior we cannot understand macro-level social phenomena, and in particular how they work. AntiThesis: Mind is not enough: the theory of individual (social) mind and action is not enough to explain several macro-level social phenomena. First, there are pre-cognitive, objective social structures that constrain the actions of the agents; second, there are emergent, unaware or non-contractual forms of cooperation, organisation, and intelligence. Synthesis: The real challenge is how to reconcile cognition with emergence, intention and deliberation with unknown or unplanned social functions and "social order". Both objective structures and unplanned self-organising complex forms of social order and social function emerge from the interactions of agents and from their individual mental states; both these structures and self-organising systems feed back on agents' behaviors through the agents' individual minds.

Keywords: deduction; induction; mental models

1. Cognitive mediators of social phenomena

The most important fact concerning human interactions is that these events are psychologically represented in each of the participants (Kurt Lewin, 1935)

Thesis: Macro-level social phenomena are implemented through the (social) actions and minds of the individuals. Without an explicit theory of the agents' minds that founds agents' behavior we cannot understand and explain macro-level social phenomena, and in particular how they work.


We will apply this to: social cooperation 1 and its forms, teamwork and organisation, social values, social norms, and social functions.

It is necessary to clarify this statement, and also Lewin's claim, which might be misleading.

1.1. Cognitive agents and representation-driven behaviors

Let us first recall some crucial features of cognitive agents 2 and of their actions:

i. Cognitive agents have representations of the world, of the actions' effects, of themselves, and of other agents. Beliefs (the agent's explicit knowledge), theories (coherent and explanatory sets of beliefs), expectations, goals, plans, and intentions are relevant examples of these representations.

ii. The agents act on the basis of their representations. More precisely, they act on the basis of:

- their beliefs about the current state of the world, and about their abilities, resources, and constraints;
- their expectations about the effects of their possible actions, and about possible future events (including the actions of other agents);
- their evaluations about what is good and what is bad, and about situations, agents, objects;
- their goals and preferences;
- the plans they know ("know-how") for these goals.

iii. In other words, those representations are not just reflections about the action, or an epiphenomenon without any causal impact on the agents' behavior; they play a crucial causal role: the action is caused and guided by those representations. The behavior of cognitive agents is a teleonomic phenomenon, directed toward a given result which is pre-represented, anticipated in the agent's mind (that is why we call it "action" and not simply "behavior").

iv. The success (or failure) of their actions depends on the adequacy of their limited knowledge and on their rational decisions, but it also depends on the objective conditions, relations, and resources, and on unpredicted events.
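A minimal sketch of such a representation-driven control loop, in the spirit of points i-iv (the data layout and function name below are our illustrative assumptions, not the author's model):

    def deliberate(beliefs, goals, plan_library):
        """Representation-driven action: choose a goal whose known plan
        ("know-how") is believed applicable, and return its actions."""
        for goal in goals:                      # list order encodes preferences
            for plan in plan_library.get(goal, []):
                if all(pre in beliefs for pre in plan["preconditions"]):
                    return goal, plan["actions"]
        return None, []                         # no applicable know-how

    beliefs = {"have-scissors", "have-cloth"}
    goals = ["cut-cloth"]
    plan_library = {"cut-cloth": [{"preconditions": ["have-scissors", "have-cloth"],
                                   "actions": ["grasp-scissors", "cut"]}]}
    print(deliberate(beliefs, goals, plan_library))
    # ('cut-cloth', ['grasp-scissors', 'cut'])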

These properties of the micro-level entities and of their actions have important consequences at the macro-level and for the emergence process.

1 In this paper we use the term "cooperation" either in a broad sense - like here - or in a narrow sense. In its narrow sense it refers to actions that are complementary in a common plan, where the agents are mutually dependent on each other for a common goal (Conte & Castelfranchi, 1995) (the "cooperative" move in the Prisoner's Dilemma, for example, is not "cooperative" in this sense).

2 Cognitive agents are agents whose actions are internally regulated by goals (goal-governed) and whose goals, decisions, and plans are based on beliefs. Both goals and beliefs are cognitive representations that can be internally generated, manipulated, and subject to inferences and reasoning. Since a cognitive agent may have more than one goal active in the same situation, it must have some form of choice/decision, based on some "reason", i.e. on some belief and evaluation. Notice that we use "goal" as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions, etc.


1.2. The hyper-cognitive view

Given the ability of cognitive agents to have representations of others' minds, of the social world and of their interactions, a wrong interpretation of the initial thesis can follow. To claim that social action and functioning at the macro-level is implemented in and works through the individual minds of the agents is not the same as claiming that this macro-social functioning is reflected in the minds of the agents, is represented in them, known, and deliberately or contractually constructed. A large part of the macro-social phenomena works thanks to the agents' mental representations but without being mentally represented 3. How is this possible?

"Cognitive mediators" o f social action or "mental counterparts" o f social phenomena (like norms, values, functions, etc.) are not necessarily synonym o f "cognitive representation" and awareness o f them.

We call hyper-cognitive view 4 and subjectivism (Conte & Castelfranchi, 1995) the reduction of social structures, social roles and organisation, and social cooperation to the beliefs, the intentions, the shared and mutual knowledge and the commitments of the agents. Agents are modelled as having in their minds the representations of their social links. These links seem to hold precisely by virtue of the fact that they are known or intended (subjectivism): any social phenomenon (be it global cooperation, the group, or an organization) is represented in the agents' minds and consists of such representations (e.g. Bond, 1989; Bond & Gasser, 1988; Gasser, 1991).

3 Consider for example the fact that in several forms of cooperation (industry, bureaucracy, etc.) the involved agents ignore their partners (people executing other parts of the large cooperative plan) and the entire plan or its aims. Or consider social functions (the functions of family, of the division of labour, of prejudices and reputations, of social norms, and so on) which are satisfied thanks to individual and group behaviors based on individual beliefs and goals, but without being intended or known by these agents (see later).

4 Undoubtedly, a necessary agent revolution has been accomplished by some thinkers (for instance, phenomenologists such as Schutz, ethnomethodologists such as Garfinkel, Schegloff and Cicourel, critical thinkers such as Habermas and Ricoeur, and hermeneutic philosophers such as Gadamer, and so forth). These authors have had the great merit of having restored the social world to the agents. However, many of these authors view the social agent as fundamentally involved in the understanding and interpreting of the macro-social structures, up to the point that the social structure itself is believed to be accessible only through the mental representations of the agents. It is also right and proper to say that there are other important literatures on cooperation, coordination, social interactions and conflicts: in Computer Supported Cooperative Work - with an interesting anthropological component; in coordination science (Malone, 1987 and 1988); in organisation science, with neo-utilitarian and natural decision making approaches (Shapira, 1997). However, we choose to remain within the current debate between AI and philosophy, for three reasons. First, they are trying to provide a formal account of the mental representations and reasoning of the agents; second, it is a domain where the economic and game-theoretic interpretation of social action is not yet dominating; third, it is a domain where the clash between the two competing paradigms (symbolic cognition and emergence/complexity) is clearer and cannot be solved just verbally, and - to be true - it is about these two paradigms that we are talking here.


1.3. Individual mind and social cooperation: "joint activity" and "team work"

We cannot understand and explain collaboration (Grosz, 1996), cooperation (Tuomela, 1993; Tuomela & Miller, 1988; Conte & Castelfranchi, 1995; Jennings, 1993), or teamwork without explicitly modelling the beliefs, intentions, plans, and commitments of the involved agents.

Let us take the important analysis of teamwork by Cohen and Levesque (Levesque et al., 1990; Cohen & Levesque, 1991) as an example of the AI approach (and of its contradiction).

In Cohen & Levesque's (1991) terms, cooperation is accounted for in terms of joint intentions: x and y jointly intend to do some action if and only if it is mutually known between x and y that:

- they each intend that the collective action occur;
- they each intend to do their share (as long as the other does it);
- this mutual knowledge persists until it is mutually known that the activity is over (successful, unachievable, etc.).

Moreover, a team, a group, a social agent (Rao et al., 1992), etc. are defined in terms of Joint Persistent Goals 5.
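As a schematic rendering of the three conditions above (the notation is an assumption of this sketch, not Cohen & Levesque's own syntax: MK stands for mutual knowledge, INT_i for the intention of agent i, and share_i for i's part of the collective action a):

    \mathrm{JointIntend}(x,y,a) \;\equiv\;
      \mathrm{MK}_{x,y}\big( \mathrm{INT}_x(\mathit{occurs}(a)) \wedge \mathrm{INT}_y(\mathit{occurs}(a))
      \wedge \mathrm{INT}_x(\mathit{share}_x) \wedge \mathrm{INT}_y(\mathit{share}_y) \big)
      \;\mathbf{until}\;
      \mathrm{MK}_{x,y}\big( \mathit{done}(a) \vee \mathit{impossible}(a) \big)

where the until clause captures the persistence condition: the mutual knowledge is maintained until the termination of the activity is itself mutually known.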

In our view, this approach (like the original analysis by Tuomela) shows that to model and formalize team cooperation it is necessary to model the minds of the involved agents: the beliefs of the agents about each other and about the joint plan, and the commitments of the agents towards each other. More than this: we think that this approach is not sufficient to account for a group or a truly co-operative work, because a much richer representation of the social minds is needed (Conte & Castelfranchi, 1995). In fact, in these models there is only a limited account of the individual mental states in cooperation. First, one should explicitly model not only the beliefs about the intentions and the shares of the others, but also the goals about the actions and the intentions of the others (Grosz & Kraus, 1996): each member not only expects but wants that the others do their job. And conversely one should model the social commitment to the others also in terms of delegation of goals/tasks (Castelfranchi & Falcone, 1998) and of compliance with the others' expectations: i.e. as goal-adoption (Castelfranchi, 1984; Castelfranchi, 1995).

Second, in order to provide a good definition of teamwork (and to design an artificial agent who is able to co-operate) it is necessary to provide a theory of the agents' motives for participating in teamwork; of how cooperation is formed from individual needs and desires; of which rewards one expects and obtains.

5 "A team of agents have a joint persistent goal relative to q to achieve p (a belief from which, intuitively, the goal originates) just in case: (1) they mutually believe that p is currently false; (2) they mutually know they all want p to eventually be true; (3) it is true (and mutually known) that until they come to believe either that p is true, or that p will never be true, or that q is false, they will continue to mutually believe that they each have p as a weak achievement goal relative to q and with respect to the team", where a weak achievement goal with respect to a team has been defined as "a goal that the status of p be mutually believed by all the team members" (Cohen & Levesque, 1991).


In other words, not only the direction of causation from macro to micro should be accounted for, but also the way up. Not only the direction from the group to the individual (task allocation, etc.) should be studied, but also that from the individual to the group. We need definitions that imply the reasons why agents adopt (and hence share) others' goals. Motivations are part of the notion of group, of cooperation, or of joint activity, and they allow, for example, exchange to be clearly distinguished from cooperation: while in strict cooperation agents intend to do their share to reach a common goal, and defecting is self-defeating, in exchange they have their private goals, are indifferent to the achievements of the others, and are prone to cheat and to defect. The cognitive capabilities required of the agents widely differ in the two conditions.

So, personal motivations and beliefs, and social beliefs and goals (about the minds of the other agents), social commitments, and expectations must be modelled to understand deliberated forms of strict cooperation, exchange, teamwork, and organization.

Finally, this also entails a (mental) representation of obligations and norms (Conte & Castelfranchi, 1993), without which there is neither true agreement nor social commitment; and without which speech act theory itself is vacuous, since a fundamental aspect of speech acts is precisely the formation of obligations in both speaker and hearer (Castelfranchi, 1999).

Without representing the agents' minds we cannot distinguish between altruistic and selfish acts, between gifts and merchandise (Castelfranchi, 1984), or between exchange and coercion. We cannot predict the behavior of the agents in these very different social relations, for example how prone the agent is to abandon its commitment without informing the other 6.

1.4. The mental counterparts of social values

Very relevant social constructs like social hierarchies and comparisons, the social "value" of people (reputation, image, esteem), and social values are built with and thanks to a specific mental entity: evaluations. Let us - very briefly - characterise what kind of mental object an evaluation is, how it is reducible to its elementary components (beliefs and goals), and how it works; and then characterise how social values are represented in the agents' minds, how they are related to evaluations and to choices, and how they influence action (for a well-argued and extended analysis of this see Miceli & Castelfranchi, 1989, 1992 and forthcoming) 7.

An evaluation is a kind of belief concerning the power (hence the usefulness) a given entity (be it an object, organism, agent, institution, etc.) or state of the world is endowed with in relation to a certain goal; evaluations are closely linked to goals: they are beliefs arising from goals, and that give rise to goals; evaluations play a crucial role both in cognition (problem solving) and in social interaction.

6 Game theory, which does not have a real theory of action and of social action because it lacks an explicit theory of the agents' goals (badly surrogated by the utility function), cannot deeply model cooperation and clearly disentangle different forms of it (Castelfranchi & Conte, 1998).

7 This section is in fact just a summary of these works.

A value is a special kind of evaluation; while an evaluation implies a means-end relationship (an entity or state of the world is assumed to be "good" or "bad" for a goal p) a value is a "means" for an unspecified goal, or class of goals, and turns into something "good" in itself; values show an even closer relationship with goals, and in particular norms, by virtue of the absolute character of this special kind of evaluations; values serve important functions, both cognitive and social, which result from their being a borderline mental object, between evaluations and goals: as we shall see, values' kinship with absolute imperatives in fact favours the social function, while their cognitive functions are made possible by their evaluative features.

Let us look at these points a bit more precisely.

Evaluations

We define an evaluation of an entity or event x as a belief of an evaluating agent e about x's usefulness with regard to a goal p. If for instance x is a pair of scissors and e believes - from direct experience, inference, or someone else's communication - that it is good for cutting a piece of cloth, in so doing e is evaluating the scissors with regard to that goal. We might represent an evaluation (more precisely a "positive" evaluation) as follows:

(BEL e (GOOD-FOR x p))

where x denotes an entity variable (i.e., an object of any kind: physical object, organism, agent, etc.), e is an agent variable, and p is a well-formed formula representing a state of the world; the predicate (GOOD-FOR x p) means that x is a means for p, and this is what e BELIEVEs. GOOD-FOR has a very broad semantics: it merely expresses a very broad means-end relationship, i.e. that x is useful for making p true: x may either directly realise p, or cause p to be realised, or favour that p be realised.

Evaluations are a special kind of beliefs, characterised by a strict relationship with action, by virtue of their link with goals. Evaluations imply goals by definition, in that the latter are a necessary component of evaluations, namely, the second argument of the GOOD-FOR predicate. From a more "substantialist" perspective, evaluations imply goals in the sense that they originate from them: it is the existence of some goal p (either e's or someone else's) that makes the world good or bad, justifies and motivates both the search for a means x to achieve it, and the belief that x is (not) GOOD-FOR p. Goals and evaluations endow objects and people with "qualities" and "faults".

The relationship between evaluations and goals is even closer, because evaluations not only imply goals, but also can generate them. In fact, if e believes x is good for some goal, and e has that goal, e is also likely to want (possess, use) x. So there is a rule of "goal generation" which might be expressed as follows:


if e believes something x to be a means for e's goal p, e comes to have the goal (use e x) of exploiting the means x.

Being able to deliberate, that is, to choose an alternative on the grounds of explicit evaluations concerning the "goodness" of the various options, and being capable of reasoning aimed at supporting such judgments, adds further advantages to the mere fact of making choices on the basis of some weight or "procedural preference". In these cases, in fact, the system can justify its choices, as well as modify the "values" at stake through reasoning. Moreover, it is liable to persuasion, that is, it can modify its preferences on the grounds of the evaluations conveyed by others.
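A minimal computational sketch may make this machinery concrete. The class names and the generate_goals rule below are our illustrative assumptions, not the authors' formalism; the code only mirrors the (BEL e (GOOD-FOR x p)) pattern and the goal-generation rule just stated.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Evaluation:
        """The belief (BEL e (GOOD-FOR x p)): entity x is a means for goal p."""
        entity: str
        goal: str               # p: the state of the world x is good for
        positive: bool = True   # "good for" vs "bad for"

    @dataclass
    class Agent:
        name: str
        goals: set = field(default_factory=set)          # active goals
        evaluations: list = field(default_factory=list)  # evaluative beliefs

        def generate_goals(self):
            """Goal-generation rule: if e believes x is GOOD-FOR p and e has
            the goal p, e comes to have the goal (use e x) of exploiting x."""
            derived = {f"use({self.name}, {ev.entity})"
                       for ev in self.evaluations
                       if ev.positive and ev.goal in self.goals}
            self.goals |= derived
            return derived

    # Usage: e believes the scissors are good for cutting cloth, and has that goal.
    e = Agent("e", goals={"cut-cloth"},
              evaluations=[Evaluation("scissors", "cut-cloth")])
    print(e.generate_goals())   # {'use(e, scissors)'}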

We interact with people on the basis of the image and trust we have of them, i.e. on the basis of our evaluations of them: this establishes their "value" and reputation. And social hierarchies too are just the result of the evaluations that individuals and groups receive from each other. Notice that these hierarchies of values and importance can be either explicitly represented in the minds of some agents (e.g. in small groups) or just emerge from partial - and perhaps contradictory - hierarchies and comparative evaluations in the minds of the agents.

Values

We use value in a precise and strict sense (although accounting for the different but related meanings that this term has both in common sense and in the social sciences 8; see Miceli & Castelfranchi, 1989, 1992 and forthcoming). The mental counterpart of social values is - in our view - a specific mental object strongly related to evaluations, standards for evaluations, and norms.

In everyday language, value has two distinct meanings, both related to the concept of evaluation. One meaning is "relative", that is, the value of something for something else. This coincides with the result of an evaluation, or with the notion of means itself: if x is considered as a (good or bad) means for p, it acquires a (positive or negative) "value" with regard to p. By contrast, the other meaning of value is "absolute": for instance, honesty does not have a value in view of something else; it is a value. It is precisely this absolute meaning that we are interested in here.

8 In the social sciences, "value" seems to traditionally play the role of a passe-partout concept, accounting for a variety of phenomena, from the intrapsychic to the social domain. Time after time, it has come to overlap a number of concepts, such as: valence (Pepper, 1958); goal (Koehler, 1938; Pepper, 1958), in particular of the general and long-term kind (Cranach, Kalbermatten, Indermuhle & Gugler, 1982; Rokeach, 1974); need (Maslow, 1959); standard (Becker, 1968; Parsons, 1951); and norm (Rokeach, 1974; Williams, 1964). Sometimes, values are also conceived as a kind of belief. The core of such views (often coexisting with one or more of the previous ones) is that values are "conceptions of the desirable" (Kluckhohn, 1951; Krech, Crutchfield & Ballachey, 1968; Rokeach, 1974; Williams, 1964), where the desirable is often kept distinct from the desired; in other words, values would concern what should be desired, rather than what is actually desired. By and large, we share the latter notion of values in terms of beliefs. But, before presenting our view in more detail, let us start from the meaning of value in everyday language, which in part accounts for the conceptual vagueness and overlapping we have just mentioned.


When we say that x (say, honesty) is a (positive) value, we mean that it is good. But while in evaluating some entity x or state q as good, we are assuming it is good for something else, i.e. some goal p, here, on the contrary, we are considering x or q as good tout court, or, more precisely, we are leaving unspecified what honesty is good for.

Therefore a value, in its absolute meaning, can be represented as a "broken" evaluation which leaves its second argument unspecified (Miceli & Castelfranchi, 1989):

(BEL e (GOOD-FOR x ?))

It should be stressed that the difference between values and evaluations is not a difference between "things good in themselves" and "things good for something else", respectively. The semantics of good always implies "for something". The difference between values and evaluations lies in the fact that values leave unspecified the "something" they are good for. Since the means-ends continuum is broken, values appear as absolute, with a number of important consequences both at the psychological and social level.

The notion of value we have suggested allows us to distinguish it from related concepts with which value is often made to overlap. Here we will consider goals and norms, whose "kinship" with values is particularly close.

Values and terminal goals. We have just said that values mention "means" without specifying the "ends" of such means, and that is why they appear to mention "ends in themselves". In fact, values look very similar to a special kind of goals, i.e. terminal or top ones - those that, within the mind of the individual, are not represented as instrumental to other (higher order) goals, but as ends in themselves. However, this should not lead us to infer that values coincide with terminal goals. Goals are regulatory mental states, while values are just beliefs (in particular, evaluative beliefs of a special kind). In other words, values have a different status in the agent's mind: they are judgements, however general and absolute, about the goodness (preciousness, usefulness) of something. Consider a possible terminal goal such as "making friends", or "avoiding loneliness". As a terminal goal, "making friends" is a mental state that regulates my behavior (and makes me look for people, go to parties, etc.) as an end in itself (i.e., not in view of some other superordinate goal). As a value, "having friends" is, first of all, the object of a belief of the kind "having friends is good". Since this is what I believe, I will also want to have friends, but as a consequence of my belief. That is, values are likely to generate goals 9.
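Continuing the illustrative sketch of 1.4 (same caveats: the names are our assumptions, not the authors' formalism), a value can be rendered as a "broken" evaluation whose goal slot is left unspecified, and the goal it generates is terminal rather than instrumental:

    from dataclasses import dataclass

    # Reuses the Agent class and the agent e from the earlier sketch.

    @dataclass(frozen=True)
    class Value:
        """A "broken" evaluation (BEL e (GOOD-FOR x ?)): the goal slot is
        unspecified, so x is believed good tout court."""
        entity: str   # e.g. "honesty", "having-friends"

    def goal_from_value(agent, value):
        """Values generate terminal goals: since the entity is believed good
        tout court, it is pursued as an end in itself, not in view of some p."""
        terminal_goal = f"achieve({value.entity})"
        agent.goals.add(terminal_goal)
        return terminal_goal

    print(goal_from_value(e, Value("having-friends")))  # achieve(having-friends)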

9 Not only do values generate (terminal) goals, but also, vice versa, goals may "generate" values. Let us consider again the terminal goal of "making friends", and suppose that I have got it as such, i.e., it is not derived from a value of mine: I just feel this need for belongingness and friendship. However, at this point I can construct a corresponding value, and come to believe that "making friends" is a good thing to do. This is an ad hoc and post hoc generation of a value, which is meant to justify the existence of the terminal goal. Its psychological interest is quite apparent: in our search for supports and justifications for our needs and choices, we are likely to invert, so to say, the "natural" (or better, rational) order of the (mental) events; since what is good should be pursued, we want to believe that what we actually pursue is good. An old tradition of thought, starting from Spinoza, shares in fact this view of values: we regard as good, and transform into values, what we like and desire.


So, terminal goals do not coincide with values. They may be a product of values. Though not every terminal goal stems from a value, it is still true that every value is likely to give rise to some terminal goal. Such a strict relationship between values and goals is necessary according to our model of evaluation, and in particular to our rule of goal generation.

Values and norms. What we have just said about the difference between values and goals also applies in the case of norms, when these are conceived of as prescriptions, that is, a special kind of goals (socially shared, set on individuals in view of some common "good", etc.). While norms, like goals, regulate behavior, values are a kind of evaluative beliefs. So, while "be reliable" would be a norm, "reliability is good" is a value 10.

Our analysis allows us to account for a number of typical features of values, namely their being unfalsifiable, indefinite, normative, and terminal.

Values are unfalsifiable. If I do not know why something is good (that is, what goal it serves), I cannot prove that it is so or verify whether it is good or not. The goodness or badness of something is meaningful only if it can be translated into its instrumentality for something else. Values, by definition, do not provide information about their own instrumentality.

Values are indefinite. Generally, objects, events, and behaviors present some boundaries of application, beyond which they are useless, or even detrimental: the use of an entity x or a state of the world q can be good for p up to a certain point, or it can be good for p and at the same time bad for some other goal r. Undesired side-effects or conflicts among different coexisting goals generally establish the boundaries of application and use of entities, states, and events. However, the boundaries of application of values are hardly known. If I do not know what goals a certain x is good for, I will not know how far x is good, and where or when it can come into conflict with other goals and values.

The unfalsifiable and indefinite character of values places them in the realm of so-called "irrationality", which is crowded with unquestionable and fuzzy entities. In particular, the indefinite nature of values accounts for the fact that people are quite likely to harbour incompatible or conflicting values (e.g., "career" vs. "family life"), without either realising the existence of the conflict or being able to solve it (MacIntyre, 1981).

10 While the normative character of a goal generated by an evaluation is, so to say, conditional (if, and as long as, p is a goal, q should be a goal), values, by contrast, are more cogently normative, in that the goals they produce are unconditioned: since q is given as good tout court (and not for some p), it should be a goal tout court. So, the terminal goal generated by a value in fact turns into a norm. Such a close kinship between values and norms accounts for the common overlapping of the two concepts. And that is also why values are (rightly) seen as "conceptions of the desirable", as opposed to the actually "desired".


In fact, when a conflict is identified (because, for instance, one registers that two values cause conflicting side-effects), it can be very hard to solve it, because, not knowing which goals are served by the conflicting values, one cannot know what to prefer and choose.

Values are normative. As already shown, if x or q is "good", it should be wanted, according to a rule of goal generation which, when applied to values, acquires a special normative character. It is worth observing that, compared with actual norms, values present some advantages, in that they show a greater persuasive power. In fact, it is often more effective to convey norms through values than to express them directly. Values appear more "acceptable" than norms for a number of reasons. First, values do not show the arrogant and impositive character that is typical of prescriptions and commands. They do not say "Do this!", but just express some judgement about the goodness or badness of something. Second, values justify the prescriptions they implicitly convey, that is, they provide some reason (i.e. x's or q's "goodness"), however absolute and vague, for complying with the implicit prescription. Third, values also involve the conveyer: when conveying a value, one is also showing oneself to share that value, i.e. to share the judgement of goodness or badness, and therefore also the prescription it implies.

Values are terminal. While the goals generated by evaluations are always instrumental, in that they are relativized to the goals implied in the evaluations, the goals and norms generated by values are always terminal, i.e., ends in themselves. If I do not know which goals x or q is good for, I will pursue it tout court, not in view of something else.

The normative and terminal nature of values accounts for their dogmatic character: values appear as unquestionable assumptions that produce absolute imperatives. It is quite understandable that an individual's values are in fact associated with or traced back to his or her relationships with "authority" and "significant others". The particular importance people attach to their values is also quite understandable. Values can hardly be disregarded or neglected. The norms they convey cannot be violated, except on pain of serious feelings of guilt.

In conclusion, first, several typical features of values - already recognised by the social sciences - derive from their cognitive properties; second, similarly to norms and differently from functions, social values (or part of social values) are explicitly represented in the agents' minds and work thanks to these "conceptions"; although the agents remain unaware of the functions and of the origin of their values.

1.5. Norms as mental objects and the need for their recognition as norms

A norm N emerges as a norm only when it emerges as a norm in the minds of the involved agents; not only through their minds (as in approaches based on imitation or behavioral conformity, e.g. Bicchieri, 1990).


Without some mental counterpart of social norms we could not explain how they succeed in regulating the agents' behaviors, i.e. in producing intentions; moreover, this mental counterpart is the acknowledgement and the adoption of the norm N itself. N works as a N only when the agents recognise it as a N, use it as a N, "conceive" it as a N (Conte & Castelfranchi, 1995).

Norm emergence and formation imply "cognitive emergence" (and thus cognitive agents): a social N is really a N after its Cognitive Emergence (CE) (see 2.2).

As long as the agents interpret the normative behavior of the group merely as a statistical "norm", and comply by imitation, the real normative character of the N remains unacknowledged, and the efficacy of such a "misunderstood N" is quite limited. Only when the normative (which implies "prescriptive") character of the N becomes acknowledged by the agent does the N start to operate efficaciously as a N, through the true normative behavior of that agent. Thus the effective "cognitive emergence" of N in the agent's mind is a precondition for the social emergence of the N in the group, for its efficacy and its complete functioning as a N.

Notice that this CE is partial: for norms to work it is not necessary that social Ns as a macro-phenomenon be completely understood and transparent to the agents. What is necessary (and sufficient) is that the agents recognise the prescriptive and anonymous character of the N, the entitled authority, and the implicit pretence of the N to protect or enforce some group-interest (which may be against particular interests). It is not necessary that the involved agents (e.g. the addressee or the controller) understand or agree about the specific function or purpose of that N. They should respect it because it is a N (or, sub-ideally, thanks to surveillance and sanctions), but in any case because they understand that it is a N, and do not mix it up with a widespread habit or a personal order or expectation. Norms, to work as norms, cannot remain unknown to the addressee, but the agent can remain absolutely ignorant of the emerging effects of the prescribed behavior in many kinds of Norm-adoption (Conte & Castelfranchi, 1995) 11. Normative behavior has to be intentional and conscious: it has to be based on knowledge of the norm (prescription), but this does not necessarily imply consciousness and intentionality relative to all the functions of the norm (Castelfranchi, 1997).

2. Mind is not enough: objective social structures and emergent forms of cooperation

AntiThesis: The theory of individual (social) mind and action is not enough to understand and explain several macro-level social phenomena. First, there are pre-cognitive, objective social structures that constrain the actions of the agents independently of their awareness or decision; second, there are emergent, unaware 12 or non-contractual forms of cooperation, organisation, and intelligence.

11 In some forms of Norm Adoption, at least some of the functions of the Norm are conscious and pursued by the agent (Conte & Castelfranchi, 1993 and 1995).

12 "Unaware" in the sense of unknown, not understood, not explicitly represented; not in the sense of "unconscious" as opposed to "conscious".


We will apply this to: interference and dependence relations, interests, and social functions.

2.1. Objective social structures

Some social structures are deliberately constructed by the agents through explicit or implicit negotiation (at least partially; for example, role structures in organisations); others emerge in an objective way.

Let us focus in particular on one structure: the network of interdependencies, not only because it is the most basic one for social theory, but also because it emerges before and beyond any social action, contract, or decision of the involved agents.

An emergent objective structure: the dependence network

There is "interference" (either positive or negative) between two agents i f the effects o f the actions o f the former can affect (favour or damage) the goals/outcomes o f the other. There is "dependence" when an agent needs an action or a resource o f the other agent to fulfil one (or more) o f its goals.

The structure of interference and interdependence among a population of agents is an emergent and objective one, independent of the agents' awareness and decisions, but it constrains the agents' actions by determining their success and efficacy 13.

Given a group of agents in a common world, and given their goals and their different and limited abilities and resources, they are interdependent on each other: a dependence structure emerges. In fact, given agent A with its goal Ga, and its plan Pa for Ga, and given the fact that this plan requires actions a1 and a2 and resource r1: if agent A is able to do a1 and a2 and owns resource r1, we say that it is self-sufficient relative to Ga and Pa; when on the contrary A either is not able to perform, for example, a1, or cannot access r1 (and thus does not have the power of achieving Ga by itself), while there is another agent B which is able to do a1 or possesses r1, we say that A depends on B as for a1 or r1 for the goal Ga and the plan Pa.

13 This relational structure is not the output of previous agreements or contracts between the agents. It can simply be the result of their practical interference and dependence, due to their limited and different abilities and resources and to a common environment. How can this structure of objective relationships influence the behavior of cognitive intentional agents without being known to them? Of course, the agents' awareness of these relationships - based on experience, reasoning and communication - is an important root of its influence on their behavior (see 2.2) (although, in this case, what is influential is the subjective beliefs about these relationships more than the objective structure alone). However, there is an influence due to the objective structure per se, although it is ignored. This is due to the failures of the attempts based on such ignorance or on wrong assumptions (I assume that Y will be interested in helping me, while it is completely autonomous or unable; etc.), and to the success of actions that (accidentally, from the subjective point of view) correspond to the dependence structure. This structure also determines the real power and "value" of an agent, and of its abilities and resources, within a given population or "market", independently of its awareness of this.


A is objectively depending on B (even if it ignores this or does not want this): actually, it cannot achieve Ga if B does not perform a1 or does not make r1 accessible (Castelfranchi et al., 1992).
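A minimal sketch of how such a dependence structure can be computed objectively from the agents' goals, plans, abilities and resources (the data layout and function names are our assumptions; the paper itself gives no algorithm):

    from dataclasses import dataclass, field

    @dataclass
    class DepAgent:
        name: str
        abilities: set                 # actions the agent can perform
        resources: set                 # resources the agent owns
        plan: list = field(default_factory=list)  # steps: ('act', 'a1'), ('res', 'r1'), ...

    def dependence_network(agents):
        """A depends on B for a step A cannot supply itself but B can.
        The relation is objective: it holds whether or not A knows it."""
        deps = []
        for A in agents:
            for kind, item in A.plan:
                own = A.abilities if kind == 'act' else A.resources
                if item in own:
                    continue  # A is self-sufficient for this step
                for B in agents:
                    if B is not A and item in (B.abilities if kind == 'act' else B.resources):
                        deps.append((A.name, B.name, item))
        return deps

    A = DepAgent("A", abilities={"a2"}, resources=set(),
                 plan=[("act", "a1"), ("act", "a2"), ("res", "r1")])
    B = DepAgent("B", abilities={"a1"}, resources={"r1"})
    print(dependence_network([A, B]))   # [('A', 'B', 'a1'), ('A', 'B', 'r1')]

Note, as the text stresses, that the computation uses only objective facts about the agents (abilities, resources, plans), never their beliefs: the network exists whether or not anyone represents it.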

There are several typical dependence patterns, for instance the OR-dependence, a disjunctive composition of dependence relations, and the AND-dependence, a conjunction of dependence relations. To give a flavour of these distinctions, let us just detail the case of a two-way dependence between agents (bilateral dependence). There are two possible kinds of bilateral dependence:

1) Mutual dependence, which occurs when x and y depend on each other for realising a common goal p, which can be achieved by means of a plan including at least two different acts such that x is depending on y's doing ay, and y is depending on x's doing ax.

Cooperation is a function of mutual dependence: in cooperation, in the strict sense, agents depend on one another to achieve one and the same goal (Conte & Castelfranchi, 1995); they are co-interested in the convergent result of the common activity.

2) Reciprocal dependence, which occurs when x and y depend on each other for realising different goals, that is, when x is depending on y for realising x's goal that p, while y is depending on x for realising y's goal that q, with p ≠ q.

Reciprocal dependence is to social exchange what mutual dependence is to cooperation.
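Schematically (our notation, not the authors'), writing Dep(x, y, a, p) for "x depends on y's doing a for x's goal p", the two bilateral patterns differ only in whether the goal argument is shared:

    \text{Mutual:} \quad \mathrm{Dep}(x, y, a_y, p) \wedge \mathrm{Dep}(y, x, a_x, p)

    \text{Reciprocal:} \quad \mathrm{Dep}(x, y, a_y, p) \wedge \mathrm{Dep}(y, x, a_x, q), \quad p \neq q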

The dependence network determines and predicts partnership and coalition formation, competition, cooperation, exchange, functional structure in organisations, rational and effective communication, and negotiation power, and there is simulation-based evidence of this (Castelfranchi & Conte, 1996). Notice that this emerging structure is very dynamic: by simply introducing a new agent or eliminating one agent, or simply changing some goal or some plan or some ability of one agent, the entire network can change. Moreover, after the feedback of the network itself on the agents' minds (see 2.2), and the consequent dropping of some goals or the adoption of new goals, the dependence relations change.

When A is depending on B for its goal Ga, B gets an (objective) power over A as for Ga. This power over - which is the power to consent or prevent the achievement of A's goal, the power of giving A positive or negative reinforcements and incentives, the power to punish or to reward A - once known, is the most important basis for B's power of influencing A (Castelfranchi, 1990).

2.2. Cognitive Emergence of objective relations and its effect

When the micro-units of emerging dynamic processes are cognitive agents, a very important and unique phenomenon arises: Cognitive Emergence (Conte & Castelfranchi, 1995 and forthcoming).


There is "cognitive emergence" 14 when agents become aware, through a given "conceptualisation", o f a certain "objective" pre-cognitive (unknown and non deliberated) phenomenon that is influencing their results and outcomes, and then, indirectly, their actions. CE is a feedback effect o f the emergent phenomenon on its ground elements (the agents): the emergent phenomenon changes their representations in a special way: it is (partially) represented in their minds. The "cognitive emergence" (through experience and learning, or through communication) o f such "objective" relations, strongly changes the social situation (Figure 1): relations o f competit ion/aggression or exploitation can rise from known interference; power over relations, goals o f influencing, possible exchanges or cooperation, will rise from acknowledged dependence.

[Figure 1. The levels of emergence and their feedback]

In other words, with CE, part of the macro-level expression - the emerging structures, relations, and institutions, or compound effects:

- are explicitly represented in the micro-agents' minds, are partially understood, known by (part of) them;
- there are opinions and theories about it;
- there might be goals and plans about it, and even a deliberated construction of it (either centralised or distributed and co-operative).

14 For a broader view see Castelfranchi (1998), where it is claimed that emergence and cognition are not incompatible: they are not two alternative approaches to intelligence and cooperation, i.e. two competitive paradigms, and they must be reconciled: first, by considering cognition itself as a level of emergence, both as an emergence from sub-symbolic to symbolic (symbol grounding, emergent symbolic computation), and as a transition from objective to subjective representation (awareness) - like in our example of dependence relations - and from implicit to explicit knowledge; second, by recognizing the necessity of going beyond cognition, modelling emergent, unaware, functional social phenomena (e.g. unaware cooperation, non-orchestrated problem solving) also among cognitive and planning agents. In fact, mind is not enough for modelling cooperation and society. We have to explain how collective phenomena emerge from individual action and intelligence, and how a collaborative plan can be only partially represented in the minds of the participants, with some part represented in no mind at all.

From subjective dependence to social goals, from "power over" to "influencing power"

The pre-cognitive structure illustrated in 2.1 can "cognitively emerge": i.e. part of these constraints can become known. The agents, in fact, may have beliefs about their dependence and power relations.

Either through this "understanding" (CE) or through blind learning (based for example on reinforcement), the objective emergent structure of interdependencies feeds back into the agents' minds, and changes them (Figure 1). Some goals or plans will be abandoned as impossible, others will be activated or pursued (Sichman, 1996). Moreover, new goals and intentions will arise, especially social goals: the goal of exploiting some action of the other; the goal of blocking or aggressing against another, or of helping it; the goal of influencing another to do or not to do something; the goal of changing the dependence relations. For example, if A understands that for achieving its goal Ga it is depending on B performing a1, it will derive the new (social) goal that B performs a1, and it will try to induce B to do this (a minimal sketch of this derivation is given below). Agent A may even have the goal of creating a new dependence relation (making B dependent on itself) in order to get some power over B and induce it to do a1. So, dependence relations not only spontaneously and unconsciously emerge and can be understood (CE), but they can even be planned and intended.
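Continuing the dependence sketch of 2.1 (same illustrative assumptions), the CE step can be rendered as a rule that turns a dependence relation, once believed, into a social goal of influencing the other agent:

    def social_goals_from_dependence(agent_name, believed_deps):
        """CE step: once A believes it depends on B for item s (an action or
        a resource), A derives the social goal of influencing B to supply s."""
        return [f"influence({b}, supply({s}))"
                for a, b, s in believed_deps if a == agent_name]

    # Suppose A has come to know (CE) the objective relations computed earlier:
    known = [("A", "B", "a1"), ("A", "B", "r1")]
    print(social_goals_from_dependence("A", known))
    # ['influence(B, supply(a1))', 'influence(B, supply(r1))']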

Analogously, when B becomes aware of its "power over" A, it will have the goal of using this power in order to influence A to do or not to do something: influencing power. It might for example promise A to do a1, or threaten A with not doing a1, in order to obtain something from A (Castelfranchi, 1990).

Without the emergence of this self-organising (undecided and non-contractual) objective structure, and usually without its CE, social goals would never evolve or be derived.

2.3. Social cooperation does not always need agents' understanding, agreement, rational and joint planning

Unlike what is claimed by Bond & Gasser (1988, 1989 and 1991), social relations and organisations are not held together or created by commitments (mutual or social) of the individuals. Most social relations, and part of the social structures, pre-exist the interactions and commitments of the individuals. Agents find themselves in a network of relations (dependence, competition, power, interests, etc.) that are independent of their awareness and choice.

Social cooperation does not always need the agents' understanding, agreement, contracts, rational planning, or collective decisions (Macy, 1998).


There are forms of cooperation that are deliberated and contractual (like a company, a team, an organised strike), and other forms of cooperation that are emergent: non-contractual and even unaware. Modelling those forms is very important. Our claim (Castelfranchi, 1997; Castelfranchi & Conte, 1992) is that it is important to model them not just among sub-cognitive agents 15 (Steels, 1980; Mataric, 1992), but also among cognitive and planning agents, whose behavior is regulated by anticipatory representations. In fact, these agents too cannot understand, predict, and control all the global and compound effects of their actions at the collective level. Some of these effects are self-reinforcing and self-organising.

Thus, there are important forms of cooperation which do not require joint intention, shared plans, or mutual awareness among the co-operating agents. The co-operative plan, in which the sub-plans represented in the mind of each participant and their actions are "complementary", is not represented in their minds.

This is the case of hetero-directed or orchestrated cooperation, where only a boss' mind conceives and knows the plan, while the involved agents may even ignore the existence of each other and of a global plan; and perhaps even the boss does not know the entire plan, since some part has been developed by the delegated agents (Castelfranchi & Conte, 1992).

This is also the case of functional self-organising forms of social cooperation (like the technical division of labour) where no mind at all conceives or knows the emerging plan and organisation. Each agent is simply interested in its own local goal, interest and plan; nobody directly takes care of the task distribution, of the global plan and equilibrium.

3. Towards a bridge between cognition and emergence, intention and function, autonomous goal-governed agents and goal-oriented social systems

Synthesis: The real challenge is how to reconcile cognition with emergence (Gilbert, 1995), intention and deliberation with unknown or unplanned social functions and "social order". Both objective structures and unplanned self-organising complex forms of social order and social function emerge from the interactions of agents in a common world and from their individual mental states; both these structures and self-organising systems feed back on agents' behaviors through the agents' individual minds, either by the agents' understanding (part of) the collective situation (cognitive emergence) or by constraining and conditioning agents' goals and decisions. These feedbacks (from macro-emergent structures/systems) either reinforce or change the individual social behavior, producing either the dynamics or the self-reproduction of the macro-system.

We will attempt to sketch some bridge-theories between micro and macro:

~5 By "sub-cognitive" agents I mean agents whose behavior is not regulated by an internal explicit representation of its purpose and by explicit beliefs. Sub-cognitive agents are for example simple neural-net agents, or mere reactive agents.


- a theory of the relationship between external and internal goals in goal-governed systems;
- a theory of cognitive and motivational autonomy;
- a theory of social functions, which presupposes in turn:
- a theory of unintended expected effects;
- a theory of cognitive reinforcement learning in intentional agents.

3.1. External goals on goal-oriented agents

As said at the beginning, a social system works thanks to the behaviors of its members, and then through their goals and their capacity of pursuing them on the basis of their beliefs. From this, several questions can be raised 16.

How do social systems regulate the behaviors of their members? How do these behaviors happen to respond to the goals of the social system? What is the origin of the social system's goals? What, in other words, is the relationship between the social system's goals and the goals internal to its members, which directly and actually regulate the latter's actions? Are the members able to understand and represent explicitly in their minds the social system's goals? Or are the goals of the social system simply a projection or promotion of the goals of (some of) its members? Or do the members' goals and plans happily coincide with those of the social system? We believe that these solutions are neither necessary nor sufficient.

This section is devoted to sketching a theory of these relationships, which we consider a crucial bridge theory between micro and macro.

Our claim is that there may be goals that are external to a given finalistic system and that determine its structural or functional characteristics from without, and in varying ways (Castelfranchi, 1982). These, which we will call external goals, can be imposed upon inert objects, determining their use, destination, or function. They may also be placed on goal-governed systems of varying levels of complexity (a boiler-thermostat, a horse, a child, a traffic policeman, and any other role player). Moreover, we claim that an analogous relation exists between the internal goals of a goal-governed agent and the biological or social finalities its behavior responds to. So, the general problem is that of the relationships between the intrapsychic and the extrapsychic finalistic, teleonomic notions (Mayr, 1982).

The basic unifying questions are as follows:

(a) Many features, behaviors, and goals of micro-systems serve, and derive from, an external pressure, request, advantage or need. These requirements may either be imposed on those systems by some designer, educator, or authority; or may not be imposed by anyone, but simply result from an adaptive pressure or a social practice. But how can agents' features and goals be derived from external requirements and pressures?

16 This section is basically an abridged version of Conte & Castelfranchi (1995, ch. 8) and of Castelfranchi (1992).


(b) Many natural and social behaviors exhibit a teleological character. Nevertheless, they cannot all be defined as goal-governed: we neither want to attribute represented goals - e.g. intentions - to all kinds of animals, nor consider the functional effects of social action (like the technical division of labour) as necessarily deliberate, nor attribute a mind to Society as a whole. Is there a concept that accounts for the teleological character of (social) behavior without postulating internal goals?

Goal-oriented and Goal-governed systems

There are two basic types of system with finalistic (teleonomic) behavior.

Goal-oriented systems (McFarland, 1983) are systems whose behavior is finalistic, aimed at realising a given result that is not understood or explicitly represented (as an anticipatory representation) within the system controlling the behavior.

A typical sub-type of these are Mere Goal-oriented systems, which are rule-based (production rules or classifiers), or reflex-, releaser-, or association-based: they react to a given circumstance with a given adaptive behavior (thanks either to learning or to selection).

Goal-governed systems are anticipatory systems. We call goal-governed a system or behavior that is controlled and regulated purposively by a goal internally represented, a "set-point" or "goal-state" (see Rosenblueth & Wiener, 1968; Rosenblueth et al., 1968). The simplest example is a boiler-thermostat system. As we will show, a "goal-governed" system responds to external goals through its internal goals.

It is crucial to stress that merely goal-oriented systems and goal-governed systems are mutually exclusive classes, but that goal-governed systems can also be goal-oriented. Goal-government may be incomplete. It implements and improves goal-orientedness, but it does not (completely) replace the latter: it does not make the latter redundant (contrary to Elster's claim that intentional behavior excludes functional behavior).

Goal-government (by explicitly represented goals) is a way to guarantee and to serve external adaptive functions. In fact, not only is the behavior functional or adaptive (selected), but so, obviously, are the mechanisms selected to produce and control that behavior: goals included! Thus internal explicit goals may be instrumental to external (non-represented) functions: in this case the goal-governed apparatus is part of a more global goal-oriented behavior.

Consider, for example, those cultures that ignored the relation between having sex and making children. To be sure, reproduction remains a function of the mating behavior and of the sexual goal (sex is instrumental to it); however, within the mind of an agent such a (biological) function, being neither understood nor known, does not directly control the behavior. Relative to the goal of having sex, the sexual behavior is goal-governed (intentional); but relative to the higher goal of making children, that behavior is simply goal-oriented (like, for example, a simple reflex), and the goal-governed mechanism is a way of implementing such goal-oriented behavior (consider that in our culture too we are aware of the relation between sex and reproduction, but our intention frequently enough ignores or runs against this function).

Current goal-governed models (for example, planning agents in AI) or goal-driven agents in psychology (Cranach, 1982) still seem limited. In particular, they focus mainly on the self-regulation of the various systems. They always define a goal in terms of something internal to the system that regulates the system's behavior. They ignore the fact that there may be goals externally impinging on the system that determine such a system from without, and in varying ways.

Let us first examine goals that are external to a system but internal to another system. We begin by analysing the relation between external and internal goals in simple non-human goal-governed systems: a boiler with a thermostat, and a horse. We try to show that external goals may be imposed on people as well, and examine the case of a mother-child relation relative to the goal "child brushes his teeth". We will not discuss here the relation between a role player and the role-goals.

Once the concept of external goal has been introduced as explicitly represented in some mind, we use it as a bridge to reach a more radical unification of the concept of goal and all functional concepts, up to and embracing biological (and later social) functions. In substance, we will assume that there may be goals external to a goal-governed system that are not internal to any other system (i.e. goals that are simply external). We call these goals "finalities" or "functions". This of course requires a reformulation of the very concept of goal (Castelfranchi, 1982).

The notion of "external goal": from mind to mind (17)

When we speak of an external goal "from mind to mind" we refer to a goal-governed system x whose goals are internal regulatory states governing its actions, and look at the effects that the existence of such regulatory states within x has on external goal-governed systems.

One of the relationships that comes about between system x and another system y, as a result of x's regulatory state gx, is the emergence of an external goal placed on y. Let us suppose that a goal of system x mentions an entity y. Suppose y's lot is somehow influenced or determined, not only by chance but by the fact that it is mentioned in one of x's goals. In this case, we say that y has an external goal, or that x has placed an external goal on y.

(17) In order to stay within the limits of this definition we shall use a narrow notion of external goal, leaving aside biological goals or adaptive functions and, in general, what we shall subsequently call finalities. For the moment, we relinquish one of the great conceptual advantages that the term "goal" and its synonyms (and their equivalents in other languages) provide, that is, the effective unification of the two categories. In fact, the concept of external goal may appear incompatible with the current definition of goal, or at least peculiar. We could synthesise the definitions of goal shared within the cognitive sciences as follows: a goal is a representation of a world state within a system that regulates the behavior of the system, selecting and monitoring its actions, trying to adapt the world to that representation.

External goals on goal-governed systems

A thermostat-boiler. Suppose y is a thermostat-boiler system, the simplest kind of goal-governed system, in which the internal representation of the goal is just a cybernetic set-point. x's goal is that the temperature in the house where he resides reaches 25°C; to this end, he turns up the thermostat to 25°C. x has a goal that must be reached by resorting to another system's internal goal and capacity to achieve it. Note that the boiler's and x's goals are not one and the same, even if reaching the former permits the latter. In fact, x's goal is to feel warm or, we might say, that the house be warm (at 25°C), while the goal of the thermostat-boiler is that the index that varies with the actual room temperature (thermometer) coincide with the index showing the objective desired. Moreover, this goal is always the same no matter what temperature x selects.

System y has an external goal (warm house), which corresponds to x's goal; furthermore, y has an internal goal (coinciding indexes), and there is a particular relationship between them, in that the internal goal is a sub-goal, a means for the external one (Figure 2).

Figure 2. External goals on a boiler

We call "respondent internal goal" an internal goal of system y (that is not identical to this external goal), by means of which y is able to respond to the external goal placed on it by another system.

A horse. Let us now examine the case where y is a rather more complex goal-governed system, for example a horse. Suppose this horse has the destination (and it would not be wrong to say the function) of a mount. To have this use value, that is, to be adequate for this kind of external goal, the horse must be broken in and trained. But a horse is also an animal and, as such, it is a system that follows its goals autonomously, on its own initiative. To become a mount it needs to learn to make (some of) its internal (that is, its own) goals respondent or identical to the external ones placed on it by x (Figure 3).

In fact, horse breaking and training consist of exactly this: learning to obey x's commands and to tolerate him on its back. To obey really means to give oneself an internal goal, equal or respondent to the external one, so as to allow the latter to be achieved.


Figure 3. External goals on a horse

A child. Consider a mother and her child. The mother wants her child to brush his teeth every evening, in order to avoid decay. The child does so in order to obey his mother and to make her happy; he ignores, and could not understand, the real function of his behavior (the higher goals in the mother's mind). What, relative to the intentional behavior and the mind of the child, is just an external goal and a function (see later), is an intended goal in the mother's mind.

Exactly the same kind of relation often holds between government and citizens (Castelfranchi, 1990). Government pushes citizens to do something it considers necessary for the public utility, for some common interest; but it asks the citizens to do this by using rewards or sanctions. It does not rely on the citizens' understanding of the ultimate functions of their behaviors, and on their motivation for public welfare; it relies on the citizens' motivation for money or for avoiding punishment.


From external to internal goals

How can the goals of x (external to y) be translated into goals within y's mind? Does y always adopt x's goals?

From all the examples examined, we can conclude that an external goal can be implemented, or better translated, into a goal-governed system in two different ways.

Figure 4. External goals from one mind to the other

(a) As a copy-goal: an internal goal identical to the external goal and derived from it. The external goal is explicitly represented within the mind. This mind may either be aware of the fact that its goal p is also an external goal (somebody's will, a norm, a biological function), or it may ignore this. We will call this type of translation internalisation. External goals may be internalised thanks to a number of different processes and mechanisms (goal-adoption, selection, training).

(b) As a respondent goal: an internal goal which is functional to and derived from an external goal, but not identical. The external goal is not represented within that mind, but, in a certain sense, it is implicit in it.

An external goal placed on a goal-governed system, and referring not to a trait of this system but to its mental states, is a social goal: a goal of an agent mentioning the action of another agent (Castelfranchi, 1997):

(GOAL x (DO y act))

or better, since this formula could also cover external goals placed on merely goal-oriented behaviors (e.g. bacteria):

(GOAL x (GOAL y (DO y act)))

An external goal placed on system y is a social goal of another system x when it mentions a mental state of y's. In particular, an external goal implies an influencing goal when it mentions an action or, better, a goal of y's.


We will not discuss here the uses and destinations of people by other people and higher-level systems (groups, organizations), or people's functions in groups and organisations, i.e. their "roles".

Let us just say that in these contexts too our claim is the same: the role player achieves (responds to) his external goals by pursuing internal goals, that is, through some goal-governed actions.

Generally, a series of sub-goals that y pursues to fulfil the function of her role is left up to her. This means that they are not merely copies of external goals. Once y has adopted the basic goals of the role, it is left up to her to reach them in a way appropriate to varying circumstances, that is, to formulate contingent sub-goals (autonomy).

3.2. "As-if" or implicit goals

Besides copy-goals and respondent ones, there is another relevant kind of "implementation" through which external goals succeed in controlling the behavior of goal-oriented systems. This is the case of what we might call "as-if goals". These are behavioral mechanisms (for example, a reactive device) that do not imply an explicitly represented goal. The system is oriented towards the production of certain results, as if it were regulated by an explicit internal goal; but in fact the goal is neither explicitly represented, nor has it been planned, decided upon, or reasoned about.

An implicit goal means that there is an external goal towards which the system's behavior is oriented, but this goal is neither directly represented within the mind, nor achieved by the system thanks to some correspondent internal goal. The system guarantees its achievement thanks to some mechanism, rule, or procedure, not thanks to internally represented goals. We will shortly analyse two crucial examples of non-respondent, implicit, that is "as-if", goals: reflexes, and the principle of utility.

Reflexes as as-if goals

Let us consider the flight reflex, for instance the case of a bird that, as soon as it recognises the silhouette of a falcon behind it, reacts with an immediate flight behavior. Or the case of a robot furnished with sensors and reactive motor behaviors that, in the presence of a certain acid, immediately reacts by abandoning the room.

We are speaking of stimulus-response mechanisms or, at least in the bird's case, of releasers: a key stimulus elicits a motor pattern which is more or less fixed. We can suppose that, in the case of the bird, the adaptive function that selected its flight behavior is that of avoiding predation. As for the robot, we may suppose that the designer's intention was to avoid a massive contact with a disruptive acid, and therefore to avoid disruption. The goal that both reactive systems pursue is avoidance of danger, or safety. It is "as if" the two systems were internally regulated by the goal of avoiding a specific danger and being safe. But we know that in fact this goal is not represented explicitly in either system. These goals are merely the (non-casual) outcome of the behavior activated, elicited by a given stimulus. As regards the robot, the true goal is represented within the designer's mind; as regards the bird, it is only a biological finality. In both examples, any true internal goal regulating, governing (through a feedback mechanism) a "purposive" behavior (Rosenblueth & Wiener, 1968) should be excluded. The usual goal-governed mechanism, in which a set-point or regulatory state is matched with the current world state and an action modifying the current world state is activated (as in the TOTE model by Miller et al., 1960), is replaced by a simple match between the conditions of the "action" (like the left part of a production rule) and the world stimuli: this match (rather than a match between the current state and the "desired" one) elicits the action.
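The contrast can be sketched in a few lines of Python (an illustration of ours under stated assumptions, not a model from the paper): the goal-governed loop tests the world against a represented goal-state, TOTE-style, while the reflex merely matches stimulus conditions; no "safety" goal appears anywhere in the reflex agent.

    # Illustrative contrast (our assumptions): goal-governed control vs. reflex.
    from typing import Optional

    def tote_agent(world: dict, goal_state: dict) -> list:
        """Test-Operate-Test-Exit: act until the world matches the represented goal."""
        actions = []
        while any(world.get(k) != v for k, v in goal_state.items()):  # Test
            for k, v in goal_state.items():
                if world.get(k) != v:
                    world[k] = v                      # Operate (toy action)
                    actions.append(f"make {k}={v}")
        return actions                                # Exit: current state == goal state

    def reflex_agent(stimulus: str) -> Optional[str]:
        """Condition-action rule: a match on stimuli, not on a desired state.
        Safety is only an external, 'as-if' goal of this behavior."""
        rules = {"falcon_silhouette": "flee", "acid_detected": "leave_room"}
        return rules.get(stimulus)

    print(tote_agent({"door": "open"}, {"door": "closed"}))  # ['make door=closed']
    print(reflex_agent("falcon_silhouette"))                 # flee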

Utility and profit as as-if goals

Reflexes like those seen above are low-level mechanisms that represent implicit goals. Some higher-level mechanisms may also seem to play the same role. Some meta-goals (goals about the internal explicit goals, as found in Wilensky, 1983) may work as as-if goals, apparently regulating the behavior but in fact only indirectly impinging on it.

We believe that the goal of ensuring the best resource allocation, achieving the greatest number of the most valuable goals at the minimum cost, is not a true internal goal in ordinary people. In our view, individuals act in view of concrete and specific goals (like being loved, eating, having a book published, making money), goals which are various and heterarchic: i.e., they are not means for a unique, totalitarian, top-level goal (pleasure, or utility).

Nevertheless, it is absolutely true that agents habitually choose the most convenient goal to achieve (given their limited cognitive capacities) among a set of goals possibly in conflict with one another. Our claim is that this result may be produced not necessarily by a goal, but by a mere mechanism or procedure of rational balance and choice. Agents have only the implicit goal of choosing the most convenient goal. They operate "as if" they had such an internal goal. In fact, it is only an external goal that shaped this mechanism of choice.
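As a toy illustration (ours, in Python; the paper proposes no formalism here, and the names and numbers are made up): the agent's goal set contains only concrete goals, and the economic principle lives entirely in the choice procedure, never appearing in the goal set as a represented goal.

    # Toy illustration: utility maximisation as an "as-if" goal.
    goals = [
        {"name": "eat", "value": 8, "cost": 2},
        {"name": "publish a book", "value": 9, "cost": 7},
        {"name": "make money", "value": 6, "cost": 3},
    ]

    def choose(goal_set):
        # Mere mechanism of rational balance: no goal "maximise value minus cost"
        # is represented anywhere in the goal set itself.
        return max(goal_set, key=lambda g: g["value"] - g["cost"])

    print(choose(goals)["name"])  # -> eat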

3.3. Finalities as external goals

So far, we have considered a true goal as a state that is always represented in at least one goal-governed system endowed with a series of controls and actions for achieving that state in the world. In doing so, we have been using a notion of goal that does not cover biological finalities (adaptive functions or phylogenetic goals) and social functions. However, these notions are not unrelated. There must be a concept which provides a bridge between them.

Biological functions are certainly not goals in the above-mentioned sense: neither nature, nor the species, nor selection, nor any other analogous entity is a goal-governed system in our sense. However, we claim that finalities work on organisms in a way that is analogous to external goals operating on objects or goal-governed systems, and what is needed is a theory of the translation of external into internal goals very close to the one we developed for true goals (see Figure 5). We cannot extensively discuss here biological functions and their relations with the internal goals of organisms (see Castelfranchi, 1982; Conte & Castelfranchi, 1995, ch. 8).


Figure 5. Internal goals and their external functions

We also suggest that all the claims about biological functions apply to social functions as well. We discuss social functions extensively in section 4, but let us clearly specify the analogy.

There is a genetic and explanatory link between external and internal goals, and there is a functional link; this is true for both biological and social functionalities. We mean that, in the social case, the macro-system's goals (which constitute its "functioning") run through their implementation in the micro-systems' internal goals. This implementation follows the general principles we have just sketched.

This is the general, abstract nature of the relationship between social entities (norms, values, roles, functions, groups, structures, etc.) and their mental counterparts.

Either the social entity α is explicitly represented and considered (at a conscious or an unconscious level) within the agent's mind, or it is implicit, not known, not represented as such: to produce the social entity α, the mental entity β is sufficient; α works through β.


3.4. Autonomous gears? The theory of cognitive and motivational autonomy

How can an autonomous intentional agent be used as a functional device in a social system?

How can a deliberative (intentional) agent be influenced and oriented by the functions, norms, and requests of the macro-level impinging on it (so as to guarantee the role and the performances functionally needed by the macro-system), while maintaining at the same time its autonomy, motivations, and self-interest?

The solution of this paradox is to be found precisely in the cognitive agent architecture, in its mind, and in what it means to be self-interested or self-motivated while remaining liable to social influence and control.


Figure 6. The relationship between social entities and their mental counterparts

We claim that an agent is Socially Autonomous if:

1) it has its own goals: endogenous, not derived from other agents' will;
2) it is able to make decisions concerning multiple conflicting goals (be they its own goals or goals adopted from outside);
3) it adopts goals from outside, from other agents; it is liable to influencing;
4) it adopts other agents' goals as a consequence of a choice among them and other goals;
5) it adopts other agents' goals only if it sees the adoption as a way of enabling itself to achieve some of its own goals (i.e. the Autonomous Agent is a self-interested or self-motivated agent);
6) it is not possible to directly modify the agent's goals from outside: any modification of its goals must be achieved by modifying its beliefs. Thus the control over beliefs becomes a filter, an additional control over the adoption of goals;
7) it is impossible to change the beliefs of an agent automatically. The adoption of a belief is a special "decision" that the agent takes on the basis of many criteria.

This protects its Cognitive Autonomy (Castelfranchi, 1995). Principles (5)-(7) are illustrated in the sketch below.
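Here is a minimal sketch of principles (5)-(7) (our illustration in Python, under the assumption that beliefs can be modelled as "instrumental" triples; none of these names come from the paper):

    # Illustrative sketch of social and cognitive autonomy (principles 5-7).
    class AutonomousAgent:
        def __init__(self, own_goals, beliefs):
            self.own_goals = set(own_goals)   # endogenous goals: not writable from outside
            self.beliefs = set(beliefs)       # e.g. ("instrumental", "brush_teeth", "please_mother")
            self.adopted_goals = set()

        def consider_belief(self, belief, source_trusted: bool) -> bool:
            # Principle 7: belief adoption is itself a decision (here a toy
            # credibility criterion); beliefs cannot be inserted directly.
            if source_trusted:
                self.beliefs.add(belief)
                return True
            return False

        def consider_goal(self, proposed_goal) -> bool:
            # Principles 5-6: a goal from outside is adopted only if current
            # beliefs say it is instrumental to one of the agent's own goals.
            if any(("instrumental", proposed_goal, g) in self.beliefs
                   for g in self.own_goals):
                self.adopted_goals.add(proposed_goal)
                return True
            return False

    child = AutonomousAgent(own_goals={"please_mother"}, beliefs=set())
    print(child.consider_goal("brush_teeth"))        # False: no reason to adopt it yet
    child.consider_belief(("instrumental", "brush_teeth", "please_mother"), True)
    print(child.consider_goal("brush_teeth"))        # True: adopted for the child's own reasons

In this toy picture, the mother can modify the child's goals only indirectly, by influencing his beliefs: exactly the filter that principle (6) describes.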


Let us" stress the importance of principle (5): an autonomous and rational agent makes someone else's goal its own (i.e., it adopts it) only if it believes it to be a means for achieving its own goals.

Notice that this postulate does not necessarily coincide with a "selfish" view of the agent. To be "self-interested" or "self-motivated" is not the same as being "selfish". The agent's "own" goals, for the purpose of which it decides to adopt certain aims of someone else, may include "benevolence" (liking, friendship, affection, love, compassion, etc.) or impulsive (reactive) behaviors/goals of the altruistic type. The child in our example adopts his mother's goal (that he brush his teeth) in order to make her happy.

Of course the agent, although understanding and accepting the societal requests, norms, or roles, does not necessarily understand or accept all the societal plans or functions. Society delegates to the agent just sub-goals of its explicit or implicit plans. And very frequently it does not rely on the agent's "cooperation" (common goal and shared mind/plan) but on its self-interested adoption for private reasons.

4. Social Functions and Cognition

The case of social functions is very different from that of social values or norms. Of course, functional behavior too requires some cognitive counterpart or mediator, but in this case the external goal impinging on the behavior is not understood or explicitly represented as a goal: we just have an internal goal unconsciously serving the external function (see 3.1). In other words, the problematic issue in the theory of social functions is the relationship between social functions and the intentions governing the functional behavior.

Elster (1982) is right when he claims that for a functional explanation to be valid "it's indeed necessary that a detailed analysis of the feedback mechanism is provided; in the huge majority of the cases this will imply the existence of some filtering mechanism thanks to which the advantaged agents are both able to understand how these consequences are caused, and have the power of maintaining the causal behavior". However, he is wrong in concluding that this is just a complex form of causal/intentional explanation: "it is meaningless to consider it as a 'functional' explanation. Thus, functional explanation is in an unfortunate dilemma: either it is not a valid form of scientific explanation (it's arbitrary, vague, or tautological), or it is valid, but is not a specifically functional explanation" (Elster, 1982). In other terms, according to Elster a theory of social functions is either superfluous or impossible among intentional agents.

By contrast, the real point is precisely that we cannot build a correct theory of social functions without a good theory of mind, and specifically of intentions, discriminating intended effects from unintended (even if aware) effects; without a good theory of associative and reinforcement learning operating on cognitive representations; and, finally, without a top-down, and not only a unilateral bottom-up (from micro to macro), view of the relationship between behavior and functions. We need a theory of the cognitive mediators and counterparts of social functions. The aim of this section is to analyse this crucial relationship.

This relationship is crucial for at least two reasons:

a) on the one side, no theory of social functions is possible and tenable without clearly solving this problem;
b) on the other side, without a theory of functions emerging among cognitive agents, social behavior cannot be fully explained.

In our view, current approaches to cognitive agent architectures (in terms of beliefs and goals) allow for a solution of this problem, though perhaps we need some further treatment of emotions. One can explain quite precisely this relation between cognition and the emergence and reproduction of social functions. In particular, functions install and maintain themselves parasitically on cognition: functions install and maintain themselves thanks to and through the agents' mental representations, but not as mental representations, i.e. without being known or at least intended.

As we said, for a Social Norm to work as a Social Norm and be fully effective, agents should understand it as a Social Norm. On the contrary, the effectiveness of a Social Function is independent of the agents' understanding of this function of their behavior:

a) the function can arise and maintain itself without the awareness of the agents;
b) if the agents intended the results of their behavior, these would no longer be "social functions" of their behavior but just "intentions".

So, we accept Elster's crucial objection to classical functional notions, but we think that it is possible to reconcile intentional and functional behavior. With an evolutionary view of "functions" it is possible to argue that intentional actions can acquire unintended functional effects. Let us rephrase and develop Elster's problem as follows.

Since functions should not be what the observer likes or notices, but should indeed be observer-independent and based on self-organising and self-reproducing phenomena, "positivity" can consist just in this. Thus we cannot exclude phenomena that could be bad, i.e. negative, from the observer's point of view, from the involved agents' point of view, or from the OverSystem's point of view. We cannot exclude "negative functions" (Merton's "dysfunctions") from the theory: perhaps the same mechanisms are responsible for both positive and negative functions.

If a system acts intentionally and on the basis of an evaluation of the effects relative to its internal goals, how is it possible that it reproduces bad habits thanks to their bad effects? And, even more crucially, if a behavior is reproduced thanks to its good effects, good relative to the goals of the agent (individual or collective) who reproduces them by acting intentionally, there is no room for "functions": if the agent appreciates the goodness of these effects and the action is repeated in order to reproduce them, they are simply "intended". The notion of intention seems sufficient, and invalidates the notion of function.


We argue that, to solve this problem, it is not sufficient to combine deliberation and intentional action (with its intended effects) with some reactive, rule-based, or associative behavioral layer, let some unintended social function emerge from this layer, and let the feedback of the unintended reinforcing effects operate on this layer (van Parijs, 1982). The real issue is precisely the fact that the intentional actions of the agents give rise to functional, unknown collective phenomena (e.g. the division of labour), not (only) their unintentional behaviors. How can unknown functions and cooperation be built on top of intentional actions and intended effects? How is it possible that positive results, thanks to their advantages, reinforce and reproduce the actions of intentional agents, and self-organise and reproduce themselves, without becoming simple intentions (Elster, 1982)? This is the real theoretical challenge for reconciling emergence and cognition, intentional behavior and social functions, planning agents and unaware cooperation.

A possible solution to this problem is to search for a more complex form of reinforcement learning, based not just on classifiers, rules, associations, etc., but on the cognitive representations governing the action, i.e. on beliefs and goals.

In this view "the consequences of the action, which may or may not have been consciously anticipated, then modify the probability that the action will be repeated next time the input conditions are met" (Macy, 1998). More precisely functions are just effects of the behavior of the agents, that go beyond the intended effects (i.e., they are not intended) and succeed in reproducing themselves because they reinforce the beliefs and the goals of the agents that caused that behavior. Then:

- First, behavior is goal-governed and reason-based, i.e. it is intentional action. The agent bases its goal-adoption, its preferences and decisions, and its actions on its beliefs (this is the definition of "cognitive agents").

- Second, there is some effect of those actions that is unknown or at least unintended by the agent.

- Third, there is circular causality: a feedback loop from those unintended effects that increments and reinforces the beliefs or the goals that generated those actions.

- Fourth, this "reinforcement" increases the probability that in similar circumstances (activating the same beliefs and goals) the agent will produce the same behavior, thus "reproducing" those effects (Figure 7).

- Fifth, at this point such effects are no longer "accidental" or unimportant: although remaining unintended, they are teleonomically produced (Conte & Castelfranchi, 1995, ch. 8). That behavior exists (also) thanks to its unintended effects; it was selected by these effects, and it is functional to them. Even if these effects are negative for the goals or the interests of (some of) the involved agents, their behavior is "goal-oriented" to these effects.

Notice that the agents do not necessarily intend or suspect that they are reinforcing their beliefs or their goals, and thereby their own behavior and the behavior of the others. This is the basic mechanism (18).

(18) For a detailed analysis and typology see Castelfranchi (1997).
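The five steps can be simulated in miniature (our sketch in Python; the numbers and the update rule are assumptions, not the paper's model): an unintended side effect feeds back on the belief that generated the action, so the probability of repeating the behavior, and hence of reproducing the effect, grows without that effect ever being represented as a goal.

    # Miniature simulation of the five-step loop; all parameters are made up.
    import random
    random.seed(1)

    belief_strength = 0.5          # step 1: belief "doing A serves my goal G"
    reproductions = 0

    for _ in range(50):
        if random.random() < belief_strength:   # intentional action based on the belief
            reproductions += 1
            # Step 2: the action also has an effect the agent never intended
            # (e.g. a contribution to the division of labour); it is not a goal.
            unintended_benefit = 0.05
            # Steps 3-4: circular causality; the unintended effect reinforces
            # the generating belief, raising the probability of repetition.
            belief_strength = min(1.0, belief_strength + unintended_benefit)

    # Step 5: the behavior now exists (also) thanks to its unintended effects;
    # they are teleonomically produced while remaining unintended.
    print(f"belief strength after learning: {belief_strength:.2f}")
    print(f"behavior reproduced {reproductions} times")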


4.1. Concluding remarks

We believe that no social and cultural phenomenon can be deeply accounted for without explaining how it works through the agents' minds. In fact, since any social phenomenon is the result of the agents' concurrent behaviors, and since behavior is controlled and oriented by mental representations, any social phenomenon is the indirect result of the agents' mental representations: it works through, and is reproduced thanks to, these mental representations. However, the relationship between social phenomena and mental representations is quite complex and indirect. The agents do not understand, negotiate, and plan all their collective activities and results. Modelling mind is necessary but not sufficient for understanding social phenomena. We have sketched some theory for reconciling emergence and cognition, planning and self-organisation, intentions and functions. We believe that this reconciliation is the main challenge of the coming years at the frontier between the cognitive and the social sciences.

Figure 7. (Dis)functional effects of intentional acts. (The figure distinguishes intended effects, unintended effects, and functional unintended effects.)

References

Becker, H. (1968) Value, in D.L. Sills (Ed.), International Encyclopedia of the Social Sciences, 16 (New York, The Macmillan Company & The Free Press), pp. 743-745.
Bicchieri, C. (1990) Norms of cooperation, Ethics, 100, pp. 838-861.
Bond, A.H. (1989) Commitments. Some DAI insights from Symbolic Interactionist Sociology, in AAAI Workshop on DAI (Menlo Park, AAAI Inc.).
Bond, A.H. & Gasser, L. (Eds.) (1988) Readings in Distributed Artificial Intelligence (San Mateo, Kaufmann).
Castelfranchi, C. (1982) Scopi esterni, Rassegna Italiana di Sociologia, XXIII, pp. 329-381.
Castelfranchi, C. (1990) Social power: A missed point in DAI, MA and HCI, in Y. Demazeau & J.P. Mueller (Eds.), Decentralized AI (Amsterdam, Elsevier/North-Holland).
Castelfranchi, C. (1995) Guarantees for Autonomy in Cognitive Agent Architecture, in M.J. Wooldridge & N.R. Jennings (Eds.), Intelligent Agents I (Berlin, LNAI-Springer).
Castelfranchi, C. (1997) Challenges for agent-based social simulation. The theory of social functions, invited talk at SimSoc'97, Cortona, IP-CNR.
Castelfranchi, C. (1998) Modelling Social Action for AI Agents, Artificial Intelligence, 103, pp. 157-182.
Castelfranchi, C. (1999) Prescribed Mental Attitudes in Goal-Adoption and Norm-Adoption, AI & Law, 7, pp. 37-50.
Castelfranchi, C. (forthcoming) Simulating with cognitive agents: the importance of Cognitive Emergence, in R. Conte, N. Gilbert & J. Sichman (Eds.), Multi Agent Systems and Agent Based Social Simulation, Proceedings of MABS (Berlin, Springer-Verlag).
Castelfranchi, C. & Conte, R. (1992) Emergent functionality among intelligent systems: Cooperation within and without minds, AI & Society, 6, pp. 78-93.
Castelfranchi, C. & Conte, R. (1996) The Dynamics of Dependence Networks and Power Relations in Open Multi-Agent Systems, in Proceedings of COOP'96 (Juan-les-Pins, INRIA).
Castelfranchi, C. & Conte, R. (1998) Limits of Economic and Strategic Rationality for Agents and MA Systems, in M. Boman (Ed.), Robotics and Autonomous Systems, Special Issue on "Multi-Agent Rationality", 24, pp. 127-139.
Castelfranchi, C. & Falcone, R. (1998) Towards a Theory of Delegation for Agent-based Systems, in M. Boman (Ed.), Robotics and Autonomous Systems, Special Issue on "Multi-Agent Rationality", 24, pp. 141-145.
Castelfranchi, C., Miceli, M. & Cesta, A. (1992) Dependence relations among autonomous agents, in Y. Demazeau & E. Werner (Eds.), Decentralized AI 3 (Amsterdam, Elsevier).
Castelfranchi, C. & Parisi, D. (1984) Mente e scambio sociale, Rassegna Italiana di Sociologia, XXV, pp. 45-72.
Cohen, P. & Levesque, H. (1991) Teamwork (Menlo Park, SRI International, Technical Report).
Conte, R. & Castelfranchi, C. (1993) Norms as mental objects. From normative beliefs to normative goals, in Proceedings of the 5th European Workshop on MAAMAW, Neuchatel, LNAI 957 (Berlin, Springer), pp. 186-98.
Conte, R. & Castelfranchi, C. (1994) Mind is not enough. Precognitive bases of social action, in J. Doran & N. Gilbert (Eds.), Simulating societies: The computer simulation of social processes (London, UCL Press).
Conte, R. & Castelfranchi, C. (1995) Cognitive and Social Action (London, UCL Press).
Cranach, M. von, Kalbermatten, U., Indermuhle, K. & Gugler, B. (1982) Goal-directed action (London, Academic Press).
Elster, J. (1982) Marxism, functionalism and game-theory: the case for methodological individualism, Theory and Society, 11, pp. 453-81.
Gasser, L. (1991) Social conceptions of knowledge and action: DAI foundations and open systems semantics, Artificial Intelligence, 47, pp. 107-38.
Gilbert, G.N. (1995) "Emergence" in social simulation, in G.N. Gilbert & R. Conte (Eds.), Artificial societies: The computer simulation of social life (London, UCL Press).
Grosz, B. (1996) Collaborative Systems, AI Magazine, Summer 1996, pp. 67-85.
Grosz, B. & Kraus, S. (1996) Collaborative plans for complex group action, Artificial Intelligence, 86, pp. 269-357.
Jennings, N.R. (1993) Commitments and conventions: The foundation of coordination in multi-agent systems, The Knowledge Engineering Review, 3, pp. 223-50.
Kluckhohn, C. (1951) Values and value orientations in the theory of action (Cambridge, Cambridge University Press).
Köhler, W. (1938) The place of value in a world of facts (New York, Liveright).
Krech, D., Crutchfield, R.S. & Ballachey, E.L. (1968) Individual in society: A textbook of social psychology (New York, McGraw-Hill).
Levesque, H.J., Cohen, P.R. & Nunes, J.H.T. (1990) On acting together, in Proceedings of the Eighth National Conference on Artificial Intelligence, AAAI-90 (Boston, AAAI/MIT Press).
MacIntyre, A. (1981) After virtue: A study in moral theory (Notre Dame, University of Notre Dame Press).
Macy, M. (1998) Social Order in Artificial Worlds, JASSS, 1. http://www.soc.surrey.ac.uk/JASSS/JASSS.html
Malone, T.W. (1987) Modelling coordination in organizations and markets, Management Science, 33, pp. 1317-32.
Malone, T.W. (1988) Organizing information-processing systems: Parallels between human organizations and computer systems, in W. Zachary, S. Robertson & J. Black (Eds.), Cognition, Cooperation, and Computation (Norwood, Ablex).
Maslow, A.H. (Ed.) (1959) New knowledge in human values (New York, Harper & Bros).
Mataric, M. (1992) Designing Emergent Behaviors: From Local Interactions to Collective Intelligence, in Simulation of Adaptive Behavior 2 (Cambridge, MIT Press).
Mayr, E. (1982) Learning, development and culture, in H.C. Plotkin (Ed.), Essays in evolutionary epistemology (New York, John Wiley).
McFarland, D. (1983) Intentions as goals, open commentary on Dennett, D.C., Intentional systems in cognitive ethology: the "Panglossian paradigm" defended, The Behavioral and Brain Sciences, 6, pp. 343-90.
Miceli, M. & Castelfranchi, C. (1989) A Cognitive Approach to Values, Journal for the Theory of Social Behaviour, 2, pp. 169-94.
Miceli, M. & Castelfranchi, C. (1992) La cognizione del valore (Milano, Franco Angeli).
Miceli, M. & Castelfranchi, C. (forthcoming) The role of evaluation in cognition and social interaction, in K. Dautenhahn (Ed.), Human Cognition and Social Agent Technology (Amsterdam, John Benjamins).
Miller, G., Galanter, E. & Pribram, K.H. (1960) Plans and the structure of behavior (New York, Holt, Rinehart & Winston).
Parsons, T. (1951) The social system (Glencoe, The Free Press).
Pepper, S.C. (1958) The sources of value (Berkeley, University of California Press).
Rao, A.S., Georgeff, M.P. & Sonenberg, E.A. (1992) Social plans: A preliminary report, in E. Werner & Y. Demazeau (Eds.), Decentralized AI 3 (Amsterdam, Elsevier).
Rokeach, M. (1974) The nature of human values (New York, The Free Press).
Rosenblueth, A., Wiener, N. & Bigelow, J. (1968) Behavior, Purpose, and Teleology, in W. Buckley (Ed.), Modern systems research for the behavioral scientist (Chicago, Aldine).
Rosenblueth, A. & Wiener, N. (1968) Purposeful and Non-Purposeful Behavior, in W. Buckley (Ed.), Modern systems research for the behavioral scientist (Chicago, Aldine).
Shapira, Z. (1997) Organizational Decision Making (Cambridge, Cambridge University Press).
Sichman, J. (1995) Du Raisonnement Social Chez les Agents, PhD thesis, Polytechnique-Laforia, Grenoble.
Steels, L. (1990) Cooperation between distributed agents through self-organisation, in Y. Demazeau & J.P. Muller (Eds.), Decentralized AI (Amsterdam, Elsevier).
Tuomela, R. (1993) What is Cooperation, Erkenntnis, 38, pp. 87-101.
Tuomela, R. & Miller, K. (1988) We-intentions, Philosophical Studies, 53, pp. 367-389.
van Parijs, P. (1982) Functionalist marxism rehabilitated. A comment on Elster, Theory and Society, 11, pp. 497-511.
Wilensky, R. (1983) Planning and Understanding. A Computational Approach to Human Reasoning (Reading, Addison-Wesley).
Williams, R.M., Jr. (1964) The concept of values, in J. Gould & W.L. Kolb (Eds.), A dictionary of the social sciences (Glencoe, The Free Press).

Acknowledgements. The article is a new version of a paper presented at the Workshop of the European Science Foundation entitled "Cognitive Theory of Social Action", held in Torino from 11 to 13 June 1998 and organized by the Rosselli Foundation as the first initiative of the European Science Foundation scientific network "Human Reasoning and Decision Making".

The ideas presented in this paper are the product of a long-term collective effort of the IP-CNR group on 'Artificial Intelligence, Cognitive Modeling and Interaction' - Social Simulation Project (this justifies the many references to our group's work). In particular, I would like to acknowledge the contribution of Rosaria Conte (first author of our book on this topic) and of Maria Miceli, co-author of several works in this area. I would like to thank Maria, as well as the participants in the Torino meeting and an anonymous referee of this journal, for their comments.
