
Artificial Agents: Philosophical and Legal Perspectives

Samir Chopra and Laurence White

© 2007


Table of contents

Chapter 2: Attribution of Knowledge to Artificial Agents and their Principals

1. Introduction
2. Agents’ Knowledge
   2.1 Justified True Belief
   2.2 Toward a More Pragmatic Understanding of Knowledge
   2.3 Some Preliminary Examples
   2.4 The Pragmatic Analysis of Artificial Agents’ Knowledge
   2.5 Artificial Agents’ Knowledge and Naturalized Epistemology
   2.6 The Pragmatic Analysis Illustrated
   2.7 Objections
   2.8 Application to the Courtroom Situation
3. The Legal Doctrine of Attributed Knowledge
   3.1 A Duty to Communicate?
   3.2 Attribution of Knowledge to Companies
   3.3 Artificial Agents as Agents for Knowledge Imputation Purposes
   3.4 Objections
4. Conclusion


Chapter 2

Attribution of knowledge to artificial agents and their principals

1. Introduction

In this Chapter, we consider the problem of attribution of knowledge to artificial agents and their

legal principals. When can we say that an artificial agent X knows a proposition p and that its

principal can be attributed with the knowledge of p? We offer a pragmatic analysis of knowledge

attribution for answering the first of these questions, and propose that the legal theory of the

attribution of knowledge of human agents can be applied to the case of artificial agents and their

principals.

An agent’s principal is its employer, or any other legal person engaging the agent to

carry on transactions on its behalf. A problem commonly faced by courts is deciding when to

attribute the knowledge of an agent to its principal. A body of law has grown up to deal with

this issue. But if the agent in question is an artificial one, how should the courts decide a) whether the agent knows the proposition in question; and b) whether this knowledge can be attributed to the

agent’s principal? We need a philosophical account of knowledge attribution that does justice to

the first – more purely philosophical – question, and thereby aids in the resolution of the second

– more purely legal – problem. Conversely, the legal resolution of this issue will aid us in a

solution of the philosophical problem of knowledge attribution, just as legal discourse on

personhood can clarify philosophical debates over the nature of personal identity. As the

delegation of tasks to artificial agents increases, so, we believe, will cases that present the need


for decisions on these questions, and render more urgent the need for a solution.

The plan of this chapter is as follows. In Section 2 we present our analysis of knowledge

attribution and illustrate its plausibility by examples and arguments. In Section 3 we describe

the legal problem of attributing knowledge held by agents to their principals, critically

examine the relevant current legal doctrine, and show how our analysis is of help.

2. Agents’ Knowledge

2.1 Justified True Belief

Consider the following knowledge claim: X knows p. This sort of attribution is one of

propositional knowledge, which should be treated separately from other kinds of knowledge

such as knowledge of a place or a person (which might be described as knowledge by acquaintance (Russell 1984)) and knowing how to perform a task (knowledge by competence (Ryle 2000)). Philosophers have long considered the conditions under which such a claim could

be made. Or, more accurately, they have striven to determine the necessary and sufficient

conditions under which such a claim would be true. In analyses of knowledge, this ideal is

reached when every case that meets the conditions of the analysis is a case in which X knows

that p, thus showing the sufficiency of the conditions, and when every case in which X knows

that p is one in which the analysis’ conditions are met, thus showing their necessity.

Plato’s Theaetetus famously analyzed knowledge as justified true belief: i.e., X knows p if

and only if:

1. X believes p;

2. p is true; and

3. X is justified in believing p.
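Stated compactly (the symbols here are ours, introduced only for brevity: K_X p for “X knows p”, B_X p for “X believes p”, and J_X p for “X is justified in believing p”), the analysis is the biconditional

$$ K_X\,p \;\Longleftrightarrow\; p \,\wedge\, B_X\,p \,\wedge\, J_X\,p $$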


The truth condition here captures the intuition that an agent could not know a proposition that

was false; the belief condition the intuition that the agent must bear a particular intentional

attitude—that of belief—to the proposition in question; and the justification condition attempts

to rule out epistemic luck, relying on the intuition that accidental cases of acquiring true belief

do not count as knowledge, i.e., that knowledge must have been acquired in ‘the right way’. So

for instance, if I believe that there are aliens on the moon because I am convinced of this fact by

reading a supermarket tabloid, and it turns out that NASA does discover aliens there next week, we would be reluctant to attribute knowledge of this fact to me; my belief does not appear to bear the

right relation to the proposition “there are aliens on the moon”. The justification

condition requires that the belief whose content is the proposition in question must have the

additional property of being a justified one (the nature of this relation of justification to the

belief in question is notoriously hard to cash out). It does not require that a justified belief is one

where the agent that holds it is actively involved in the act of justification of that belief.

But justified true belief might fail to be knowledge as well. The agent might be

implicated in the kind of situation imagined by Gettier (1963), who showed by a series of

counterexamples that agents could meet all three conditions and yet not be said to know the

proposition in question. Consider the following: Smith has a justified belief that “Jones owns a

Ford” because he has seen Jones driving a Ford all over town. Smith therefore justifiably

concludes “Jones owns a Ford, or Brown is in Barcelona”. Here Smith has no knowledge about

the location of Brown but the inference to the disjunctive proposition is perfectly valid. Now, it

turns out that Jones does not really own a Ford but was merely driving his friend’s car for a

few days. However, by coincidence, Brown really is in Barcelona. Smith’s original inference

still holds, but we would be loath to ascribe knowledge of the proposition in question to him. Thus,


according to these counterexamples, the justified true belief analysis of knowledge fails to

provide sufficiency conditions for knowledge. Despite considerable effort (Shope 1983) no

satisfactory analysis of knowledge has emerged that does justice to these or newer

counterexamples (Chisholm 1989). So for a justified true belief analysis of knowledge to rule

out epistemic luck in this broader sense, it must be extended by a fourth condition that succeeds

in preventing justified true belief from being subject to the kind of counterexamples Gettier

constructed.

2.2 Toward a More Pragmatic Understanding of Knowledge

The philosophical literature is replete with attempts to provide such a fourth condition – and

counterexamples that suggest these amendments do not work (Steup and Sosa 2005).

Armstrong’s “no false belief” condition suggests beliefs should not be based on another belief

which itself is false. Thus, a knowledge attribution fails if the belief in question is based on a

false belief (Armstrong 1973). Goldman’s “causal connection” condition suggests there must

be a causal connection—a particular causal chain of events—between the fact known and the

belief (Goldman 1967). In response to Gettier’s original counterexample, Goldman would

suggest there is no appropriate causal connection between the fact that Brown is in Barcelona and Smith’s belief in the disjunction (that fact figures in Smith’s belief only via a disjunction inferred from a justified but false belief, which is thus

an “inappropriate connection”). Dretske’s “conclusive reasons” condition suggests there must

be a reason for the belief such that the reason would not obtain if the proposition believed were false (Dretske

1971). If I believe my friend is walking towards me, the reason for believing he is would not

exist if the belief were false (that is, if in fact someone else were walking towards me).

Nozick suggests knowledge “tracks the truth” (i.e., if the proposition had not been true, then the


agent would not have believed it) (Nozick 1981). Perhaps the weakest of these conditions is

the “defeasibility” condition, which suggests a justified true belief counts as knowledge so long as its justification is not defeated by further evidence to the contrary (Lehrer and Paxson 1969). Each of these attempts to repair the damage done by

the Gettier cases has been rebuffed by analyses that show them to be either too strong (in ruling

out cases where we would plausibly make knowledge attributions) or too weak (in accepting

cases where we would not want to make a knowledge attribution).

This history of failed attempts to provide satisfactory analyses of justification or evade

Gettier-type counterexamples has long been held to suggest that knowledge attribution is ripe

for a treatment grounded in a pragmatic understanding (Dretske 1981). Such an approach would

resist treating the analysis of knowledge as being either about an objective property of “knowing” or

about a special kind of belief that has attained the status of certainty. Knowledge is

considerably demystified by such analyses and becomes, in turn, more amenable to a

naturalistic understanding (Quine 1969). The special nature of knowledge attributions is also

debunked if we accept the Wittgensteinian notion that they refer not to mental states of the

agent, but instead to the activities the agent engages in (Bloor 1983). To know “the window is

broken” is to either perform a particular task with the statement “the window is broken” or to

engage in activities that involve the broken window. Wittgenstein’s analysis avoids the

difficulties in attempting to provide a conceptual analysis or definition of propositional

knowledge by examining how the word “knowledge” derives its meaning from its use in social

contexts.

Thus, contemporary epistemology is most distinctly marked by its rejection of the

requirement of absolute certainty, a move which might seem inevitable given the naturalistic

project its theorizers take themselves to be engaged in. Contemporary epistemologists are more


likely to be fallibilists, reluctant to attribute infallibility when they do attribute knowledge. The

most salient contemporary example of fallibilism is contextualism, according to which a claim

of knowledge is always indexed to some epistemic standard at play (Lewis 1996). In such an

analysis, standards vary constantly for making knowledge claims, and knowledge claims are

explicitly or implicitly indexed by standards in effect. Hence it is not uncommon to change a

knowledge claim when a question is raised about the knowledge in question: the claim “I know

I filled out my tax form correctly” may be revised on my being informed that the tax authority has decided to increase the number of audits by a hundred percent this year. The contextualist,

then, treats knowledge attributions as statements carrying a hidden indexical, that of the

standard at play in the conversation. A claim of knowledge that holds in one context might

not hold in another. Such sympathy for the context-sensitivity of knowledge claims is clearly

built into attributions of knowledge in legal contexts, as we will see below.

Our analysis below is a contextualist one: attributions made within its framework will

always be indexed to some standard in effect. Our attributions of knowledge will necessarily be

imprecise, subject to caveats and disclaimers, and intended only to meet legal standards, not metaphysical ones. Each standard will become visible in the conditions that implicitly invoke it. Further, in a continuation of our methodological strategy of the first chapter, our account

for artificial agents will implicitly utilize the intentional stance; an agent’s believing a

proposition will be revealed by a particular set of performative capacities. If we consider the

problem of attributing beliefs solved by adopting the intentional stance towards artificial agents, then an analysis of knowledge for artificial agents need only further provide an analysis of

justification, of how the agent’s information was acquired in some principled way, because

more often than not, a belief lacking in such justification is false. Finally, if an analysis of


knowledge is framed in terms of capacities and behaviours then the composition of the agent in

question makes no difference to a knowledge attribution made to it.

In what follows, we will substitute our own version of the justification condition and

also suitably supplement the traditional analysis with a condition relevant to the abilities of

artificial agents.

2.3 Some Preliminary Examples

Before providing a formal statement of our analysis of knowledge for artificial agents, we

provide some examples that illustrate the intuitions at its core:

1. As I walk down the street, I am asked by a passer-by, “Excuse me, do you know the

time?” I answer, “Yes” as I reach for my cellphone to check and inform him what time it

is. This example, due to Clark (2003), shows we take ourselves to know those items of

information that are easily accessible and can be easily used. This example is readily

extended to those cases when we are asked if we know a friend’s telephone number that

is stored in our cellphone’s memory card. While there is a sense of knowledge that

requires no external aids, even this relatively strict sense does not require knowledge

attributions to be only accorded to those beliefs present in physical memory. Consider

the futuristic example of a bionic memory extension of the brain capable of being

disconnected; we suggest we would say the contents of this memory are known in this

strict sense when it is connected, and not known when it is removed. Clark’s example is

part of an extended argument for distributed cognition through external tools and

memory stores not confined to the inside of our craniums. As a further elaboration of the

intuitions behind the suggestion that the contents of easily accessible memory stores

count as knowledge, imagine someone, who knows that I am carrying a cellphone,


pointing to me and suggesting that I should be asked for the time for “he knows what

time it is”. Thus, information at hand can be described as information that we “know”.

The similarity of these examples to cases that do not involve external memory aids

should be clear, for I could not claim to know my friends’ telephone numbers if, on being

asked, I were to reply, “I can’t remember”. Similarly, in the context of logical analyses

of propositional knowledge, Parikh (1995) has described knowledge as those statements that

the agent is capable of deriving, from a given set of premises, within reasonable

computational limits as opposed to those implied by epistemic closure (if X knows p,

and if X knows that p implies q, then X knows q). Parikh’s account limits knowledge to

those q’s that are derivable within these computational limits and thus provides an

analysis of knowledge in terms of the ability of the agent to ‘access’ (derive) the

proposition.

2. A friend wants to buy me a book as a gift. He asks me for my shipping address so that

he can send me the book. I direct him to my wish-list at Amazon.com saying, “Amazon

knows my shipping address”. Indeed, the agent operating the shopping site on behalf of

Amazon seems to know my address. When my friend has decided which books he wants

to buy, he pays with his credit card, and picks the home address shipping option (to

maintain my privacy Amazon will not show my shipping address to my friend; he need

not supply this information to the shopping agent on the site). Amazon generates a

shipping invoice complete with shipping address. The agent is able to discharge its

functions using that piece of information. I had stored the shipping address on Amazon

precisely for such future use. I could store an alternate shipping address and forget its

details, since Amazon ‘knows’ it and will ship to it if anyone decides to send me a book


at that address. Note that aggregated geographical information could be obtained from

Amazon’s database (by Amazon.com programmers) by writing specific queries (e.g.,

“Which customers live in postal code 11218?”). In a weaker sense, then, Amazon.com could also be said to know the answer to this query (see the sketch following these examples). Of course, unless such a query

functionality is already built into the artificial agent operating Amazon’s shopping

website, the agent itself (as opposed to the humans who write and run the query) knows

no such thing. Informational content that can be used in particular functional roles is plausibly viewed as playing the role of knowledge; the entity using it, then, ‘knows’ the

information in question.

3. I have to attend a meeting at the university campus branch located in the city centre.

With directions for the meeting written down on a piece of paper that I keep in my

pocket, I head out the office door. As I do so, my office-mate asks me “Do you know

where the meeting will be held?” I answer, “Yes” as I hurry towards the next train. Here,

Gricean pragmatics (Grice 1975) is at play: if I said I did not know the meeting’s

location, I would be misleading my questioner. This example is crucial in showing that

knowledge claims are connected to the pragmatics of speech. To deny a valid knowledge

attribution in this case would be to say something misleading or something whose

semantic value is considerably less than the knowledge claim. This example captures

too, the Wittgensteinian intuition that part of the meaning of the locution “X knows that

P” is determined by the conditions under which people would be willing to make the

claim or find it unreasonable not to make that claim. Such an assessment need not be

linked to any particular action being taken by the agent to whom the requisite knowledge

is being attributed. Consider the act of pointing at a friend who walks by and saying “He


knows what I did last summer” (here “What I did last summer” is shorthand for a set of

sentences like “I went hiking; I did a little writing” etc). My friend, the one being

pointed at, might not be thinking about my activities last summer, or taking any actions

or speaking. Instead we indicate that we have evidence this knowledge has been

obtained by him; we indicate a dispositional property of my friend.

Knowledge claims, then, speak to a bundle of capacities, functional abilities, and dispositions;

their usage seems intimately connected to a pragmatic semantics.
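The contrast drawn in the second example, between information the shopping agent can itself retrieve and use and information that programmers could extract only by writing new queries, can be made concrete in a minimal sketch. The schema, names and data below are purely illustrative assumptions, not a description of any actual retailer’s systems:

```python
import sqlite3

# Hypothetical schema and data, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, address TEXT, postal_code TEXT)")
db.execute("INSERT INTO customers VALUES ('A. Reader', '12 Elm St, Brooklyn', '11218')")

class ShoppingAgent:
    """Models only the query functionality built into the shopping agent itself."""
    def __init__(self, conn):
        self.conn = conn

    def shipping_address(self, name):
        # The agent can retrieve and use this item to discharge its shipping
        # function; on the pragmatic analysis it 'knows' the customer's address.
        row = self.conn.execute(
            "SELECT address FROM customers WHERE name = ?", (name,)).fetchone()
        return row[0] if row else None

agent = ShoppingAgent(db)
print(agent.shipping_address("A. Reader"))

# An ad hoc aggregate query written by a programmer, outside the agent's own
# functionality: in the weaker sense, the company rather than the agent 'knows' this.
print(db.execute(
    "SELECT COUNT(*) FROM customers WHERE postal_code = '11218'").fetchone()[0])
```

On the pragmatic analysis, the first query illustrates knowledge attributable to the agent itself; the second illustrates the weaker sense in which the company, through its programmers, could be said to know the aggregate fact.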

2.4 The Pragmatic Analysis of Artificial Agents’ Knowledge

In our analysis of knowledge attribution, an artificial agent X knows a proposition p if and only

if:

1. p is true;

2. X has ready access to the informational content of p;

3. X can make use of the informational content of p;

4. X must have acquired the informational content of p in non-accidental fashion (‘the

right way’); and

5. X acquired this informational content using a reliable cognitive architecture.

These conditions follow from the fact that we have decided to adopt the intentional stance

toward artificial agents that display evidence of rationality. That done, we have rewritten the

conditions of the justified true belief analysis in such a way that they do not explicitly reference

beliefs but rather functions or capacities that make the intentional stance the correct one to

adopt.

First, through Condition 1, we retain the truth condition of the original analysis. We then

introduce four new conditions. Condition 2 uses the notion of ready access to the information


content of the proposition p; an agent must have some way to conveniently access this

information, without requiring convoluted or difficult procedures. In the case of artificial

agents, the item must be in some readily available memory store. Thus, for example, an

artificial agent required to conduct gigantic searches of its disk storage, or submit multiple

computationally expensive queries before being able to locate a particular item would be

pushing the limit of the plausibility of such ascriptions.

Condition 3 requires the agent to display its access to the informational content of p by

being able to use it to discharge its functions, or, more broadly, to display a competence

contingent upon having such access. It informally entails conditionals such as “If X did not

know p, then Y” where Y is a statement like “X is not able to exercise capacity C”. If X is able to

exercise capacity C, it shows its knowledge of p. Dispositional (Levi and Morgenbesser 1964)

or functionalist accounts (Armstrong 1980) of knowledge have provided similar conditions in

the past. While condition 3 on its own might seem sufficient, as X could only make use of the informational content of p if it had access to p, condition 2 is desirable in placing some constraints on how this information should be accessible for further use. It is not enough that p be accessible for use by the agent, perhaps only through the kinds of computationally expensive procedures noted above; it should be readily accessible in a way that facilitates ease of

use. Conditions 2 and 3 together replace the belief condition and point to the appropriate

conditions for the adoption of the intentional stance toward the artificial agent; if we are to take

the intentional stance towards an agent these are the capacities the agent must display over and

above other appropriate behavioural displays.

Condition 4 requires us to show that an item of knowledge attributed to an agent is one

that was acquired in the right way, for the right reasons, that it was not just dropped into its


memory store by mistake, or by fluke. In essence the fourth condition says the agent acquired

the information in question in the course of its normal functioning, that it acquired the

information by well-defined procedures, that another agent similar in architecture under the

same circumstances would acquire it too. In general, we take condition 4 as making an

important nod toward the agent’s epistemology, which is made explicit in condition 5.

Condition 5 is similar to the so-called reliabilist condition: S knows that p if, and only if,

S’s belief that p (i) is true and (ii) was produced by a reliable cognitive process (so that S’s

belief becomes impervious to the Gettier-style counterexamples explained above) (Armstrong

1973; Dretske 1981; Nozick 1981). Reliabilist conditions in analyses of knowledge are intended

to displace justification conditions and, crucially for our project, make broader the scope of

subjects of knowledge attribution, for a “reliable cognitive process” is the sort of thing that

facilitates the transfer of information, and can therefore be understood as making it possible for

a broader class of beings to be in the possession of knowledge. Such an understanding enables,

for analyses of knowledge, “a characterization that would at least allow for the possibility that

animals . . . could know things without my having to suppose them capable of the more

sophisticated intellectual operations involved in traditional analyses of knowledge” (Dretske

1985, 177). We feel comfortable making the claim that “the cat knows a mouse is behind the

door” though we do not have a clear idea of what kind of inferential or justificatory mechanism

is being employed; whatever kind of retrieval or inference mechanism is at work, it enables the

cat to go about its tasks and the cat reveals its knowledge through its actions. The application of

these considerations to artificial agents should be evident: we need merely isolate the relevant

cognitive processes in an artificial agent that make the procurement of information possible for

it, and verify they work appropriately. These processes would presumably be enabled by the


functional sensor architectures that gather information from its surroundings and those

procedures that make this information available, after suitable validation, to its memory and

storage banks.

The contextualist flavour of our analysis should be clear, for we do not intend to

excessively sharpen the conditions for “X has ready access to p”, “X is able to make use of the

informational content of p”, “X acquired p in the right way” and “X acquired the informational

content of p through a reliable cognitive architecture”. The determination of what counts as “ready

access to” and “can make use of the informational content of p” will be made by the entity

making the attribution; similarly for the latter two conditions though we suggest the verification

of the satisfaction of these conditions would be more straightforward. In general, the standards

at play when making a particular knowledge attribution will be determined by the agent’s

circumstances, its operating conditions, the task it is engaged in (and its importance given its

overall objectives), the resources available to the agent and so on.
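One way to see how the five conditions and the contextual standard interact is the following minimal sketch, in which an attribution is checked against a stored item’s retrieval cost, use, and provenance. The thresholds, field names and the representation of the agent’s store are illustrative assumptions, not part of the analysis itself:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Standard:
    """The epistemic standard in effect for a given attribution (contextual)."""
    max_lookup_cost: float = 0.1          # retrieval effort (seconds) counted as 'ready access'
    min_source_reliability: float = 0.99  # reliability demanded of the acquisition channel

@dataclass
class StoredItem:
    content: str
    lookup_cost: float                    # how expensive retrieval is for the agent
    acquired_by: str                      # e.g. "user_form", "sensor", "accidental_write"
    source_reliability: float             # measured reliability of the acquisition channel
    used_in: List[str] = field(default_factory=list)  # agent functions that consume this item

def knows(store: Dict[str, StoredItem], p: str, is_true: Callable[[str], bool],
          std: Optional[Standard] = None) -> bool:
    """Check the five conditions for 'the agent knows p' against a stored item."""
    std = std or Standard()
    item = store.get(p)
    if item is None:
        return False
    return (is_true(item.content)                                       # 1. truth
            and item.lookup_cost <= std.max_lookup_cost                 # 2. ready access
            and len(item.used_in) > 0                                   # 3. usable in some function
            and item.acquired_by != "accidental_write"                  # 4. acquired 'the right way'
            and item.source_reliability >= std.min_source_reliability)  # 5. reliable architecture
```

Tightening or relaxing the values in Standard models the shift between the stricter and looser epistemic standards discussed above.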

In logical frameworks for artificial intelligence, an agent’s belief corpus is taken to be

the set of propositions that the agent is committed to i.e., those propositions p the agent answers

“Yes” to when asked “Do you believe p?”. Formally, p is derivable using the inference

machinery built into that agent’s architecture. Thus I could say of an artificial agent that it

knows or believes p if p is derivable from its knowledge base. By analogy, in our analysis, an

artificial agent reveals its knowledge of p through the ready availability of the proposition in

facilitating the agent’s functionality, so that rather than needing to answer “yes” to the query

“do you know p” it demonstrates its knowledge by its functions. Of course, an artificial agent

with a hybrid architecture, that employs a deductive database, could be coherently said to know

the further information that could be derived by applying inferential rules to its set of stored


facts. In that case we would say the agent in question knows all the facts derivable from its

database as well, subject to computational limitations as mentioned above. We would not,

however, want to limit knowledge attribution to those agents that are capable of deriving the

proposition p using a formally specifiable inference mechanism.
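The point about derivability under computational limits can be pictured with a toy forward-chaining sketch, in which only those facts reachable from the stored facts within a fixed number of inference rounds count as derivable; the rule format, the fact names and the step bound are our illustrative assumptions:

```python
def derivable(facts, rules, query, max_steps=3):
    """Forward-chain over (premises -> conclusion) rules, stopping after max_steps
    rounds: a crude stand-in for Parikh-style computational limits on inference."""
    known = set(facts)
    for _ in range(max_steps):
        new = {conclusion for premises, conclusion in rules
               if premises <= known and conclusion not in known}
        if not new:
            break
        known |= new
    return query in known

# Illustrative facts and rules only; nothing here models a real agent's knowledge base.
facts = {"order_placed", "address_on_file"}
rules = [({"order_placed", "address_on_file"}, "can_ship"),
         ({"can_ship"}, "invoice_generated")]

print(derivable(facts, rules, "invoice_generated"))               # True: reached within the bound
print(derivable(facts, rules, "invoice_generated", max_steps=1))  # False: the bound is too tight
```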

2.5 Artificial Agents’ Knowledge and Naturalized Epistemology

We do not need to adopt the position that a belief’s origin in a reliable cognitive process is

sufficient for its being an instance of knowledge for the artificial agent. We can use it, however, to supplement the non-accidental acquisition condition. This approach is

philosophically satisfying in allowing us to draw upon a naturalized epistemology; knowledge

analyses become part of a more inclusive naturalism as we consider knowledge and

justification natural phenomena, locatable in the physical order of things much as we locate

chairs, tables, DNA and molecules. To do this we have pointed to those “non-epistemic grounds

on which epistemic phenomena supervene”, as

“the term ‘justified’… is an evaluative term, a term of appraisal. Any correct definition or synonym of it would also feature evaluative terms. … [S]uch definitions or synonyms might be given, but … [we] want a set of substantive conditions that specify when a belief is justified … [A] theory of justified belief [should] specify in non-epistemic terms when a belief is justified.” (Goldman 1979, 1)

In the case of artificial agents, knowledge claims plausibly supervene on their capacities and behaviours, and on the particular cognitive architectures that enable these capacities and

behaviours. A reliable cognitive architecture, like an appropriately functioning sensor of a

robot, would gather information with a high degree of fidelity; the information would acquire

the status of knowledge on being readily available for use in the robot’s functioning. Reliability

thus provides a suitable non-tenuous connection between justification and truth in the case of

artificial agents: “a reliable belief-producing mechanism… would yield mostly true beliefs in a

sufficiently large and varied run of employments in situations of the sorts we typically


encounter” (Alston 1993, 9). Reliable belief production, then, is an integral part of the

transformation of true belief into knowledge.

If an agent elicits information from humans, then the responsibility for ensuring the

accuracy of the information is partially the users’ and partially the agent’s. If the user inputs it

incorrectly, the agent is in possession of false information and we do not make the knowledge

attribution. An agent can verify, however, that the information supplied meets minimum standards of integrity (as in ensuring that names are not made up of numbers or that social security numbers do not contain alphabetic symbols). Thus, the artificial agent partially inherits

its epistemology from the humans that supply it information and carry out data entry. This

should not lead us to think that artificial agents do not have an independent epistemology.

Pricebots that read price information on remote web pages acquire knowledge autonomously,

by using their file-reading mechanisms, presumably equipped with error checking and

validation routines—the software equivalent of a reliable sensor—that guarantee they will not

read in garbage. Furthermore, artificial agents might acquire information from other artificial

agents; in these cases, again the determination of the accuracy of the information in

question—via error-checking or other procedures—will be the autonomous responsibility of the

agent. The accuracy with which these agents acquire information is a function of their design

and the code that runs on them – very similar to human agents, the accuracy of whose beliefs is

a function of how well their senses work in conjunction with their background knowledge and

reasoning powers.
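As a sketch of what ‘the software equivalent of a reliable sensor’ might look like for a pricebot, the following hypothetical reading routine refuses to admit unparseable or implausible values into the agent’s store; the format rule and the bounds are invented for illustration:

```python
import re

def read_price(raw_text: str):
    """Validate a scraped price string before admitting it to the agent's store:
    a toy analogue of a 'reliable sensor'. Returns None rather than garbage."""
    match = re.search(r"\$\s*(\d{1,6}(?:\.\d{2})?)", raw_text)
    if not match:
        return None                      # nothing recognizable as a price
    price = float(match.group(1))
    if price <= 0 or price > 100_000:    # sanity bounds (illustrative thresholds)
        return None
    return price

print(read_price("List price: $24.95"))    # 24.95
print(read_price("Error 404: not found"))  # None: garbage is rejected, not stored
```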

Because even ostensibly reliable belief-formation processes can be occasionally

mistaken, we will need to tie such a reliabilist account to some contextualist understanding of

how we say, given a particular environment, what our standards are for normal, reliable belief-


formation processes. This is a reasonable requirement and we expect different artificial

agents—equipped with sensory equipment of different sensitivity and power—to require

varying standards of assessment of the reliability of their belief-formation processes. These

sorts of assessments are precisely the kinds that are amenable to practical determination in

courts of law.

It is instructive here to note parallels with knowledge attribution to children and animals

where we feel comfortable making such attributions in the case of partially-developed or non-

existent verbal capacities. In these circumstances, we look for evidence that the child or animal

had been exposed to perceptual data that might have led to the formation of such knowledge, we

evaluate the truth condition ourselves, and then look for performance data indicating that the child or

animal used the information like someone who knew the proposition in question (Cheney and

Seyfarth 1990; Hare, Call, and Tomasello 2001; Povinelli and deBlois 1992; Virányi et al.

2006). If we can coherently ascribe knowledge to animals (and there is no hint of incoherence in

our knowledge attributions in this regard), then even if animals are not capable of the

sophisticated mental operations required for internal justification, it should be clear that knowledge

attributions form a particular way of referring to capacities displayed by entities of varying

abilities; artificial agents’ knowledge needs to be evaluated in terms of their comportment with

such ways of speaking, and not with some inner state, or the existence of a ‘knower’ or some

inexplicable relationship between the informational content of a proposition and the internal

states of the agent. Knowledge becomes comprehensible as a natural phenomenon, possessed by

humans, animals and artificial agents alike.

Sosa suggests a distinction between two concepts of knowledge, the animal and

reflective (Sosa 1997). The former is reliably formed true belief and the latter is internally


justified true belief; both are taken to be resilient with respect to Gettier counterexamples. If we

wanted to make a distinction between kinds of knowledge in terms of the intellectual capacities

of the agents concerned, the former would be one common to animals and humans and the latter

only possessed by beings able to indulge in reflective thought. Such a distinction might be

useful to those who think animals and small children cannot have justified beliefs: “Lower

animals, very small children, and idiots acquire and utilize much perceptual knowledge

concerning the immediate environment; otherwise they would not be able to move around in it

successfully. But they are not capable of acting in the light of rules. So [justification as a

normative property] is at best a necessary condition for the knowledge possessed by the likes of

normal mature human beings.”(Alston 1989, 173) (for a contrary view, see (Russell 2001)).

In the case of artificial agents both notions of knowledge can be profitably used

depending on the agent’s particular architecture. As in Chapter 1, if an artificial agent has only displayed

capacities that would lead us to grant it dependent legal personality, we might only take it as

being capable of ‘animal knowledge’ - it has perceptual powers shown by its sensory capacities

but not enough deliberative or reflective capacity. Once, however, it becomes possible to

coherently speak of the artificial agent as ‘acting in the light of rules it has justified to itself’

then we can ascribe reflective knowledge. Plausibly this would occur, as we noted in our

discussion of independent legal personality, when the artificial agent becomes capable of

intellectual reflection by forming second-order thoughts about its other beliefs and desires. But

these distinctions are moot as regards the particular conditions of our analysis; as far as those

are concerned, when they are satisfied, the knowledge attribution can be made.

From the perspective of a naturalistic epistemology, there is an exact similarity between

humans and artificial agents - for as in the case of the former, an agent’s cognitive architecture


and capacities determine its abilities for knowledge-gathering and acquisition. Quine famously

suggested that in a naturalistic world-order, questions of classical epistemology become

empirical hypotheses of a cognitive science (Quine 1969); we see no reason why it should be

different for artificial agents.

2.6 The Pragmatic Analysis Illustrated

When we say, “Amazon.com knows that my shipping address is X”, our analysis implies the

following (we use the Amazon.com example for simplicity – in practice, we expect this analysis

to apply to more sophisticated agents but its prima facie plausibility in this case should show the

ease with which it will be extended to those as well):

1. The shipping address is correct. This truth condition is critical, for if the shipping address in

question were incorrect we would not say that Amazon knows the shipping address. The

locution we would employ if Amazon were to use the incorrect shipping address would be

“Amazon shipped my books to what it thought (or believed) was my correct address”.

2. Amazon has ready access to my shipping address in its databases. We would not say

Amazon knows my address if it was stored in the shopping agent’s database but is not

accessible for use by the agent (or only accessible after the execution of very

computationally expensive procedures). In that case, we would say that the information in

question is stored in the agent but the agent does not know it. As an analogy, when files are

deleted from a computer the information ordinarily does not vanish, it simply becomes a

target for over-writing. The information is not accessible any more without the use of

special recovery methods, and hence the computer’s operating system is reasonably enough

said not to have access to it any more, even if the operating system supports software that

allows for its retrieval.


3. Amazon is able to make use of the informational content of the address. This is evidenced

by the fact that the shopping agent is able to use my address to fulfil its functions. It

was stored in their database and had been used successfully in the past. Amazon is capable

of informing a potential customer that it is unable to ship goods since it does not ‘know’ the

customer’s address (or credit card number). If Amazon has access to my address but it has

changed in the meantime, then it is natural to say that Amazon does not know my address

since it is not able to perform the function of shipping books to me. Fundamentally, we take

Amazon to be capable of performing certain tasks if and only if it has my shipping address;

it is able to do them; therefore, it knows my shipping address. Thus, the relevant

conditionals are true: if it did not know my address, Amazon’s core functionality with

respect to its interactions with me would not be achievable; if Amazon did not know my

shipping address, it would not be able to send books to me; if Amazon did not know my

address, it would not be able to send me a bill; but it is able to do so; hence it knows my

address. This kind of analysis is plausibly extended to artificial agents that take actions

based on information at their disposal; their actions could be described in much the same

way as a human agent’s: “The pricebot (i.e., price-quoting agent) sent me a quote because it

knew my preferences”. And we could predict Amazon’s responses to certain queries. For

instance, if Amazon had organized its database via ISBN numbers, we could say that

Amazon knows the ISBN number for How Green Was My Valley since it would be able to

produce that number on request. We could also predict Amazon’s success in demonstrating

its knowledge of the ISBN number of How Green Was My Valley by shipping me that book

and none other.

4. Amazon acquired this relevant information in the right way. A user should have, for


instance, provided it deliberately, using the website’s data input mechanisms without error.

Thus, it did not accidentally acquire my address because someone typed it in by mistake.

We would not say that the agent knows my address if it was stored under someone else’s

name. Note that if Amazon did acquire this information in this accidental fashion, it would

be able to meet condition 3, as it would still be able to discharge the function of shipping books, but it would be sending books that I had not ordered to my address (it would have stored my

address in the location intended for someone else). In that case, one would say it had

discharged its functions (or displayed a capacity contingent on knowing my address), but

done so incorrectly. If it did correctly send books I had ordered to my address, it would be

because I had verified the address was the right one. In that case, the fourth

condition would have been met, for my actions would have rectified the accidental nature of the original acquisition. It should be clear that the precise determination of what counts as lucky

for a particular agent will depend on what the agent’s capabilities are.

5. Amazon acquired this information using a generally reliable information-gathering

cognitive architecture. In the case of Amazon’s website, this would mean its form-processing code worked correctly, i.e., it was reasonably bug-free, it carried out integrity checks on the data (making sure, for example, that US zip codes were at least five digits long and that a city name and two-letter state code had been provided), and it asked users to confirm the correctness of their inputs. In sum, its information-gathering architecture would, in most circumstances,

not corrupt user input data, and would transfer it reliably to the back-end database scripts

that populated its data stores.
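The integrity checks described under condition 5 might look, schematically, like the following; the rules mirror the examples in the text (a five-digit zip code, a two-letter state code, a city name, a name free of digits), while the function and field names are hypothetical rather than drawn from any actual system:

```python
import re

def validate_us_address(form: dict) -> list:
    """Return a list of integrity problems; an empty list means the input passes
    the minimal checks described above (illustrative rules, not any real site's code)."""
    problems = []
    if not form.get("name") or any(ch.isdigit() for ch in form["name"]):
        problems.append("name is missing or contains digits")
    if not form.get("city"):
        problems.append("city name is missing")
    if not re.fullmatch(r"[A-Z]{2}", form.get("state", "")):
        problems.append("state must be a two-letter code")
    if not re.fullmatch(r"\d{5}(-\d{4})?", form.get("zip", "")):
        problems.append("zip code must be five digits (optionally ZIP+4)")
    return problems

print(validate_us_address({"name": "A. Reader", "city": "Brooklyn",
                           "state": "NY", "zip": "11218"}))        # [] : passes
print(validate_us_address({"name": "4ever 2", "city": "", "state": "N", "zip": "112"}))
```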

2.7 Objections

One way to deny that artificial agents can know propositions would be to ask, “Who does the


knowing in the case of the artificial agent?” Our response would be that the same could be

asked of humans, and in the absence of any philosophically satisfactory analysis of personal

identity that does not apply equally well to artificial agents (as our discussions in Chapter 1

would have indicated) there is no reason to believe that a stronger condition should be placed on

artificial agents. In any case, in Chapter 1, we explored the conditions under which artificial

agents might be accorded legal personality—of the dependent and independent kinds—and

eventually even the ‘Lockean’ personhood possessed by human agents. Artificial agents

meeting those conditions could be the persons that ‘did the knowing’. Still, by framing the

question in terms of ‘who’ rather than ‘what’ does the knowing, the presumption is made that

only a person can know something. In fact, we consider it perfectly natural to suppose that

artificial agents such as Amazon.com’s, which are not persons, do, quite literally, know all sorts

of facts, even if only in the minimal sense of animal knowledge discussed above. An alternative

locution would be to say, “Amazon has my address”, but what purpose would be served other

than an avoidance of intentional vocabulary? If it did not ship to the correct address, Amazon

could not use as a defence the claim that it did not know (or “have access to”) my address. In

making knowledge attributions, there is a parallel between humans and artificial agents. The

ease with which we slip into the intentional attribution when it comes to Amazon.com is an

indication of this similarity.

It might be objected that Amazon does not know my address, that the word “know” is

being misused as a fiction, a convenient shorthand that obscures some crucial distinction

between this kind of attribution and the attributions we make in the case of humans. But what

sense would it make to say that Amazon did not know the address? Alternative locutions for

describing this functionality of Amazon’s would be artificial. What could we say, that Amazon


has ready access to this true information, which it had acquired deliberately using a reliable cognitive architecture, and can use it to discharge its functions? We think it more plausible to

simply say “Amazon knows my address”. This shorthand is not a fiction, for the parallels with

knowledge attributions to human agents should be clear: human agents, too, are said to know a proposition p when we can point to just such a cluster of achievements and

capacities.

In the case of Amazon.com, it is possible that not a single human being employed by

Amazon knows my address. It is conceivable that when the shipping invoice is printed out by

the artificial agent(s) operating the databases behind Amazon’s internet shopping

businesses, a human clerk will pick it up and attach it to the box of books in question without

bothering to check any further whether the address is correct or not. When a book is purchased,

my address is used by Amazon.com without any human knowing it. Note, too, that the

software has been treated as a reliable source of information with regard to the address, and

thus humans might accurately claim that they learned a proposition from an artificial agent. And

humans, of course, do often learn items of information from artificial agents.

2.8 Application to the Courtroom Situation

How should courts make knowledge attributions in the case of artificial agents, of the form “Did

agent X know p at time T”?

We should note first that Anglo-American law takes a somewhat “deconstructive”

approach to knowledge, seeing it as a spectrum of behaviours rather than as a single

unitary concept. There are many different classification schemes for knowledge depending on

the area of law, but for the purposes of illustration we will focus on one. In trust law, the

doctrine of constructive trusts is used to fix a person with liability to return trust money or


property when he or she receives it from a trustee who is in breach of trust, but only if the

person receives that money or property with knowledge of the trustee’s breach of trust, and for

these purposes, the Courts distinguish between five degrees of knowledge of a fact:1

(i) actual knowledge; (ii) wilfully shutting one's eyes to the obvious; (iii) wilfully and recklessly failing to make such inquiries as an honest and reasonable man would make; (iv) knowledge of circumstances which would indicate the facts to an honest and reasonable man; (v) knowledge of circumstances which would put an honest and reasonable man on inquiry.

Our analysis of knowledge could be deployed in fairly straightforward fashion in determining

whether conditions (i), (iv) or (v) are met. Similar analyses based on the intentional stance and

competence conditions could be devised to assist with conditions (ii) or (iii). For example,

wilfully shutting one’s eyes to the obvious could be analyzed in terms of failing to make

appropriate deductions and/or take appropriate actions from facts known to the agent. Similarly,

wilfully and recklessly failing to make such inquiries as an honest and reasonable man would

make could be analyzed in terms of failure to take appropriate actions given either actual

knowledge of relevant facts which would cause such a man to make such inquiries, or

knowledge in the sense of any of the other conditions of such facts.

While we consider that our analysis of knowledge usefully clarifies the conceptual basis

for knowledge ascriptions to artificial agents, the courts could adopt alternative strategies or

rules of thumb to ascribe knowledge to artificial agents, in circumstances where applying the

analysis either raises issues of evidence which are too difficult, or where it is agreed by the

parties that the analysis need not be applied. For example, there may simply be no relevant

evidence of the extent to which an agent had access to an item of information at a particular

time, or whether the item of information was gained by the agent in “the right way”. We think

there are two alternative strategies open to courts in such cases.

1 Baden v. Société Générale, [1993] 1 WLR 509, 575-76.


Firstly, our analysis need not be deployed at all if the agent itself can give reliable first-

hand evidence as to its prior state of knowledge. Only some very sophisticated agents will

conceivably be able to answer questions put to them as part of the court’s procedures as to their

state of knowledge at a prior time. For these agents, the court would need to have the agent’s

response to such a question, and detailed evidence as to the reliability of such responses on the

part of the agent. This will lead it into inquiries about the capacity of the agent to engage in

deceptive behaviour, and whether the agent’s cognitive architecture was functioning normally

(to rule out those cases where an agent might claim knowledge incorrectly).

Secondly, where direct evidence from the agent is not available, or there is reason to

suspect that it is not reliable, then other evidence will need to be considered. The law has

always admitted circumstantial, as well as direct, evidence of the state of mind of a person

which is relevant to proceedings, although in some jurisdictions there are special rules as to the

probative weight to be accorded to circumstantial evidence. For example, in R v Chamberlain

[1984] 51 ALR 225 the High Court of Australia held that in a criminal case an inference of guilt

must not be made unless each one of the primary facts on which the case rests is proven beyond

reasonable doubt. A person’s actions and behaviours are circumstantial evidence of the person’s

intentions and beliefs, and the actions and behaviours of an artificial agent will also be used as

circumstantial evidence of its beliefs and motivations at particular points in time. In this way,

legal practice can be seen to track quite closely the intentional stance method of belief and

intention ascription which we have already discussed in depth in Chapter 1.

3. The Legal Doctrine of Attributed Knowledge

This inclusion of the ready-to-hand in the knowledge of an agent has close and instructive

parallels in the legal doctrine of attributed knowledge. Under this doctrine, the law may impute


to a principal knowledge – relating to the subject-matter of the agency – which the agent

acquires while acting on behalf of its principal within the scope of its authority. The scope of

the agent’s authority refers to those transactions that the principal has authorized the agent to

conduct. In some circumstances discussed below, knowledge gained by the agent outside the

scope of the agency can also be attributed to the agent’s principal.

Once knowledge is attributed to the principal, it is deemed to be known by the principal

and it is no defence for the principal to claim he did not know the information in question, for

example, because the agent failed in its duty to convey the information to the principal.

The doctrine of attributed knowledge has many applications, and is used generally in civil law

contexts in cases where the knowledge of the principal is relevant: for instance, in relation to

allegations of principals knowingly receiving trust funds, or having notice of claims of third

parties to property received, or knowingly making false statements.

The doctrine has close parallels with our analysis above, which extends the concept of

knowledge to include the information that we retain in storage devices – including written

documents – that are ready-to-hand. From this perspective, a human agent is akin to a

knowledge storage device under the control of a principal. Below, we suggest that artificial

agents can be thought of similarly. But first, we explore the basis of the doctrine and its

application to the modern company.

3.1 A Duty to Communicate?

While the doctrine of attributed knowledge is pervasive in the legal systems under discussion,

its precise doctrinal basis is still a matter of some dispute (DeMott 2003, 311-312).

One explanation of the doctrine relies on the supposed identity of principal and agent, whereby

the law sees them as one person for some purposes. However, this theory lacks explanatory


power in that it poses this identity as an unanalysed fact, and so does not explain the public

policy justification for the rule.

Another explanation put forward for attributed knowledge is that the law presumes that

agents will carry out their duties to communicate information to their principals. For example,

in the standard English practitioner’s text Halsbury’s Laws of England, the scope of an agent’s

duty to communicate determines the existence and the timing of any attributed knowledge of the

agent.2 Under this approach, the doctrine operates on a pre-existing duty to convey information

to deem that the duty has been discharged.

In the US, the common law of agency does not require as a precondition an existing duty

to communicate the information to the principal. As both DeMott (2003, 315) and Langevoort (2003, 1215) point out, the description of attribution as the presumption that the agent has

fulfilled its duty of candour in conveying information is not correct, since attribution applies

even where interaction between principal and agent creates enough scope of discretion that no

transmission of information is expected. In England the “duty to communicate” has been

abandoned as the explanatory basis of attribution of knowledge.3 Similarly, Australian courts

have inferred attribution of knowledge in the absence of a duty to communicate information in

cases where the task assigned to the agent included making appropriate disclosures.4

We believe that treating the duty to communicate as the doctrinal basis of the attribution of knowledge, as well as being an inaccurate model of the law as it stands, is wrong on policy

grounds. To require such a duty in order to attribute knowledge held by agents to their

2 Vol. 2(1) (Fourth Edition Reissue) Agency, para 164.
3 El Ajou v Dollar Land Holdings plc & Anor [1994] 2 All ER 685, at 703–4 per Hoffmann L.J. See also (Bowstead and Reynolds 2001), Article 97(1) at paragraph 8-207.
4 Permanent Trustee Australia Limited v FAI General Insurance Company Ltd (in Liq) [2003] HCA 25 at paragraph 87.


principals would encourage principals to ask agents to shield them from inconvenient

information, and would put principals acting through agents in a better position than principals

acting directly. Such an approach is also incompatible with modern information management

practices within companies, and we discuss why below.

However, the fact that agents are capable of communication is important to the

attribution of knowledge. In terms of our analysis of knowledge, a lack of capacity to

communicate information would render the second and/or third conditions unfulfilled – i.e., that

the principal has ready access to the knowledge held by the agent (even if he or she does not

exercise it), or that the principal can make use of its informational content (even if he or she

chooses not to). The capacity to communicate therefore plays an explanatory role when thinking

about how artificial agents might fit within this legal schema.

The capacity to communicate also plays a role in those cases which concern

whether confidential information held by the agent and received in confidence from a third

party should be attributed to the agent’s principal. There is some judicial support for the

proposition that in these circumstances, the information should not be attributed to the principal,

at least in ordinary cases.5 This accords with the shape of our suggested analysis of knowledge

in the case of agents, that it is only knowledge that is ‘ready to hand’ that should be attributed.

Clearly, in one sense at least, information which an agent is prohibited from sharing with the

principal (or any other agents of the principal) is not ‘ready to hand’ to the principal, and cannot

be so attributed. We will consider this topic further in Chapter 5 relating to artificial agents and

5 See Harkness v Commonwealth Bank of Australia Ltd (1993) 32 NSWLR 543, Niak v Macdonald [2001] NZCA 123 and Waller v Davies [2005] 3 NZLR 814. These cases are discussed in (Watts 2005), who however casts doubt on the proposition at hand and points out that if the principle is valid there may be some exceptions to it. Specifically, Watts suggests that the principle cannot be used where the disclosure of the information to the principal would be of benefit to the third party; and where the information concerns fraudulent behaviour of the third party (at p. 325–6).

30

privacy.

3.2 Attribution of Knowledge to Companies

A company is a special kind of organisation that, in modern legal systems, is recognised as a

legal person in its own right. How, then, does a company gain knowledge in the eyes of the

law? Apart, possibly, from knowledge gained “directly” by the Board or general meeting of a

company, only through the attribution to it of knowledge gained by its agents, i.e., its directors,

employees or contractors. By the doctrine of attribution, the company is deemed to gain the

knowledge that is gained by the natural persons (i.e., humans) engaged by it.6

The large modern company illustrates why the “duty to communicate” cannot found the

attribution of knowledge. Given the company is an abstract entity, the only way to make sense

of such a duty would be in terms of communication to other agents (such as immediate

superiors), who are in turn required to communicate it either to other agents similarly placed or

“directly” to the company as embodied by the board of directors or general meeting. Since in

modern corporations the authority to enter and administer contracts and carry on litigation is

often delegated to staff members or even contractors, it would be absurd if all the information

gained in the course of doing so had to be communicated upwards in this way in order for it to

count as knowledge of the company. Instead, most information within the modern corporation

remains with lower-level officers, and is only passed upwards in summary terms – or when

there is some exceptional reason to do so, such as a dispute with outside parties. Abandoning

the “duty to communicate” allows the legal system to acknowledge how information is

managed in accordance with modern decentralised practices.

6 See Halsbury’s Laws of England, Companies, 7(1) (2004 Reissue), paragraph 441: How a company may act.


Today the most common way for information to be stored and controlled by low-level

officers is by inputting it into the company’s information systems. Some of these systems can be

queried by senior managers, but it has never, to our knowledge, been suggested that this is

essential to the attribution of the information held within them to the company. To what extent

could information systems – artificial agents – themselves be treated by the legal system as

agents for the purposes of attribution of knowledge?

3.3 Artificial Agents as Agents for Knowledge Imputation Purposes

In Chapter 1, following (Kerr 1999), it was argued that the legal system could extend the legal

treatment of human agents to artificial agents, with appropriate modifications. Artificial agents,

on this approach, could have a legal status akin to slaves in Roman law – that is, with capacity

to enter contracts on behalf of their principals, but without contracting capacity or legal

personhood in their own right – or they might be treated as agents which are legal persons in

their own right.

A similar move could be made with respect to the imputation of knowledge. On this

approach, knowledge gained by artificial agents employed by corporations and other principals

could be attributed to the principals themselves, if that knowledge would be attributed to the

principal in the case of a human agent. Not all the agent’s knowledge would necessarily be

attributed to the principal. For example, an agent could act for two or more principals in

different circumstances, and in accordance with the law of agency, knowledge gained in the

course of one agency is not always attributed to the other principal.7

7 On these cases, English and US law take divergent and sometimes confusing approaches: see (DeMott 2003); (Bowstead and Reynolds 2001) at paragraph 8-210; Permanent Trustee Australia Co Ltd v FAI General Insurance Co Ltd (2001) 50 NSWLR 679 at 697 per Handley JA.


The scope of the agency could be defined as those transactions which the artificial agent

has been deployed to conduct. A natural person could deploy an artificial agent, and in that

instance the agent’s knowledge would be attributed to the principal in the same circumstances.
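
As a rough, purely illustrative sketch of how such scope-limited attribution might be modelled (the class, method and principal names below are our own hypothetical inventions, not drawn from any existing system or from the legal authorities), an artificial agent acting for more than one principal could tag each item of information it acquires with the agency in whose course it was acquired, so that an attribution query returns only the items falling within the relevant scope:

    from collections import defaultdict

    class ScopedAgent:
        """Hypothetical sketch: an artificial agent acting for several principals
        tags each item of information with the agency in whose course it was acquired."""

        def __init__(self):
            # principal name -> propositions learned while acting for that principal
            self._knowledge = defaultdict(list)

        def learn(self, principal, proposition):
            """Record information acquired within the scope of a particular agency."""
            self._knowledge[principal].append(proposition)

        def attributable_to(self, principal):
            """Return only knowledge gained in the course of this principal's agency;
            what was learned while acting for another principal is not imputed here."""
            return list(self._knowledge[principal])

    agent = ScopedAgent()
    agent.learn("InsurerCo", "the proposed insured has made three prior claims")
    agent.learn("BrokerCo", "the proposed insured is negotiating a rival policy")

    print(agent.attributable_to("InsurerCo"))  # only the item learned within InsurerCo's agency

The hard partition in this sketch is, of course, a simplification: as note 7 indicates, knowledge gained in the course of one agency is sometimes attributed to another principal, so any realistic encoding would need to reflect the more nuanced legal rules.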

Surprisingly, there is a paucity of judicial pronouncements on the possibility of attribution of

knowledge held by artificial agents to their corporate principals. Nevertheless, some judicial

support for such a treatment of artificial agents was given recently.

In the Australian case Commercial Union v Beard & Ors8, the issue arose whether a fact

contained in a news clipping, filed in a company paper file, was “known” to an insurer for the

purposes of the relevant statute. If it was known to the insurer, the party taking out insurance

was relieved of the obligation of making disclosure of the fact to the insurer. The majority

found that a matter could be “known” by the insurer company if it were contained in the

“current formal records” of the company. However, the majority held that the contents were not

“known” to the company for the purposes of the statute:

An extract from a newspaper does not amount to knowledge, it is merely a source from which knowledge can be gained. Access to a means of knowledge is not sufficient: Bates v Hewitt LR 2 QB 595. I do not suggest that information set out in the current formal records of a company may not, in appropriate circumstances, constitute knowledge. Of course it may, for records are an appropriate means of storing knowledge. However, the extract from the Sydney Morning Herald was not a record of Commercial Union and it was not contained in any file to which officers of Commercial Union were expected to have recourse for the purposes of the subject insurance.9

The minority judge, however, disagreed that anything could be “known” to the company merely

by being contained in a record, while acknowledging that such a view had its attractions

[emphasis added]:

We were not referred to any authority for the proposition that, in the absence of actual knowledge on the part of relevant officers of a company, the company may, nevertheless, “know” a matter, where the relevant information is contained in a company file. I find the proposition an attractive one. In circumstances, which are undoubtedly common today, where important information relating to the conduct of a company’s business is stored in the company’s computer system, from which it may be readily obtained, the suggestion that such material is part of the company’s knowledge is certainly appealing. However…the present state of authority does not permit a finding that the information so stored becomes “known” to the company until it is transferred into the mind of an officer, who is relevantly engaged in the transaction in question.10

8 [1999] NSWCA 422.
9 per Davies AJA (with whom Meagher JA agreed) at paragraph 63.

Note that the emphasis on being readily obtainable echoes condition 2 of our analysis of

knowledge.

Although the majority judgement did not address information systems specifically, we

believe there is nothing to prevent an artificial agent being treated as a ‘current formal record’

of the company for these purposes, and that where information or knowledge is stored in an

information system of the company which is readily accessible and to which employees are

expected to have regard for the purposes of particular activities, the company will be deemed to

have knowledge of the contents of that system for those purposes. Knowledge held by an

artificial agent would be a special form of such knowledge.

As a consequence, we suggest that, in the Commercial Union case, had the contents of

the news clipping been stored in an information system, rather than a paper file, so as to be

readily available to the human officers conducting the insurance transaction, the result should

have been different. The cost of insisting that all paper files be cross-checked before any transaction can proceed without fear of legal consequences would be prohibitive; that cost is significantly reduced if those files are held electronically and ready to hand to employees of the company generally.

The example suggests that information systems that are mere accumulations of records

may not qualify as agents for attribution purposes. We suggest that the knowledge held by

artificial agents will only be attributed to a corporation to the extent that the agent permits ready

10 per Foster AJA at paragraph 73.


access by other (human or artificial) agents to its contents. In this way, while the duty to

communicate is not necessary for the imputation of knowledge, the ability to communicate so as

to make information readily accessible to others – rather than merely to store it passively – might well be. Further, the information held by the agent should be in a form that facilitates its further use; information that is readily accessible but obfuscated in form and content, and hence not usable, should not be considered knowledge.
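
The two requirements just described – ready accessibility and a usable, non-obfuscated form – can be made concrete with a minimal sketch (ours alone; the names and the two boolean flags are hypothetical simplifications of the analysis, not a legal test):

    from dataclasses import dataclass

    @dataclass
    class HeldInformation:
        proposition: str          # the content the artificial agent holds
        readily_accessible: bool  # can other human or artificial agents retrieve it on demand?
        usable_form: bool         # is it stored in a form that facilitates further use, not obfuscated?

    def counts_as_corporate_knowledge(item: HeldInformation) -> bool:
        """Illustrative test: information held by an artificial agent is treated as
        knowledge of the corporate principal only if it is both ready to hand and usable."""
        return item.readily_accessible and item.usable_form

    # A fact indexed in a searchable system that officers are expected to consult: attributable.
    indexed_fact = HeldInformation("fact reported in the news clipping", True, True)

    # The same fact buried in an unindexed, opaque archive: retrievable in principle, not usable.
    buried_fact = HeldInformation("fact reported in the news clipping", True, False)

    print(counts_as_corporate_knowledge(indexed_fact))  # True
    print(counts_as_corporate_knowledge(buried_fact))   # False

On this toy model, the news clipping in the paper file in Commercial Union fails the first test, while readily retrievable but obfuscated data fails the second.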

Considering the company as a knowing agent in its own right for a moment, paper files,

to which officers are not expected to have recourse in conducting particular transactions, could

be equated with the ‘dead’ information contained within, but not accessible to, an artificial

agent – such as information written on a hard disk, but not readily accessible to the user through

the operating system without deploying specialised software.

There is one crucial difference, however, between an artificial agent and an ordinary

information system: an artificial agent, at least one that is capable of satisfying our analysis of

knowledge set out above, is capable of knowledge, while an ordinary information system is

merely capable of storing information for access by other human and artificial agents. This

would have significance in a case where the corporate principal did not engage any other agents

at all, or any other agents whose function it was to query the artificial agent. Such knowledge

would be attributed to the corporate principal itself, by applying the ordinary rules of agency

attribution to the facts of the case, and the fact that no other agents of the company were

intended to query the artificial agent should make no difference.

In very particular cases, it is possible that knowledge might be held by an artificial agent

which is not intended to be shared with other agents of the corporation by reason of its

confidential nature. An example might be a medical records agent, designed to handle highly confidential information such as psychiatric records. We have tentatively

concluded above that such information should not be attributed to the agent’s principal, as it is

not ready to hand for the principal in the usual way. We will investigate this topic further in

Chapter 5, where we will also investigate when information held by an artificial agent can be

aggregated with other information of the principal.

There are objections, of course, to the idea that knowledge held by artificial agents

should be considered to be attributed to the agent’s principal in the same way that knowledge

held by human agents can be so attributed. The main objection to our analysis is that an agency

approach is unnecessary, since the requisite results can instead be derived from one of two simpler approaches: first, that all information under the control of the principal should be treated as knowledge of the principal for legal purposes, so that an agency analysis merely complicates the picture; or, second, that analysing artificial agents as acting as ‘current formal records’ of the company is sufficient, and no recourse to the concept of agency is called for.

We believe that an agency analysis is preferable, for a number of reasons. Firstly, the

concept of the scope of authority is useful in distinguishing information known to an agent that

should be attributed to the principal from that which should not be. In particular, in situations

where the agent acts for two or more principals, or where the agent is in possession of

confidential information belonging to a third party, the rules of attribution of knowledge held by

agents can be of assistance in making a correct judgement as to whether to attribute the

knowledge or not. Such information can, in one sense, be said to be under the control of the

principal, but the law of agency affords more granularity to the analysis and provides more

detailed guidance in these situations. Secondly, for similar reasons, an inquiry into whether the agent acts as the ‘current formal records’ of a corporation may lead to the wrong result. The capacity in

which an agent acts will help to distinguish what can be said to be known to the principal from

what cannot. Thirdly, the agency approach gives due weight to the special character of the agent

as itself capable of knowledge, and not merely as the repository of knowledge for others.

4. Conclusion

We have presented a philosophical analysis of knowledge attribution and suggested that the

courts could make use of this analysis when deciding whether to attribute knowledge held by

artificial agents to principals, such as corporations, employing those artificial agents to conduct

transactions on their behalf. In our discussion of Amazon.com, we did not bother to distinguish

between the corporation (a legal entity) and the software agents operated by the corporation. We

think that for the reasons mentioned in our analysis above, it can make sense to attribute

knowledge to both the artificial agents operated by the corporation and the corporation itself.

We also pointed out close and instructive parallels between the philosophical analysis and the

legal doctrine. We are hopeful that the first cases in which legally salient information known only to artificial agents is attributed to the corporations operating those agents, on the basis of the principles outlined above, are only a short time away.

References

Alston, William. 1989. Epistemic Justification: Essays in the Theory of Knowledge. Ithaca:

Cornell University Press.

———. 1993. The Reliability of Sense Perception. Ithaca: Cornell University Press.

Armstrong, D.M. 1973. Belief, Truth, and Knowledge. Cambridge: Cambridge University Press.

Armstrong, David. 1980. The Nature of Mind. St. Lucia, Queensland: University of Queensland Press.


Bloor, David. 1983. Wittgenstein: A Social Theory of Knowledge. New York: Columbia University Press.

Bowstead, William, and FMB Reynolds. 2001. Bowstead and Reynolds on Agency. 17th ed. London: Sweet and

Maxwell.

Cheney, D.L., and R.M. Seyfarth. 1990. How Monkeys See the World. Chicago: University of Chicago Press.

Chisholm, Roderick. 1989. Theory of Knowledge. Englewood Cliffs: Prentice Hall.

Clark, Andy. 2003. Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford:

Oxford University Press.

DeMott, Deborah A. 2003. When is a Principal Charged with an Agent’s Knowledge? Duke Journal of

Comparative and International Law 13:291-320.

Dretske, F. 1971. Conclusive Reasons. Australasian Journal of Philosophy 49.

Dretske, Fred. 1981. Knowledge and the Flow of Information. Cambridge: MIT Press.

———. 1981. The pragmatic dimension of knowledge. Philosophical Studies 40 (3):363-378.

———. 1985. Precis of Knowledge and the Flow of Information. In Naturalizing Epistemology, edited by H.

Kornblith. Cambridge: MIT Press.

Gettier, Edmund. 1963. Is Justified True Belief Knowledge? Analysis 23:121-123.

Goldman, Alvin. 1967. A Causal Theory of Knowing. Journal of Philosophy 64:335-372.

———. 1979. What is Justified Belief? In Justification and Knowledge, edited by G. S. Pappas. Dordrecht: Reidel.

Grice, H.P. 1975. Logic and Conversation. In Syntax and Semantics, 3: Speech Acts edited by P. Cole and J. L.

Morgan. New York City: Academic Press.

Hare, B., J. Call, and M. Tomasello. 2001. Do chimpanzees know what conspecifics know? Animal Behavior

61:139-151.

Kerr, Ian R. 1999. Providing for autonomous electronic devices in the uniform electronic commerce act. Paper read

at Uniform Law Conference of Canada.

Langevoort, Donald C. 2003. Agency law inside the corporation: problems of candor and knowledge. University of

Cincinnati Law Review 71:1187-1231.

Lehrer, Keith, and Thomas Paxson. 1969. Knowledge: Undefeated Justified True Belief. Journal of Philosophy

66:1-22.

Levi, Isaac, and Sidney Morgenbesser. 1964. Belief and Disposition. American Philosophical Quarterly 1 (3):221-232.

Lewis, David. 1996. Elusive Knowledge. Australasian Journal of Philosophy 74 (4):549-567.

Nozick, Robert. 1981. Philosophical Explanations. Cambridge: Harvard University Press.

Parikh, Rohit. 1995. Logical Omniscience. In Logic and Computational Complexity. Heidelberg: Springer-Verlag.

Povinelli, D.J., and S. deBlois. 1992. Young children's (Homo sapiens) understanding of knowledge formation in

themselves and others. Journal of Comparative Psychology 106 (3):228-238.

Quine, Willard Van Orman. 1969. Ontological Relativity and Other Essays. New York City: Columbia University

Press.

Russell, Bertrand. 1984. Theory of Knowledge: The 1913 Manuscript. London: Allen and Unwin. Original edition,

1913.

Russell, Bruce. 2001. Epistemic and Moral Duty. In Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue, edited by M. Steup. Oxford: Oxford University Press.

Ryle, Gilbert. 2000. The Concept of Mind. Chicago: University of Chicago Press. Original edition, 1949.

Shope, Robert K. 1983. The Analysis of Knowing: A Decade of Research. Princeton: Princeton University Press.

Sosa, Ernest. 1997. Reflective Knowledge in the Best Circles. The Journal of Philosophy 96:410-430.

Steup, Matthias, and Ernest Sosa, eds. 2005. Contemporary Debates in Epistemology. Oxford: Blackwell.

Virányi, Zs., J. Topál, Á. Miklósi, and V. Csányi. 2006. A nonverbal test of knowledge attribution: a comparative

study on dogs and children. Animal Cognition 9 (1):1435-1448.

Watts, Peter. 2005. Imputed Knowledge in Agency Law: Knowledge Acquired Outside Mandate. NZ Law Review

3:307.