
A Perspective on Truth

Mark Weinstein
Montclair State University

There is hardly a more basic and pervasive tension in contemporary argument

theory and informal logic than that between acceptability and truth. This is immediately

evident in the work of Ralph Johnson and Tony Blair. The basic tension is found in the

discussion of undefended premises in Logical Self-Defense. ‘It is reasonable to accept an

undefended premise if it is generally known to be true, or at least represents knowledge

shared, and known to be shared’ (Johnson and Blair, 1983, p. 48). The discussion is elaborated and truth plays more of a role in later editions. Acceptability ‘concerns the relationship of premises to the audience’; truth ‘concerns the relationship of the premises to the world’ (Johnson and Blair, 1994, p. 76), and the later edition moves the discussion of truth to the conclusion: ‘The goal of many arguments is to establish that the way things are in the world shows the conclusion to be true and hence worthy of the audience’s

belief’ (ibid.).

Truth and acceptability continue to play a central role in Johnson’s later work,

Manifest Rationality (2000) where the contrasting views are looked at through the

seminal recommendations of Hamblin, Govier, Biro-Siegel, Pinto and others. Despite a

commitment to truth, now associated with the ‘illative core’ (pp. 190-1), that is, the

argument per se, as distinguished from the ‘dialectical tier,’ the argument in the context

of known challenges and concerns (pp. 164-5), Johnson remains poised between the two

poles. Johnson sets the goal of argument to be ‘rational persuasion’ (p. 160). This is

manifested through giving ‘reasons and evidence.’ ‘Reasons are produced to justify the

target proposition’ (p. 160). ‘The requirements of manifest rationality makes it obligatory

that if I wish to persuade you of the truth or acceptability of some thesis-statement and

wish to do so within the dictates of rationality, recognizing your rationality, then I must use reasons’ (p. 165). Distinguishing between the logical (‘premise-conclusion structure,’ p. 160) and the dialectical concerns in the analysis of rational persuasion is an important

move, but it leaves the essential tension unresolved. As we shall see there are standard

objections to truth and acceptance as adequate to the normative assessment of argument.

And the objections apply, although with different emphasis, to both the illative core and

the dialectical tier.

In what follows I will revive some very basic concerns, look at the foundation of

the theory of truth in classical and modern logic, and then move on to a recent attempt to bypass truth and characterize acceptability in terms of presumption. Finally, I will briefly

indicate my own attempt to grapple with truth through applied epistemology, that is, the

analysis of the norms as used in successful inquiry.

I. Issues of Truth and Acceptability.

The tension between acceptability and truth reflects the deeper tension between

logic, traditionally construed as engaged with truth or at least truth assignments, and the rhetorical tradition, primarily engaged with persuasion. Attempts to resolve this tension with notions of rational persuasion or ideal interlocutors abound in the literature, but remain unconvincing, for a fundamental argument makes all such rhetorical strategies

questionable from the point of view of an epistemology that sees truth as an ideal. And by

‘truth’ I mean a family of notions, what in other contexts I have called 'truthlikeness'

(Weinstein, 2002), that is, true enough for an epistemic purpose, reflecting a range of ‘doxastic attitudes’ (Pinto, 2002), but still retaining an objective core that distinguishes it from acceptance. As in most deep problems in philosophy, the underlying issue is both

clear and well known in a variety of contexts. It is a version of the ‘open-question.’ For,

unless we use a mathematical notion that permits likelihoods to be calculated as in

Bayesian approaches, and no matter how construed or qualified, acceptability still

remains vulnerable to the question: it is (e.g., rationally) acceptable, but is it true (likely enough, etc.)?

There are also fairly standard objections to truth (Johnson cites Hamblin, pp. 182-

4 and Pinto, pp. 278-9). The problem is that truths not known, believed, or accepted by interlocutors are unavailable as tools of argument, hence impotent. This explains the move to ideal types, for obviously what is needed is premises that are accepted and true

and conclusions that warrant acceptance because of some heir to the classic notion of

truth preservation. That is, supported by the premises at an appropriate level of epistemic

adequacy, given the epistemic virtue of the premises and the power of the argumentative

connection (the strength of the warrant in my view, Weinstein, forthcoming).

There is a similar argument in respect of epistemological variants. For no matter

how strong the belief or evidence, or how central the commitment, the open question still applies. This even applies to supposedly a priori truths. The irrationality of the square root of 2, the availability of non-Euclidean geometry, even the equation between truth and provability in arithmetic are all well-known instances where a consensus about a priori intuitions proved inadequate. This is not to mention even more obvious issues such as the common confusion of the conditional with the biconditional or the counter-intuitiveness of elementary theorems of logic, such as ‘(A implies B) implies (A implies (C implies B))’ for an arbitrary sentence ‘C.’ Empirical truths fare no better, for ‘all evidence points to p’

and ‘p is false’ is all too evident from the history of science and common affairs, whence

the open question: ‘all evidence points to p, but is p true?’ is still in force. Similarly for ‘p

is most reasonable’ and other variants. Even given all of the evidence and carefully

orchestrated procedures as in jury trials, the open question applies. Only the convention

of trial by jury forces the equation of the best evidence and the truth. For any verdict, no

matter how binding or irrevocable, the open question can be, and often most tellingly is,

raised.
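
The elementary theorem cited above can, for instance, be checked mechanically; a minimal sketch in Python, enumerating every truth assignment, confirms that ‘(A implies B) implies (A implies (C implies B))’ is a tautology, however counter-intuitive it may at first appear:

from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Check the formula on all eight assignments to A, B, C.
print(all(
    implies(implies(a, b), implies(a, implies(c, b)))
    for a, b, c in product([True, False], repeat=3)
))  # prints True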

But of course, truth is not to be identified with being known as true (or any epistemic variant). For an equally familiar, if more nuanced, insight cripples the ‘metaphysical’ notion of truth, that is, truth in itself, for any notion of truth, no matter how construed, gets us no further than an ‘epistemological’ notion of truth, that is, truth as known as true. The traditional association of the true with the knowable, as in Plato, merely obscures the

difference. For even if the true is what can be truly known, at no point can we ascertain

whether any purported knowledge claim is truly knowledge, in that it is true. Since ‘p is

known to be true’ implies ‘p is true,’ there is no sense in claiming to know ‘p’ unless we

already know ‘p is true.’

This antinomy of sorts leads to two equally well known solutions: the search for an epistemological foundation where truth is apparent to the noetic faculty, whether a priori or perceptual, or, alternatively, as in Peirce’s view, ‘the ideal limit to which endless

investigations would tend’ (Hartshorne & Weiss, 1960, 5.565). Logicians and traditional

epistemologists have been seduced by the first. The new rhetoric can hope for no more

than the latter, for all model communities of interlocutors are merely communities of

interlocutors, and no matter how constituted there is good reason to believe that what

they think is tentative at best. Harvey Siegel is correct in regarding fallibilism as the most

responsible epistemological posture (Siegel, 1987). He is also right about the weakness of

relativism and his variant of Plato’s argument in the Protagoras is freely available against

all sorts of contextualist and communitarian visions, or so it seems. A problem this deep

requires a rehearsal of fundamentals.

Truth has been analyzed in three sorts of ways. The first version of

foundationalism, based on the primacy of mathematics in the thinking of philosophers as

in Plato, can be generalized as coherence. That is, claims to truth must withstand a logical test at a very high level of rigor. The paradigm for that was first geometry and, in the twentieth century, arithmetic. The second version, based on the primacy of sense perception and the apparent availability of the world as it appears, can be generalized as correspondence, that is, truth seen as a close relation between our beliefs and the world as it is available to us.

Correspondence was given additional appeal when in modern logic the availability of

antecedently known truths of arithmetic set the standard of truth in logical systems, much to the confusion of these rather distinct accounts of correspondence. As we will indicate below, correspondence in Tarskian constructions is not readily available to empiricist

foundationalism. The third view is based on the practical effectiveness of trial and error

reasoning and our ability to use both sense and reason to accommodate our purposes.

This is characterized as a pragmatic notion of truth. These are all deeply compelling

images and are often dialectically used against each other. But of course, the very

durability of the discussion points to the insight that each includes, for our concept of

reason requires that the truth be coherent. That is, not inconsistent, or at least not intractably inconsistent, and connected in the sense of being supportive of inference. We don’t

want inconsistencies to cripple our ability to distinguish between good and bad inferences

(as in classical logic) and we want to be able to move from some of our assertions to

others with warrant. But equally powerful is the intuition that we want our assertions to

reflect the world as it is, for it is only by getting things right that we can assure success

over time—that is at least as long as the material conditions that support our descriptions

hold still. And so we search for ‘underlying regularities’ as the true picture of things.

This has caused enormous problems for logic since the search for a world to

confirm our thinking falls apart fairly quickly once we exhaust sentences of ‘the cat is on

the mat’ variety. This requires more argument, and I will return to it shortly, but what should be obvious is that understanding the world in any deep sense requires complex

categories and webs of connections. This is already obvious in Aristotle, as we shall see.

But the original impulse to read principles, causal and other functional relationships

directly from experience falls apart in the face of scientific advance. But scientific

theories are not the only place where a world is described and causal and other sorts of

connections made. This raises the question of the relevance of ‘the cat is on the mat’

examples to a theory of truth. That is, can truth and acceptability be based on

perceptions and conceptions so evident to common sense that, except for deviant cases,

their presumption is clear?

Wilfrid Sellars (1963) distinguishes between the manifest and the scientific image. The manifest image is the way the world appears to be to ordinary perception and in the conduct of ordinary affairs. The scientific image is the world reconstructed through theory and rigorous methodology, whether mathematical or, more pertinently for our later

discussion, the natural sciences. Part of Sellars’ project was to block connecting the two

with some sort of reduction of science to sensory input, without which instrumentalism

and positivism are pointless. There is no point in reducing your theory to data if the

construction of the data is as questionable as your theory. Real science is a careful

balancing, sensitive to cases, of data and theory (Cartwright, 1983).

Since Sellars, the realization that sensory input is structured by conceptual

presuppositions, the stuff of cognitive psychology (Gardner, 1987), reinforced by knowledge of physiology and neurophysiology and its effect on perception and cognition (Yantis, 2001; Rao et al., 2002), militates against the sense of immediacy that

characterizes common sense empiricism. The realization that the world is known from

inference, that is, by drawing internal and external inferential connections, by building

causal and other explanatory models with an empirical base and predictive consequences,

leads us to the odd situation that we cannot compare our utterances to the world since the

world, as often as not, appears to us through our utterances. That is, as the result of

reasoned inquiry. The model of utterances on the one hand and the world on the other is

attractive only when we have independent access to the world, as in the perception of

obvious properties of medium-sized objects and, of course, in arithmetic. And we do seem, even in the scientific image, to have access to medium-sized objects. The existence of the complex neurological systems that give us perceptions and cognitions points to the fact, as Peirce saw, of their general effectiveness, at least (as evolutionary biologists would

insist) if the demands made on the system are continuous with those that conditioned our

persistence as a species. That is, the ability to deal with the world of ordinary objects in

generally effective ways.

Both the scientific image and the manifest image may support claims to be true.

The manifest image comports with our ordinary understanding and although truth is often

hard to find, the truth for all practical purposes, or better, for some practical purpose, is

available for successful engagement with the affairs of daily life. This is evidence of

enormous practical utility. A common sense theory of truth seems satisfactory as a

pragmatic theory. The scientific image, on the other hand, reflects the technical progress that is constitutive of human experience in the 20th century and, more important, offers an

image of connectedness and integration that is unparalleled in the history of human

affairs. Mathematics aside, the understanding of the physical world, particularly on the

molecular and atomic levels as applied to the physical and biological domains, offers detailed and coherent explanations of the broadest range of phenomena using a clearly definable set of concepts, tried in practice and refined as theory advances. The keystone

of the edifice, the Periodic Table of Elements, supports an increasingly well-articulated

and well-understood body of information that not only supports indefinitely many

practical enterprises but can be presented free of significant paradoxes, until we move to

the subatomic level. This is an extraordinary achievement, and points to the adequacy of

the enterprise as a harbinger of truth. It is clearly adequate on pragmatic grounds and the

standards of rigor for conceptual development and inference are among the highest

available to human beings engaged in knowledge a posteriori. But the problem of

correspondence seems intractable, since science must wait upon its own discoveries to

have a picture of the world against which its claims are to be ultimately judged. This calls

for a radical revision of the metamathematics of truth.

II. Logic, Inquiry and Truth

The notions of truth underlying the two giant contributions in the history of logic--

that of Aristotle, and that of the logicians preoccupied with the foundations of

mathematics in the early twentieth century-- show deep theoretical and even

metaphysical assumptions that make them suspect as the underlying theory of a logic

adequate to support the theory of argument as currently construed. That is, argument

seen as the rational core of ordinary and specialized discourse of the widest variety of

sorts. Such a theory of argument with a clear empirical and practical component cannot

assume the usefulness of underlying images of logic drawn from rather different

conceptions of how reason manifests itself in discourse.

As should be apparent, the notion of truth available from the study of mathematics

within standard formal languages suffers from its inapplicability to situations that do not

share three essential aspects of mathematics as logically represented: clarity of model

relations captured in exhaustive and exclusive extensional definitions, the identification

of truth as satisfaction within an available model (e.g. arithmetic), and logical necessity

as truth in all possible models. Without substantive argument to the contrary, it seems
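
In the notation of standard model theory, the last two aspects read, roughly,

\[
\mathcal{M} \models \varphi \quad \text{(truth as satisfaction in a given model } \mathcal{M}\text{)}, \qquad \models \varphi \iff \mathcal{M} \models \varphi \text{ for every model } \mathcal{M} \quad \text{(necessity as truth in all models)}.
\]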

obvious that the concepts used in practical argument are not readily reconstructed to fit

the mathematical ideal without significant distortion. Many concepts are not definable in

extensional terms, criteria for membership are not explicitly defined, there are overlaps

and ‘bushes’ in the tree structures that represent conceptual relations and membership

frequently cannot be decided in any effective way. An intended model is rarely available

antecedent to the inquiry at hand and necessity is frequently limited to a range of models,

as in physical necessity defined in terms of physical constraints on logically possible

models. But it is not merely the rigor of the mathematical ideal that renders it misleading.

Mathematical logic construes truth as a univocal property of statements. This obscures

the complexity with which truth functions in extra-mathematical contexts. This is clear in

Toulmin (1969). Argumentative support requires more than the truth of premises and

abstract rules of inference. Among the premises we need to distinguish the grounds (a

relevant basis in fact or other information to support the claim at the appropriate level of

abstraction) from the warrants (statements or rules sufficiently general in respect of the

ground and claim that support the inference). And most crucially, the backing, a context

of interpretation and understanding that sets standards for rigor, relations among claims,

grounds and warrants, and a domain of primary application which determines modality

commensurate with the strength of the warrants.
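
A schematic sketch, using merely illustrative Python field names and Toulmin’s familiar example of Harry’s citizenship, shows how these components hang together:

from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                    # the conclusion argued for
    grounds: list[str]            # relevant facts or information supporting the claim
    warrant: str                  # general statement licensing the step from grounds to claim
    backing: str                  # context of interpretation that sets standards for the warrant
    modality: str = "presumably"  # qualifier commensurate with the strength of the warrant
    rebuttals: list[str] = field(default_factory=list)  # recognized conditions of exception

harry = ToulminArgument(
    claim="Harry is a British subject",
    grounds=["Harry was born in Bermuda"],
    warrant="A man born in Bermuda will generally be a British subject",
    backing="British statutes and legal provisions governing nationality",
    rebuttals=["Both his parents were aliens", "He has become a naturalised American"],
)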

If we accept this much, it seems arguable that the mathematical logicians’ notion

of truth is irrelevant to most situations that require human beings to determine the facts of

the matter. Traditionally, this has been construed as the distinction between deductive and

inductive logic. But, given the central role of deduction in science and other empirical

procedures (Magnani, 2001), that line must be drawn elsewhere. But where? Without the

anchor of a priori necessity exemplified by the well-understood domain of arithmetic,

from where will we draw an alternative normative paradigm? Toulmin’s concern with

human understanding points away from the mathematical paradigm to the broader

concerns of inquiry and especially to science. But short of a disciplinary based relativism

of the sort that Siegel (1987) has shown to be inadequate for normative understanding,

the disciplines require the same sort of foundational grounding that was traditionally

sought in the foundations of arithmetic.

The traditional solution, both within Greek thought and in the modern era, is to

apply logic to the world as if it were a modified mathematical structure. This is most

obvious in the standard Boolean interpretation of syllogism, which identifies syllogistic

propositions with set-theoretical ones. But that is hopeless unless the predicates and

quantifiers retain vestiges of mathematical clarity, for example, for quantifiers a specified

domain and for predicates exhaustive and exclusive (ideally, extensional) definitions that

permit clear hierarchies of inclusion to be seen. And there is no reason to think ordinary

affairs or even much of science is amenable to such mathematical treatment. I see the problem most clearly in Aristotle, for it seems to me that his ideal was just that inquiry satisfy

logical demands more appropriate to mathematics.

John Herman Randall, in his classic exposition of Aristotle, offers a complex view of the relationship between truth, logic and inquiry (Randall, 1960). The to dioti--

the why of things, connects apparent truths, the peri ho, with explanatory frameworks,

through the archai of demonstration, that serve as ta prota, the first things-- a true

foundation for apparent truths. Although Aristotle was more 'postmodern' than many of

those that work in his tradition- the archai, after all, were subject matter specific- the

envisioning of archai as readily knowable, if not known, reflected a classic and

overarching optimism about knowledge. This enabled Aristotle to graft a determinate

logic onto the various indeterminacies inherent in much of inquiry.

As Randall puts it, '’Science’ episteme, is systematized ‘formalized’ reasoning; it

is demonstration, apodeixis, from archai ... [it] operates through language, logos; through

using language, logismos, in a certain connected fashion, through syllogismos' (p. 46).

Syllogismos points back to the basic constraint on nous that it see beyond the accidental

and the particular, that it deal with the essential, the ti esti, and so syllogism deals with

what all of a kind have in common.

Syllogistic reasoning within episteme deduces the particular from what all

particulars of the kind have in common, and in dialectic looks at the proposed archai or

endoxa through the strongest possible lens- counterexamples as understood in the

traditional sense of strict contradictories, systematized, then canonized as the square of

opposition.

The focus on episteme, on theoria places the bar high for those who would

propose archai. The 'inductive' epistemology of concept formation along with the noetic

interpretation of their apperception presupposes that human beings can know reality with

an immediacy that seems silly given the course of scientific discovery over the past

several centuries. Too much conceptual water has gone under the bridge to think that

concepts are to be seen clearly within percepts. Rather, the conceptual frameworks that

human beings have elaborated, modified and discarded have been multifarious and

extend far beyond the imaginative capabilities of Aristotelian views that take the

perceptually presented as representative of underlying realities. Once the enormous

difficulty of the task of finding the conceptual apparatus that will support a true picture of

reality is realized, Aristotle's demand that concepts hold true without exception becomes

a serious drag on inquiry. Yet it still prevails, built into the very meaning of logic as

used.

The magnificent achievement of Russell and Tarski offered a model for

understanding logical inference and a structure open to almost indefinite elaboration- quantification theory- that, congruent with much of syllogism, offered a

clarity of understanding that surpassed anything dreamt of by centuries of logicians. The

Aristotelian core remained, now rethought in terms of extensional interpretations of

function symbols that offered a new grounding for the all-or-nothing account of argument

built into the square of opposition. The Boolean interpretation of Aristotle's quantifiers

retained the high demand that universal claims are to be rejected in light of a single

counter-instance, as did the modern semantics of models within which a natural theory of

truth was to be found. Mathematizing the clear intuition of correspondence, Tarski's

theory of truth gives the stability needed to yield vast areas of mathematics and even

offered some precious, but few, axiomatizations of physical theory. The price was that

the truth was relativized to models, yet there was no reason to think that any of the

models in use in science were true. This remark requires clarification.

Since the optimistic days in Greece when the theories of inquiry could draw upon

few real examples, the claim that archai are "noused" from particulars with ease seems a

historical curiosity, irrelevant to human inquiry. For the history of human inquiry in the

sciences showed that the identification of archai is no easy thing. Centuries of scientific

advance have shown the utility of all sorts of truish or even downright false models of

phenomena. Concepts, and the laws, generalizations and principles that cashed them out

into claims, have shown themselves to be mere approximations to a receding reality. As

complex connections among concepts, and underlying explanatory frames, have

characterized successful inquiry, truth in any absolute sense becomes less of an issue.

The issue is, rather, likelihoods, theoretic fecundity, interesting plausibility, etc. The

operational concepts behind these- confirmation and disconfirmation- in the once

standard philosophical reading (Hempel and the rest), retained the absolutist core that

Aristotelian logic exemplifies, amplified by quantification theory. Even Popper saw

falsification as instance disconfirmation.

Much work since then has offered a more textured view; I think here of Lakatos

(1970) and Laudan (1977). Students of science no longer see the choice as between

deductivism- as standardly construed as an account for scientific explanation- and some

Feyerabendian alogical procedure that disregards truth (Feyerabend, 1975). Students of

science see, rather, a more nuanced relation between theory building and modification.

Argument theorists and informal logicians should be thrilled at this result, for it opens the door for what they do best: the analysis of complex and powerful arguments. But, in general, they have not walked through that door. Rather, the concern for so-called ordinary argumentation as the area of interest, coupled with the professional commitment to undergraduate instruction by informal logicians, has kept most students of the subject clearly focused on truth in the most obvious and ordinary sense. That is, truth as conformity with the facts of the matter as they are manifest in ordinary experience and analyzed in standard ways. ‘The cat is on the mat’ is true if there is a cat on the mat. This presupposes two things that seem not worth questioning if logic is a practical pursuit. The first is that we can readily ascertain whether the cat is on the mat; the second is that the predicates and relations in the sentence are adequate conceptual tools for supporting plausible inferences based on the truth of the sentence. So, since cats are felines, there is a feline on the mat, etc. The plausibility of such a common-sense stand has rarely been argued for;

rather, taken as obvious.

III. Commonsense Foundationalism

James Freeman in Acceptable Premises (2005) reflects what can easily be seen as

the founding intuition of informal logic. That is, that a normative account of argument

can be developed focusing on the realm in which the overwhelming majority of

arguments occur, that is, non-specialized contexts in ordinary life, and using no more

technical apparatus than is readily available to an educated person. It is this core that

supports its vaunted utility as the basis for critical thinking and other good things.

Freeman draws heavily upon this tradition, citing Thomas Reid more than any other single author. Like Peirce and Plantinga, Freeman sees the efficiency of

epistemologically relevant mental functions as based on a naturalist account of their

necessity for successful human functioning (planning, ordinary problem solving and the

like). But genetic speculations aside, the essential nature of our faculties, reasoning,

sense, memory and the like supports Freeman’s acceptance of what he calls

‘commonsense foundationalism,’ which he sees as furnishing the rejection of ‘skepticism’

(Freeman, 2002, pp. 367ff.).

Freeman combines a logical concept ‘presumption,’ familiar in discussions of

premise acceptability, with a concept he gets from Plantinga, 'belief generating

mechanisms.’ This gives him his analysis, stated boldly: a statement is acceptable as a

premise iff there is a presumption in its favor (p. 20). And it has presumption in its favor

when it is the result of a suitable belief-generating mechanism, with appropriate hedges

about challenges, malfunctions and utility (p. 42ff). “We shall be arguing that the

principles of presumption connect beliefs with the sources that generate those beliefs.

‘Consider the source’ could be our motto for determining presumption” (p. 44).
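
Stated schematically (a rough sketch in Python, with merely illustrative names rather than Freeman’s own formalism), the analysis has this shape:

from dataclasses import dataclass

@dataclass
class BeliefSource:
    name: str        # e.g. 'a priori intuition', 'perception', 'memory', 'testimony'
    reliable: bool   # the belief-generating mechanism is suitable and functioning properly

def has_presumption(statement: str, source: BeliefSource, challenged: bool = False) -> bool:
    # 'Consider the source': presumption attaches when the statement issues from a
    # suitable, properly functioning mechanism and no undefeated challenge is in play.
    return source.reliable and not challenged

def acceptable_as_premise(statement: str, source: BeliefSource, challenged: bool = False) -> bool:
    # The analysis stated boldly (p. 20): acceptable as a premise iff there is a
    # presumption in its favor.
    return has_presumption(statement, source, challenged)

print(acceptable_as_premise("The cat is on the mat", BeliefSource("perception", True)))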

Belief-generating mechanisms are of a variety of sorts. These psycho/social

constructs are presented in what might be seen as a philosophical anthropology, that is, a

theory of persons seen in their most obvious light. Belief-generating mechanisms need to

be adequate to the four-fold analysis of statements: analytic, descriptive, interpretative

and evaluative (pp. 97ff); and they need to engage with three sorts of beliefs: basic,

inferred and received (p. 109). Descriptions, for example, rely on the belief-generating

mechanisms of perception, which include perception of qualities, natural and learned signs, introspection, and memory (pp. 124ff). Perceptions are of three sorts: physical,

personal and institutional. Institutional perceptions are presented on the model of

“learned constitutive rules” (p. 136). This last is crucial for the modern condition: once

mastered, systems of cognitive organization are manifested through mediated perception

and enormously increase the range and relevance of sense perceptions, natural signs, and

classifications. How far the notion of constitutive rule takes us into this broad and

fascinating realm remains to be seen.

Whatever concerns are to be raised, however, we have to grant Freeman’s main

thesis. That is, we can account for many of our acceptable premises by virtue of their

genesis. For if, as seems obvious upon reflection, we argue often and argue well on

countless occasions, it should come as no surprise that the various mechanisms by which

we come to our premises can be articulated in defensible ways. We should grant

Freeman’s point immediately. There are mental (and social) structures of many sorts that

are reliable as the basis for judgments ranging, as Freeman sees, from the logical to the

evaluative and including essentially perceptual judgments and modest generalizations

based on memory and other aspects of common sense. And of course, judgments that rely

on the testimony or expertise of others. All of the kinds of belief generators have clear

instances with presumptive status in contexts that permit easy resolution. As Freeman

shows by examples, there are contexts for each one of them that yield acceptable

premises.

The key to the adequacy of belief generating mechanisms is that they are reliable.

We can begin our discussion of Freeman by immediately conceding that if the target is

radical skepticism, Freeman has won the day. We just accept as obvious that we argue

from acceptable premises all of the time, because in whatever relevant sense of

mechanism, there are things about us and about how we operate epistemically, that, for

all practical and many theoretical purposes, work just fine in innumerable instances.

The issue becomes interesting for me when there are questions to be asked. Although I

will look at the three most basic belief generating mechanisms, a priori intuition,

individual reports (based on sense perception) and memories, the challenges I will raise

will be readily seen to apply even more severely to the more complex ‘mechanisms’

including institutional intuitions and other intuitions that support causal and other general

claims.

Freeman asserts ‘some premises are straightforwardly acceptable as basic

premises without argument… However suppose one is faced with a ‘hard’ case… Here

the requirement is to justify the judgment that a particular premise is or is not acceptable

as a basic premise…we call making such a determination an exercise in epistemic

casuistry’ (p. 319). For a priori intuition, Freeman requires that it certify a basic premise

as both true and necessarily true (p. 323). The issue as Freeman sees it requires a

challenge, that is, ‘unless a challenger is aware of improper functioning, the presumption for the reliability of her faculty of a priori intuition remains as does the presumption for the statements for which it vouches’ (ibid.). The decision is made more complex because of the possibility of ‘pragmatic consideration,’ that is, ‘that the cost of accepting the claim if mistaken is higher than the expected cost of gaining further evidence’ (ibid. and

elsewhere). This caveat is included in all of the discussions of belief generating

mechanisms, but will be sidestepped here.

Freeman’s account of a priori intuition, like his other forms of belief generating

mechanisms, requires that it not be malfunctioning. With sensory intuition this is more readily fleshed out. A priori intuition is another thing entirely. When does an a priori

intuition malfunction? Is this the same as logical error? But then identifying

malfunctioning intuitions depends upon a prior commitment to logical adequacy. This is

of course what we have available to us; we call it ‘logic.’ And although in dispute in areas,

the basic outline is available in logical theory. But one does not comprehend logic theory

by intuition alone. One must understand logic, that is, the correctness of the intuition is a

function of the informed intuitions of logicians and others who study the field. This of

course is a far cry from the various native abilities that permit the job to get off the ground. And where is this basis? Students of logic who fail to get modus ponens may certainly be seen to have a failure of logical intuition, but what of students who are skeptical of the various complex logically true statements typified by tautologies such as ‘if A then (if B then A).’ The idea of a ‘malady’ of the a priori intuition presumably could be exemplified

by a variety of examples, some rather simple and others quite deep. Simple cases include

students who tend to confuse conditionals with biconditionals, or better, someone who

fails the Wason test, that is, not taking into account FT instances of conditionals when

checking all cases.
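
The point of the FT case is easily made concrete; a minimal sketch in Python tabulates the material conditional beside the biconditional, and the only row on which they come apart is the one with a false antecedent and a true consequent:

def conditional(p: bool, q: bool) -> bool:
    return (not p) or q

def biconditional(p: bool, q: bool) -> bool:
    return p == q

for p in (True, False):
    for q in (True, False):
        print(p, q, conditional(p, q), biconditional(p, q))
# Only the row p=False, q=True separates the two: the conditional is true there and the
# biconditional false, which is just the case overlooked in the confusions noted above.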

These are telling examples; many sorts of people fail at identifying the a priori

status of such items on many sorts of occasions, including those trained in logic. It

frequently has to be clearly explained even for people with experience in the field. So

clearly, it is not the intuitive nature of the underlying logic that is at stake. Rather, and

obviously, it is the underlying coherence of the theoretic understanding of logic that

marks the error. The acceptability flows not from its genesis in a priori intuition but from

its genesis (and constant reconstruction) in logical theory. And these are quite different

things, for the status of the latter is, as in all theory, provisional and open to the advance of inquiry, and so its birthright as an intuition is at best the beginning of the story.

Although we may ultimately rely on something like a priori intuition, it is

deployed in conjunction with an apparatus (the 'institution' of first order logic), which in

this case is fairly clear, and which includes a meta-theory that permits the deepest intuitions, both obvious and surprising, to be expressed, discovered and abandoned. The problems of completeness of sub-theories of first order logic, the equivalence of

alternative systems of proof, not to mention real problems for intuition such as Russell’s

paradoxes, Lowenheim-Skolem and Gödel incompleteness all play havoc with our

intuitions, and logic is richer for the havoc they play. For all of these test intuition by the complex constructions of logical inquiry, which, even if ultimately ‘intuitive’ to those in the know, remain far from the attempts to grapple with logical inference that we find in students and in the everyday application of even propositional arguments. This is seen in

a priori sciences other than logic as well. The notorious problem of the square root of 2,

the well known story of Hobbes and his rejection of a counter-intuitive theorem in

Geometry, Cantor’s problem and many others all point to the fundamental irrelevance of

strong intuitions in the face of theoretic advance. That is not to deny that there are necessary a priori intuitions (failure to get modus ponens stops logic in its tracks); it is to say that which those are remains unavailable until the advance of inquiry, which, while using these very intuitions, sees them as defeasible as inquiry progresses. This does not

result in a challenge to the notion of a priori intuition, but rather makes any one of them

suspect. Such fallibilism is generally healthy, but it precludes the sort of generative story

that Freeman tells from being more than the beginning of the story. For me the story gets

interesting when we start to talk about revision of our intuition. That is, when we engage

with inference. Freeman draws the line in roughly the place I do and sees inference as

another issue. But my point is that being acceptable as a premise ultimately relies on

inference, although all inferences do start with some putatively acceptable premises. And so for me the epistemological interest of presumption is not when it succeeds but when it fails.

This can be seen easily in his next class of statement kinds, descriptions. Freeman construes these as sense perceptual but sees their scope as extending to the

identification of summary and even non-projective generalizations (pp. 126-7 and pp.

345-6). Freeman offers a similar account here as well. He begins by asserting a

presumption for first-person reports of perceptions unless the challenger is ‘aware of

evidence that her perceptual mechanism is not functioning properly or that the

environment in which this perception is occurring is anomalous’ (p. 326). Again, if his point is that there are perceptions that have presumption, he is correct. But how far does this

point take us? Again, we look at the most basic case. Visual perception is both highly

reliable and a mechanism in the clearest possible sense. The physiology of sight is well

understood including the neurological basis in the brain. And so malfunction is easily

diagnosed and accounted for in terms of the mechanism. That, of course, is not what

Freeman has in mind. Rather it is the functioning of vision that is the ‘mechanism’ he is

interested in. We know we are malfunctioning when we have issues, and when we have

issues we go to the eye doctor. Short of very simple tests, response of the retina to light,

eye charts and the like, the identification and remediation of a visual malfunction is a

complex combination of phenomenology (what you say is taken seriously), long experience with coherent symptoms, and focused and frequently efficient choices of test sequences, as when the doctor changes lenses back and forth asking, ‘Which is clearer, this or this?’ But all of this, even the eye charts, relies not on the quality of the visual intuition of the patient, but on this intuition in combination with long experience, codified in ‘institutional’ (professional) practice, the technology that supports the examination, and an underlying understanding of how deformities in the visual mechanism are to be compensated for by choice of lens shape. And so again, whatever the presumption, that,

for example, a first person report is correct, it is the interaction with a mode of inquiry

that settles the case. Having seen well in the past is no argument against needing glasses,

although it is a sufficiently reliable index of function that new patients frequently

complain when confronted with the need to remediate. The same is clearly true for all

sensory reports. We don’t have to have a history of auditory delusions for our reports to be

delusional. The reports just have to be sincere and out of sync with the understanding of

others. The distinction between perceptions and, for example, dreams is not vividness but

continuity and coherence. Eyewitness testimony relies on corroboration not eye tests.

Another major source of beliefs is memory, and of course Freeman is correct that we remember all sorts of things and rely upon them extensively: ‘Memory, as long as what is

remembered is distinct and not vague, again is a presumptively reliable belief-generating

mechanism’ (p. 329). But when we look at memory we notice first that much of it is

dispositional in the sense of knowing how, and so the issue of functioning is clearly tied

to performance rather than some internal vividness or other phenomenological marker (p.

141). For propositional memory it would be, perhaps, rude to question someone’s vivid

memory of events etc. except when they prove incoherent with another narrative. But

politeness aside and looking at the phenomena in general, we now know that whether or

not accompanied by phenomenological states that support conviction, even within the

agent, memories are tied to coherent networks of other memories, peculiarly connected,

with all sorts of other affective and classificatory bundles in the mechanism that supports

them, the knowing brain. This is manifest in behavior in well-known ways and accounts

for memory bias of all sorts. Memories that enter into public narrative are even more

fraught with difficulty as all sorts of biasing choices of centrality and focus distort

memories in ways that are well known within cognitive psychology. This alters the

perspective on what makes memory reliable. To ask if someone remembers (except in the

context of first-person interviews that do no more than report opinions) is to engage with

an inquiry into the memories’ surround. Whether internally in terms of introspective

narratives or more importantly externally in reports of first-person experiences for the

purpose of offering useful information, our acceptable memories are those that can serve

as premises because of their coherence with other things we remember, which in turn are

judged by their coherence, and so on.

I won’t go through Freeman’s other belief mechanisms, for I believe my point to be made. The remaining belief generating mechanisms all have to do with generalities, including ‘subjunctives’ that support counterfactuals, whether empirical or ‘institutional,’

that is, codified by experts in light of the best evidence and firmest opinions (pp. 171ff.

and pp.347ff). I leave it to the reader to provide examples of similar complaints to those

just raised, which I believe to be all too available in the history of science and in common

affairs. Generalities of whatever sort rely on their persistence as inquiry advances; the founding intuition rarely even affords a clue as to their reliability. Even so truncated a

discussion gives us a clue as to another way of looking at things, moving from the

genesis of a belief, to how it fares when scrutinized in light of various doxastic ends.

The problems with Freeman’s account offer an interesting insight into the

relationship between the manifest and scientific images. For to check whether a claim in

the manifest image is well grounded in the belief generating mechanism we turn, when

we can, to the scientific image. But in light of the problems identified, how can we define

truth in the scientific image in a manner that satisfies the need for correspondence?

IV. Truth in the Scientific Image.

If you ask a sane, moderately informed person what the world is really made of in just the general sense that the Greeks might have asked, the answer is something like

"molecules and atoms." Let's start there. At the core of modern science stands the

Periodic Table. I take as an assumption that if anything is worth considering true of all of

the panoply of modern understanding of the physical world it is that. But why? And what

will we learn by changing the paradigm?

The Periodic Table stands at the center of an amazingly complex joining of

theories at levels of analysis from the most ordinary chemical formula in application to

industrial needs, to the most recondite-- particle physics. The range of these ordinary

things-- electrical appliances to bridges, has been interpreted in sequences of models,

developed over time, each of these responding to a particular need or area of scientific

research. Examples are no more than a listing of scientific understanding of various sorts:

the understanding of dyes that prompted organic chemistry in Germany in the late 19th

century; the smelting of metals and the improvement of metal kinds, e.g. steel; the work

of Faraday in early electric theory; the development of the transistor and the exploration

of semi-conductors. This multitude of specific projects, all linked empirically to clear

operational concepts, has been unified around two massive theoretic complexes: particle

physics and electromagnetic wave theory. The deep work in science is to unify theories.

The mundane work in science is to clarify and extend each of the various applications

and clarify and modify existing empirical laws, and this in two fashions. First, by

offering better interpretations of empirical and practical understanding as the underlying

theories of their structure become clearer. Second, by strengthening connections

between underlying theories so as to move towards a more coherent and comprehensive

image of physical reality, as underlying theories are modified and changed. On my

reading of physical chemistry the Periodic Table is the lynchpin, in that it gives us, back

to Aristotle again, the basic physical kinds.

We need a theory of truth that will support this. And, surprisingly perhaps, I think

the image is just what current argumentation theorists need as well. Since argument is

not frozen logical relations but interactive and ongoing, we need a logic that supports

dialectical advance. That is, we need a dynamics of change rather than a statics of proof.

We need to see how we reason across different families of considerations, different lines

of argument, that add plausibility, and affect likelihoods. Arguments are structured arrays

of reasons brought forward; that is, argument pervades across an indefinite range of

claims and counter-claims. These claims are complex and weigh differently as

considerations, depending on how the argument moves. So we need a notion of truth that

connects bundles of concerns-- lines of argument, and to different degrees.

Quantification theory was developed in order to solve deep problems in the

foundations of mathematics. And the standard interpretation of mathematics in arithmetic

models proved to be a snare. What was provable is that any theory that had a model, had

one in the integers, and models in arithmetic became the source for the deepest work in

quantification theory. But the naturalness, even ubiquity, of a particular model kind did not alter the fact that truth in a model could only be identified with truth when a model of ontological significance was preferred. This is the lesson of the Lowenheim-Skolem Paradox. The real numbers, if consistent, have a model in the natural numbers, but given

Cantor, that model cannot possibly be the intended model for the real numbers (Putnam,

1983). Being true in a model is an essential concept. Without it we have no logic. But

the identification of truth in a model with truth just reflects the metaphysical and

epistemological biases of the tradition in the light of the univocal character of

mathematics as it was understood then. If I am right, it is not truth in a model that is the

central issue for truth, but rather the choice of models that represent realities. And this

cannot be identified with truth in a model for it requires that models be compared.
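
The theorem behind the remark can be stated roughly as follows: the downward Löwenheim-Skolem theorem guarantees that

\[
\text{if a countable first-order theory } T \text{ has a model, then } T \text{ has a model } \mathcal{M} \text{ with } |\mathcal{M}| \leq \aleph_0,
\]

while Cantor’s theorem gives \( |\mathbb{R}| = 2^{\aleph_0} > \aleph_0 \), so no countable model of a first-order theory of the reals can be the intended one.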

To look at it another way, if we replace mathematics with science as the central

paradigm from which a logical theory of truth is to be drawn, the identification of truth

with truth in a model is severed. For there is no model in which scientific theories are

proved true. Rather science shows interlocking models connected in weird and

wonderful ways. The reduction rules between theories are enormously difficult to find

and invariably include all sorts of assumptions not tied to the reduced theory itself. The

classic example is the reduction of the gas laws to statistical mechanics (Nagel, 1961).

The assumption of equiprobability in regions is just silly as an assumption about real

gases, but the assumption permits inferences to be drawn that explain the behavior of

gases in a deeply mathematical way, and in a way that gets connected to the developing

atomic theory at the time, much to the advantage of theoretical understanding and

practical application.
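
The shape of that reduction is familiar from textbook treatments: treating a gas as \( N \) point molecules of mass \( m \) in random motion, with directions taken as equiprobable, the pressure on the container walls comes out as

\[
PV = \tfrac{1}{3}\, N m \langle v^{2} \rangle ,
\]

and identifying the mean kinetic energy with temperature through \( \tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k T \) recovers the empirical law \( PV = NkT \); the identification is exactly the kind of auxiliary assumption, not contained in the gas laws themselves, that makes the reduction so instructive.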

What are the lessons for the theory of truth? We need to get rid of the univocal

image of truth-- that is, truth within a model-- and replace it with the flexibility that modalities both require and support, that is, truth across models. We need the metatheoretic subtlety to give mathematical content to likelihoods and plausibilities; a

theory of the logic of argument must address the range of moves that both ordinary and

scientific discourse permits as we qualify and modify in light of countervailing

considerations.

V. A Model of Emerging Truth

In a recent publication (Weinstein 2002) I presented a precise metamathematical

construction based on the paradigm of physical chemistry that defines a notion of truth as

the result of inquiry. This requires that truth be identified with a model that emerges from

inquiry rather than one that is antecedent to it and so calls for clear criteria for model

choice. The connection I will make between logic and inquiry rests on my conjecture that

models of key logical concepts can be drawn from the structure of inquiry in physical

chemistry, seen as the prototype for disciplined inquiry that yields knowledge at the

highest level of epistemological warrant consistent with its a posteriori nature. If the

model is noetically compelling the further task is to see whether it may serve as a

metaphor for analogous images of truth and entailment in less epistemologically

demanding domains, ultimately serving as the basis for the general theory of truth in

argument and a consequent theory of entailment.

In the case envisioned, as in much of inquiry, what is required is an account of the

dynamics of propositions seen as interconnected by various relationships of support,

which both reflect and afford estimates of likelihoods, estimations of vulnerability to

challenge in light of competing positions, degree of relevance to the issues at hand as a

function of consequences across the field of commitments, and so on. Logic adequate to

inquiry must be sensitive enough to take practical and theoretic account of such a range

of considerations.

This sets the task. For participants in the discipline the level and kind of support

are more or less apparent in the claims made and challenges refuted. Students of a subject

matter acquire the sense of familiarity that supports such complex judgments as the result

of long study and assimilation into a community of argument. It is for the student of the

logic of argument to make the underlying structure of these complexes transparent both

in their functioning and in their noetic plausibility. And this transcends description. It

requires a normative account that captures what the description contributes in a noetically

transparent manner. This seems to me to require changes in the logical foundation.

An immediate problem for an emergent theory of truth is to offer an account of

stability sufficient to meet the test of non-relativism while admitting the evolutionary

nature of truth, an account of truth that has the robustness typical of the standard account,

while permitting an image of truth far different from that envisioned heretofore. In the

standard account the model, as in arithmetic, is available independent of the inquiry. If

we take physical chemistry as the paradigm, the model against which truth is to be

ascertained emerges from inquiry (Weinstein, 2002). The scientist must wait upon

science to see if his or her conjectures are true.

In contrast to the mathematical, truth based on the paradigm of mature physical

chemistry, like most of physical science, requires ambiguity in evolving model relations.

Both relations within models and relations among models permit approximations, and it

is the history of these approximations that determines the progressive nature of an inquiry.

Truth, in the final analysis, will be identified with the progressive appearance of a model

that deserves to be chosen. So both the intuitions of correspondence and coherence are

saved. The ultimate model emerges as a function of increased coherence, and it stands as an ideal object against which correspondence could be ascertained. It is, as in Peirce’s

view, ‘the ideal limit to which endless investigations would tend’ (Hartshorne & Weiss,

1960, 5.565). It is the substance of how judgments of epistemic adequacy are made

antecedent to the truth predicate being defined that is the main contribution of the

construction, the Model of Emerging Truth (MET). Finally, in place of strict implication

contrasted with induction in its various senses, MET permits degrees of necessity

reflective of the extent of model relations, that is, it permits inferences within models to

be reassessed in terms of the depth and breadth of the field of reducing theories from

which models are obtained. That is to say, MET yields a theory of entailment that permits

of degree (Weinstein, forthcoming).

MET has at its core the specification of two different sorts of functions. First,

fairly standard functions that map from a theory, construed as a coherent and explanatory

set of sentences, onto models. That is, sentences that describe events or that offer

generalizations as explanations of these events are assigned objects, relations or ordered relations in a defined domain, which constitutes what the theory may be taken to be

about. Second, a much more powerful set of functions maps from other theories onto the

theory, thereby enormously enriching the evidentiary base and furnishing a

reinterpretation now construed in relation to a broader domain. That is to say, a theory

may have its domain reinterpreted when its descriptions and explanations are seen to be

instances of some broader and more encompassing domain, as in seeing chemical

processes to be the result of molecular interactions. This is the insight that reflects the

choice of a physical science as the governing paradigm. Mature physical science

reconstrues experimental evidence, laws and theories in the light of higher-order, more

abstract, theories, which unify heretofore-independent domains of physical inquiry. These

unifications, or ‘reductions’, offer a massive reevaluation of evidentiary strength and

theoretic likelihood. What MET attempts to capture is the weight of such reconstruals in identifying the ideal domain that grounds the truth predicate. MET shows how we find out what our theories are really about: ontology in a sense that resonates with Peirce, as

alluded to above.
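To fix ideas, the two sorts of functions can be pictured schematically; the notation is mine and serves only as a reader’s gloss on the prose above, not as a reproduction of MET’s construction. The first sort is an interpretation function

\[ m_{T} : T \longrightarrow M_{T}, \]

assigning the sentences of a theory T objects, relations and ordered relations in a domain M_T. The second sort is a reduction function

\[ r : T' \longrightarrow T, \]

by which a broader theory T' is mapped onto T, so that the domain of T is re-described, and its evidentiary base enriched, from within the more encompassing domain of T' (as when chemical kinds are re-described as molecular configurations).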

Mature physical science is also characterized by the open textures of its models

and the approximations within which surrogates for deductions occur (in the standard account, idealizations and other simplifications). The construction attempts to make sense

of the need for approximations and other divergences among models at different levels of

analysis and articulation by offering intuitive criteria for assessing the epistemic function

of the approximation in light of emerging data and the theoretic surround.

The key contribution of MET is that it enables us to construe epistemic adequacy

as a function of theoretic depth and the increase of explanatory adequacy as inquiry

progresses, rather than, as in standard accounts, as conformity to pre-existing models or

predicted outcomes. This changes the logical structure of truth and entailment as

compared to the arithmetic ideal and to the positivist accounts that take, for example,

models of data as fulfilling an analogous role to models in arithmetic in the standard

account. That is, they both serve as a template against which a claim is evaluated. In the

once standard account of scientific inquiry, the model that yields confirmation is

available prior to the inquiry; the relation between theory and data is a function mapping

expectations onto outcomes. But epistemic adequacy requires something more. It requires concern with the epistemic context, that is, the body of information deemed

relevant on an occasion of argumentation. MET includes a function that maps from a

deep explanatory base onto the theories upon which expectations are based. This would

allow, among other things, choosing between alternative theories even where

expectations converge, and, with an elaborated metric, grounding the assignment of prior

probabilities and other estimations of likelihood.
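A hypothetical illustration may help; the comparison and the measures are mine, not part of the construction as stated. Suppose two theories T1 and T2 yield the same expectations over the available models of data, so that conformity to predicted outcomes cannot decide between them. The function from the deep explanatory base then licenses a principled preference,

\[ \mathrm{pred}(T_1) = \mathrm{pred}(T_2) \quad\text{and}\quad \mathrm{depth}(B_1) > \mathrm{depth}(B_2) \;\Longrightarrow\; T_1 \succ T_2, \]

and, with an elaborated metric on such differences of depth and breadth, could in principle ground the assignment of prior probabilities favoring T1.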

The basic idea can be expressed without the mathematics. What the mathematics

shows is that these notions can be given precise content and so are not to be scorned on

the grounds of vacuity.

First, the crucial empirical dimension, for this is science after all. There is a set of

privileged models: empirical models of the data (Suppes, 1967). What makes science

empirical is a constraint that all models have connections with empirical models.

Second, for models at any level short of the highest there may be found higher

level models. So for first level models of the data, these data are joined through a more

theoretical model. Theoretic models take their epistemic force first from the empirical

models that they join, and then, and more importantly, from the additional empirical

models that result from the theoretic joining in excess of the initial empirical base of the

models joined. This is captured by the notion of a ‘model branch’: a sequence of theories and their models furnishing warrants within and among theories.
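Schematically, and again only as a reader’s gloss, a model branch can be pictured as a finite sequence

\[ \langle T_1, M_1\rangle, \langle T_2, M_2\rangle, \ldots, \langle T_n, M_n\rangle, \]

in which each M_i is a model of the theory T_i, each T_{i+1} reduces T_i, and the sequence bottoms out in empirical models of the data. The epistemic force of a theoretic model higher in the branch is then read off the empirical models reachable from it through the branch, in excess of those already possessed by the theories it joins.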

Truthlikeness is defined in terms of considerations such as the following (a schematic summary of these considerations is sketched after the list):

1. The increase or decrease in the complexity of particular models over time. This is definable in terms of an intended model and a sequence of increasingly adequate

approximations, perhaps models of data. For example, a series of measurements moving

towards a theoretic value as instrumentation and our understanding increase.

2. The depth with which any model is supported by other models. This is

definable in terms of increases in the length of model branches as reducing theories are

discovered (more abstract and general theoretic models are available, as in moving from

physiology to cell physiology to biochemistry).

3. The breadth of the array, the horizontal width of reduced theories under a single

reducer. Branches are rarely linear; rather, reducing theories frequently expand to capture independently known phenomena. Here the paradigm of early chemistry is illuminating. Starting with the most basic notions of underlying substrates (frequently mistaken), the evolving molecular model quickly came to cover the range of chemical processes across substances of all sorts, an obvious and unprecedented human achievement. Physical chemistry, moving to a higher level, captures all this and much else besides (Langford and Beebe, 1969).
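The three considerations can be gathered into a single schematic picture; the symbols are mine and merely summarize the list above. For a model branch B supporting a model M at stage t of inquiry, truthlikeness improves to the extent that

\[ \mathrm{fit}_{t}(M), \quad \mathrm{depth}_{t}(B), \quad \mathrm{breadth}_{t}(B) \]

increase over successive stages, where fit tracks the adequacy of the current approximation to the intended model (consideration 1), depth the length of the branch of reducing theories (consideration 2), and breadth the horizontal width of reduced theories under a single reducer (consideration 3).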

Truth is then defined as an ideal limit on likelihood, but a consequence of the structure finally gives a sense of correspondence. For a reducing theory donates models to the theories it reduces; chemical substances are seen as molecules and their properties explained in a unified manner as the result of molecular interactions. (The stuff of Chemistry 101.) As theories are refined these models become more adequate to the range of phenomena, both in terms of their own coherence and in terms of the adequacy with which reduced theories are accounted for. This both limits the models of the reducing theories and increases the persistence of such models across the array. Reduced theories are thus donated models (as in semantic entailment). That is, over time we can define a set of persistent models: models that are available to more and more of what we see in the world. This then enables us to (fallibly and tentatively) define a model as ontologically significant. The emergent model is then assigned in standard ways to predicates in theories, as in Tarski, and we have come full circle.
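Put schematically, and only as an informal gloss on the construction: if P_t is the set of models that persist across the array at stage t of inquiry, the emergent model is the model (or family of models) on which the P_t settle as inquiry continues, and a sentence of a theory counts as true just in case it is satisfied in that emergent model,

\[ \varphi \text{ is true} \iff M^{*} \models \varphi, \]

with M* the persistent model upon which inquiry converges in the Peircean limit. Correspondence is thus recovered, but against an ideal object that is itself the product of increasing coherence.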

And as a bonus we now have a model of truth on which truth is shown by acceptance over time (remember that all the theories and models are selected by inquirers as they work on the basis of evidence and argument), but what is ultimately accepted is accepted because it deserves to be accepted: in terms of its coherence, its pragmatic utility, and in light of its correspondence to the best explanation possible. But is it true?

References:

Cartwright, Nancy: 1983, How the Laws of Physics Lie. Oxford: Oxford University Press.

Feyerabend, Paul: 1975, Against Method. London: New Left Books.

Gardner, Howard: 1987, The Mind’s New Science. New York: Basic Books.

Hartshorne, Charles and P. Weiss: 1960, The Collected Papers of Charles Sanders Peirce. Cambridge, MA: Harvard University Press.

Johnson, Ralph: 2000, Manifest Rationality. Mahwah, NJ: Lawrence Erlbaum.

Johnson, Ralph and Blair, J. A.: 1983, Logical Self-Defense. Toronto: McGraw-Hill Ryerson.

Johnson, Ralph and Blair, J. A.: 1994, Logical Self-Defense. New York: McGraw-Hill.

Lakatos, Imre: 1970, ‘Falsification and the Methodology of Scientific Research Programs,’ in I. Lakatos and A. Musgrave (eds.), Criticism and the Growth of Knowledge. Cambridge: Cambridge University Press.

Langford, Cooper and R. Beebe: 1969, The Development of Chemical Principles. Reading, MA: Addison-Wesley.

Laudan, Larry: 1977, Progress and its Problems. Berkeley, CA: University of California Press.

Magnani, Lorenzo: 2001, Abduction, Reason and Science. New York: Kluwer.

Nagel, Ernest: 1961, The Structure of Science. New York: Harcourt, Brace and World.

Pinto, Robert: 2001, Argument, Inference and Dialectic. Dordrecht: Kluwer.

Putnam, Hilary: 1983, Realism and Reason: Philosophical Papers, Vol. 3. Cambridge: Cambridge University Press.

Randall, John: 1960, Aristotle. New York: Columbia University Press.

Rao, Rajesh, B. Olshausen and M. Lewicki (eds.): 2002, Probabilistic Models of the Brain: Perception and Neural Function. Cambridge, MA: MIT Press.

Sellars, Wilfrid: 1963, Science, Perception and Reality. London: Routledge & Kegan Paul.

Siegel, Harvey: 1987, Relativism Refuted. Dordrecht, Holland: D. Reidel.

Suppes, Patrick: 1969, Studies in Methodology and Foundations of Science. New York: Humanities Press.

Toulmin, Stephen: 1969, The Uses of Argument. Cambridge: Cambridge University Press.

Weinstein, Mark: 2002, ‘Exemplifying an internal realist model of truth.’ Philosophica 69, 20-49.

Weinstein, Mark: (forthcoming), ‘A Metamathematical Extension of the Toulmin Agenda.’

Yantis, Steven: 2001, Key Readings in Visual Perception: Essential Readings. Philadelphia, PA: Psychology Press.