DRAFT VERSION – The paper is forthcoming in the Kennedy Institute of Ethics Journal.

SPECIFYING SPECIFICATION

Norbert Paulo

As late as 1984, five years after the first edition of the seminal Principles of Biomedical Ethics

appeared, Tom Beauchamp lamented that applied ethics was not taken seriously as a distinct field of philosophy. In order to change that attitude, he argued for effacing the distinction between applied and

classical ethics. After all, philosophers of applied ethics do the same as all other philosophers: they

analyze concepts, use certain strategies to test or justify beliefs, and explicate hidden premises in

arguments (Beauchamp 1984).

Since 1984, applied ethics has steadily grown within philosophy. This is especially true of bioethics, which is viewed as the original field of more practical ethical accounts. New medical

treatments emerged in the 1960s, posing questions that were hard to answer with the abstract classical

ethical theories like Kantianism and utilitarianism.1 Just consider the case of a newborn diagnosed with Edwards syndrome. Approximately 50% of infants suffering from this chromosomal disorder die within 2 months; only 10% survive 1 year or longer. In this case, further complications made

oral nutrition impossible. The parents requested that the physician withhold all available life support

measures for their child. Should the physician respect their wish?2 It is very hard to answer such questions with highly abstract principles such as the Kantian categorical imperative or the utility

principle. Starting from such abstract principles, applied ethicists developed more concrete, context-

sensitive principles, such as the four principles nowadays routinely invoked in bioethics (respect for

autonomy, nonmaleficence, beneficence, and justice); these mid-level principles are meant to be more

suitable to deal with particular ethical problems.

By now, we face nearly the opposite of the problem Beauchamp faced almost 30 years ago. Today,

we question the relevance of classical ethics in resolving ethical problems in particular cases. In fact,

applied ethics has, to a significant degree, emancipated itself from classical ethics. This has led to the

accusation that applied ethics is too “unscientific” and not “theoretical” enough.

1 I restrict my focus to bioethics for reasons of simplicity.

2 This case is taken from Rauprich (2011). I will discuss it in more detail below.


My conviction is that classical ethics and applied ethics are tied together in the sense that applied ethics is a manifestation and development of classical ethics, which accommodates the

complexities of certain areas of life—such as the biological, sociological, and psychological aspects of

medicine—and specifies the rules and principles in a transparent way. On this view, classical and

applied ethics are only different points of abstraction on the same scale (cf. MacIntyre 1984). There is, however, not enough space or time here to argue sufficiently for this view. I will instead focus on one important aspect, which at least supports my conviction. There are two main reasons for deeming applied ethics

as “unscientific” and not sufficiently “theoretical”: either it lacks a sufficient foundation or it lacks a

transparent connection between its abstract norms and the more specific norms or resolutions in

particular cases. I focus on the connection element. We expect this connection to have at least some

form of stability and continuity. It must be transparent how different norms relate to one another, how

they are developed, and how they bear on judgments in concrete cases. The aim of stability within

ethical theories is to connect the more applied parts with the abstract parts based on classical ethics.

The idea is to have rational connections from the abstract founding principles down to the resolution

of a particular case, such as the case of the newborn with Edwards syndrome. The case resolution

should have a demonstrable connection to the theory’s most abstract norms.

The paper is organized into two main parts. The first part introduces and discusses a highly

acclaimed method to guarantee stability in ethical theories: Henry Richardson’s specification. A

detailed scrutiny of this method leads to the second part, where I outline the comparable method of

deduction as used in legal theory to inform the debate around stability from that point of view. The

view from legal theory supports Richardson’s claims for what ethical theories need to reach stability,

but it also makes clear that specification’s impact on stability is, in fact, very limited. I suggest that,

once one sufficiently specifies specification, it appears astonishingly similar to deduction. Legal

theory also provides valuable insight into the functional range of deduction and its relation to other

forms of reasoning. This leads to a richer understanding of stability in normative theories and to a

smart division of labor between deduction and other forms of reasoning. The comparison to legal

theory thereby provides a framework for how different methods, such as specification, deduction,

balancing, and analogy, relate to one another. Such a framework is necessary to get a clearer picture of


how these methods are “intertwined and overlapping” (Childress 2007, 29) and to overcome the

merely metaphorical level on which most of the current literature on methods in ethics has become

deadlocked.

PART I: STABILITY AND SPECIFICATION

In this first main part of the paper I shall use the example of the method of specification as

introduced by Henry Richardson to show how seriously theorists in applied ethics take the need to

connect a theory’s more applied parts with its abstract parts. Specification is especially pertinent because it is meant to reach stability with logically non-absolute norms, i.e. with norms that do not hold in each and every case but are open to exceptions and even revisions. Thus, by design, it works

with flexible ethical theories.

After Richardson’s initial 1990 paper, the notion of specification attracted considerable interest

in the bioethics literature. David DeGrazia coined the term “specified principlism” to combine

principlism3 with specification and regards specification as “the most significant contribution to our

understanding of bioethical theory in some time” (DeGrazia 1992, 524). Beginning with the 4th edition

of their Principles of Biomedical Ethics, Tom Beauchamp and James Childress make use of

specification.4 Most notably, Carson Strong (2000) and Bernard Gert, Charles Culver, and Danner Clouser (2000) raised criticisms of specification. But what is it that captured so much attention? As

Beauchamp put it:

Specifying norms is achieved by narrowing their scope, not by interpreting the meaning of terms

in the general norms (such as ‘autonomy’). The scope is narrowed … by ‘spelling out where,

when, why, how, by what means, to whom, or by whom the action is to be done or avoided.’ A

definition of ‘respect for autonomy’ (as, say, ‘allowing competent persons to exercise their liberty

rights’) clarifies the meaning of a moral notion, but it does not narrow the scope of the norm or render it more specific and practical. The definition is therefore not a specification (2011, 301 ff.).5

3 The term ‘principlism’ designates the bioethical theory of Tom L. Beauchamp and James F. Childress.

4 Not following Richardson’s skepticism, Beauchamp and Childress stick to the use of balancing besides specification: “Balancing seems particularly well suited for reaching judgments in particular cases, whereas specification seems especially useful for developing more specific policies from already accepted general norms” (2013, 20, their italics).

Thus, specification is not about the meaning of the norm’s terms. It is rather about the step-by-

step limitation of the norm’s scope of application. Richardson claimed that “the complexity of the

moral phenomena always outruns our ability to capture them in general norms” (1990, 295), which is

why he takes moral norms to be non-absolute and always open for future revision. He regards,

somewhat surprisingly, the non-absoluteness of norms as necessary to guarantee stability in ethical

theories (1990, 300 ff.; cf. DeGrazia 1992, 527).6 This focus on non-absolute norms has, according to

Richardson (1990 and 2000), some profound implications for the methods we can use to bring norms

to bear on particular problems or cases. If we cannot capture moral phenomena in absolute norms,

then, he argues, we cannot use deduction as a method to apply norms to cases, because deduction

requires the existence of an absolute norm. At the same time he finds the method of balancing—which

theorists frequently use to apply non-absolute norms—far too intuitive and arbitrary to

rationally combine abstract norms and concrete cases; it cannot guarantee stability. It is important to

understand that the idea of stability in ethical theories seems to be at the heart of specification and a

main motivation for Richardson to develop the method in the first place. In his initial 1990 paper he

explicitly aimed at developing

a schema of what it would be to bring norms to bear on a case so as to indicate clearly what

ought to be done. The deductive application of rules to cases and the intuitive weighing of considerations [balancing] are the two cognitive operations usually thought central to this task. I seek to add specification as a third, even more important operation (Richardson 1990, 280).

5 In a single-authored article by Childress (2007), the notions of applicability, interpretation, and specification seem to be bound together in the sense that specification is an interpretation that narrows the scope of a principle which is the same as “restricting the range or scope of a principle’s applicability.” But still they are distinguished from application. Unfortunately Childress does not explain in detail how specification works. It is interesting that he—contrary to Beauchamp—describes specification as determining the meaning of a principle by restricting its range and scope of application. In their co-authored book (2013, 17 ff.) they take Beauchamp’s line and point out that specification has nothing to do with interpretation (which is in their understanding, roughly, analyzing and determining meaning).

6 I come back to that necessity claim in a later section.

Looking back some years later he wrote:

It was to the challenge of continuity or stability, above all, that my model of specifying norms

was addressed. It helps answer the question: How is it that a norm is being brought to bear on

some particular problem even though the interpretation of that norm shifts? (1995, 130, my

emphasis).

Stability in this sense is basically what I called the connection between the abstract norms of an ethical

theory and its more concrete norms (and the particular case resolutions). It is the idea of ensuring that

the more concrete norms or the case resolution are still faithful to the abstract norms and reflect their

ideas. Stability is what we ask for when we seek “assurance that the commitment that underlay the

initial norm is being appropriately honored” in the more concrete parts (Richardson 1990, 292).

Specification is meant to provide criteria as to how we can reach stability in ethical theories.

Specification Defined

So let us have a look at these criteria; Richardson’s definition of specification is this:

Norm p is a specification of norm q (or: p specifies q) if and only if

(a) norms p and q are of the same normative type [end, permission, requirement, or prohibition];

(b) every possible instance of the absolute counterpart of p would count as an instance of the

absolute counterpart of q (in other words, any act that satisfies p’s absolute counterpart also

satisfies q’s absolute counterpart);

(c) p specifies q by substantive means … by adding clauses indicating what, where, when, why,

how, by what means, by whom, or to whom the action is to be, is not to be, or may be done or

the action is to be described, or the end is to be pursued or conceived; and

(d) none of these added clauses in p is irrelevant for q (1990, 295 f., footnote omitted).

In Richardson’s explanation of these criteria, conditions (b) and (c) seem to be of special importance: (b) is supposed to mean that every instance of p must be an instance of q, but this becomes a bit difficult,


because we mainly deal with non-absolute norms.7 Condition (c) is the glossing condition and ensures

that p is in fact more precise than q—because of additional content—and not only a subset of q (like

an implication of q in a logical sense). Condition (d) is supposed to exclude glossing by adding

conjunctions (just as (b) is meant to exclude disjunctions). Condition (a) addresses formal

complications for the definition. The core of specification is thus extensional narrowing (b) plus

glossing the determinables (c) (Richardson 2000, 289; 1997, 72 f.). These two conditions do the work

of actually specifying the norm, whereas conditions (a) and (d) are constraints on how to specify.
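
Put schematically (the rendering is my gloss, not Richardson’s own notation), let p* and q* stand for the absolute counterparts of p and q, and let a range over acts. The narrowing condition (b) then amounts to the subsumption claim that for every act a, if a satisfies p* then a satisfies q*; in symbols, ∀a (p*(a) → q*(a)). The glossing condition (c) demands in addition that p contain at least one added clause about where, when, why, how, by what means, by whom, or to whom, which is not already logically implied by q alone. A candidate norm that met (b) without (c) would merely be a logical implication of q; one that met (c) without (b) would have drifted outside q’s scope.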

The definition makes it clear that specification is only a relation between two norms. It offers

some formal criteria to determine whether one norm counts as a specification of another. It neither offers criteria for the justification of any norm nor enables one to choose between different (specified) norms that satisfy the formal criteria. Richardson only hints at a discursive justification standard that

“in effect carries the Rawlsian idea of ‘wide reflective equilibrium’ down to the level of concrete

cases” (1990, 300, footnote omitted).8 Similarly, Beauchamp and Childress use an “integrated model”

(2013, 19, 404), combining wide reflective equilibrium and their common morality, as a means of

justification. I will not go into that standard here because, first, reflective equilibrium is a notoriously

ambiguous notion (cf. Arras 2007)9, and, second, there is no necessary connection between

specification and this very standard; one could as well use specification combined with another

standard—such as utilitarianism, Kantianism, majority vote, flipping a coin, or, as Strong (2000) suggests, casuistry—for justifying the choice of one possible specification over another.

7 For reasons of readability and simplicity I omit the problem Richardson faces because his definition is meant to work for absolute and non-absolute norms. When he adds that condition (b) is about the absolute counterparts of both p and q, he uses this fiction for non-absolute norms (see Richardson 1995, 130: “fictional inflexible versions of norms”), which actually implies that the narrowing condition (b) has its origin in deduction. Although Richardson’s focus is clearly on non-absolute norms, he holds that specification also works with absolute norms: “Specification might begin from an absolute norm - and for this reason some instances of deductive application are also instances of superfluous specification - but it need not” (1990, note 38).

8 For Richardson’s continuing elaboration of a stable pragmatistic ethics see his (1995 and 1997).

9 Beauchamp and Childress even recognize this as one yet unresolved problem of their account of justification (2013, 423).

Note also that

the wide reflective equilibrium already takes judgments in particular cases into account. It is unclear

why there is still a need to carry this idea “down to the level of concrete cases.” Let me use Strong’s

critique of specification to stress the difference between the formal structure of specification and the

justification of any particular specified norm. Strong raised two main points:

(1) One must choose between alternative ways of specifying principles, and this choice requires a

prior decision concerning how priorities ought to be assigned to the conflicting ethical principles

in the context of the case in question; and (2) the actual work in the assigning of priorities is not

done by the specification itself, but by some other method or methods, which can include casuistic

reasoning (2000, 327).

The first point addresses a problem that all ethical theories have, namely that the theory itself has to

provide the epistemic framework that enables us to pick the facts that are deemed relevant. The second

point is meant to criticize specification more directly. What is meant to be a criticism of specification

is only a restatement of the limits of specification as Richardson defined it, namely as a formal relation

between two norms. Rather than as a critique, Strong’s critique should be understood as doubts

whether wide reflective equilibrium is a good standard for the justification of certain specifications of

a particular initial norm. As a casuist, Strong would favor casuistry; utilitarians would endorse

utilitarianism. But these standards of justification are not to be mixed up with specification itself. They

are not rivals of specification; they complement it.

An example will help to clarify the points being made. Consider again the case of the newborn

with Edwards syndrome, which goes back to one of Richardson’s illustrations of specification:

A … newborn was diagnosed to have trisomy 18 syndrome … The prognosis of infants with this

disease is poor; approximately 50% of them die within 2 months, only 10% survive 1 year or

longer … The disease causes severe mental retardation and developmental disorders. In this case,

the infant had oesophageal atresia … such that oral nutrition is not possible. In addition, the infant

was suspected of having a ventricular septal defect of the heart. The atresia could be corrected by

surgery. The heart defect probably could also be corrected surgically when the infant is older,

provided that she survives to that age. However, the parents requested to withhold all available life


support measures for their child, including nutrition, hydration and surgical correction of the

atresia. Should their wish be respected? (Rauprich 2011, 593)10

In his discussion of the case—working within principlism—Oliver Rauprich regards three facts as

morally relevant, namely limited life expectancy, reduced expected quality of life of the infant, and the

parents’ request to decline treatment. He then connects these facts to two principles, the best-interest

principle and the principle of respecting parental decision-making authority:

How … the morally relevant facts connect to the general norms … must be shown by specification

… To start with the best-interest principle, a reasonable specification may be:

1a: Respect the principle of beneficence by treating incompetent patients with no discernible

preferences according to their best interest.

1b: …by treating incompetent patients in a way that provides them with the best balance of

expected benefits over burdens.

1c: …by providing aggressive treatment to severely ill newborns only when they have long-term

life expectancy and the capacity to develop self-consciousness.

1d: …by not providing surgery, nutrition and hydration for newborns with trisomy 18 syndrome

and oesophageal atresia.

According to this line of specification, it is not in the best interest of the infant to have aggressive

treatment because it would cause pain and other harm without providing a significant chance for

long-term survival and development of self-consciousness (Rauprich 2011, 593 f.).

In this quote, Rauprich offers norms with a narrowed scope and some glossing. But one gets no

information about where these specifications come from and why they are more coherent (and thus

justified) than other possible specifications. As I have already highlighted, specification only provides

a way to test the formal relation between two norms that is dependent on an external justification. It is not the responsibility of specification to justify a certain specification. In what follows, it will become clear how demanding the use of specification actually is, despite its explicitly limited range.11

10 I use Rauprich’s treatment of specification here because Richardson’s own examples are less precise.

Specification and Interpretation

To apply (b), specification’s narrowing condition, in cases of doubt12, one must know all

instances of the specified norm and in the end all instances of the initial norm, as well. To use the

intension/extension terminology13, (b) requires knowing all possible instances of both norms

(extension). That will, in most hard cases, also require the knowledge of the intension of both norms.

11 Some authors who, like Tom Tomlinson (2012, 64), see the distinction between specification as a formal relation and its need to be complemented by a theory of justification (like reflective equilibrium) underestimate the problems I am concerned with in this section.

12 This limitation is due to an objection that Richardson, in personal conversation, raised against my understanding of specification, namely that there are cases where it suffices to have an ordinary understanding of the English language. Consider this example: when the initial norm is “Each working mother ought to take one of her children to work on Take Your Child To Work Day” we do not need to know all instances of this norm to know that “Each working mother ought to take one of her children to work on Take Your Child To Work Day, giving preference to a daughter” is a specification of the initial norm; it suffices to know that “daughter” is within the scope of “children.” This is true, but only for obvious cases for which we do not need specification in order to guarantee stability. We need means like specification in cases of doubt; almost all problems in ethics are of the latter kind. Being a competent speaker of English does, for instance, not tell you whether the “best interest” boils down to “not providing surgery, nutrition and hydration for newborns with trisomy 18 syndrome” in Rauprich’s case.

13 The extension of a term is roughly understood as the designation of things the term applies or extends to. The intension of the term, on the other hand, is roughly understood as its definition, as the naming of all necessary conditions. One could say that intension is meaning in the ordinary sense. For example, the intension of “ship” is something like “a vehicle for transport on water.” The extension is sailing ships, passenger ships, fishing ships, etc. To tell that the spaceship Enterprise is not in the open extension of “ship” one either needs to know the extension of “ship” or to apply the intension to determine the extension. The Enterprise clearly is not a vehicle for transportation on water. Thus the intension I gave for “ship” narrowed the possible extension. The usual way of interpreting terms or sentences is to use the intension to separate the actual from the merely possible extension.

The glossing condition (c) requires that the specified norm is more precise than the initial norm. I

suppose it is in many cases hard to tell what is only a special case of the initial norm in this sense and

what is a glossing (i.e., adding content). This condition, too, demands quite some knowledge about the

extension of the initial norm. The point is that the use of specification depends on interpretations of the

norms at hand. An interpretation is the choice of one possible extension; the interpretation of a norm

that someone has laid down—such as the ethical norms developed and explicated by Beauchamp

and Childress—has to take into account the linguistic conventions as well as the author’s intentions

(cf. Raz 2010).

Consider again Rauprich’s specification of the best-interest principle. I feel simply unable to tell

whether this specification really fulfills Richardson’s criteria for specification. To be in a relation of

specification, both norms have to be of the same normative type. For neither of Rauprich’s initial principles can I tell whether it is a prohibition, a requirement, or some other normative type. This is

a problem regarding Richardson’s condition (a). To overcome this problem, one simply needs to give a

full formulation of both the initial and the specified norm (or at least a formulation of the normative

type). Rauprich only names the principle without providing its content. What the principle actually

says depends on the respective ethical theory one works with; specification starts from a given norm.

Rauprich states the best interest principle as a requirement and with certain content only in the first

step of its specification (norm (1b)). We do not get to know the normative type and the content of the

principle itself—we only see type and content of the first specified norm. Condition (b) requires

knowing all possible instances (extension) of both norms, probably by interpretative support of the

norms’ intensions. How else is one supposed to be certain that the requirement not to provide surgery,

nutrition and hydration for newborns with trisomy 18 syndrome is, as Rauprich suggests, an instance

of the best interest principle?14

14 This case raises no particular problems for conditions (c) and (d).

One might argue that my demandingness claim is misleading, because specification usually proceeds by suggesting typical instances of the initial norm, and does not require certainty about all instances of the initial and the specified norms.15 But this argument is on another level than the demandingness claim. I do not doubt that, in actually specifying a norm, we usually proceed by

suggesting typical instances of that norm. Rauprich’s suggestion that the best interest principle has to

do with the “best balance of expected benefits over burdens” can be seen as a typical instance.

Specification is on another level; it has not so much to do with guiding decisions but with the ex post

rationalization of decisions that have already been made. It might be prudent not to ask for all

instances of the initial norm and all its probable specifications when deliberating about what to do

under immense time pressure in an emergency room. The actual decision-making process will often be somewhat messy; but ex post analyses, such as specification, allow for an organization of previously unorganized thoughts and ways of reasoning. Such analyses can enhance our ability to use metaphors

such as balancing and specification as guides. In other words, there is a relation between the actual

decision-making and the ex post rationalization; to focus on the latter is a first step to understanding

and ultimately improving the former. As the definition of specification makes very clear, it does not

guide you in making a decision, but it gives you means to test the accuracy of the norm you have

chosen as a specification of the initial norm. Specification is in this respect similar to the categorical

imperative. Both start from given norms and merely offer an ex post test for these norms. Neither

specification nor the categorical imperative determines which norms you deem suitable for testing. On

this level of rationalization it does not suffice to rely on typical instances. Specification’s narrowing

condition asks you to test whether every possible instance of the absolute counterpart of your specified

norm would count as an instance of the absolute counterpart of the initial norm. You will not be able to

tell whether this condition is met unless you have interpreted both norms. But again, having gone

through a series of successful applications of specification (or the categorical imperative) might

enhance your sense for likely candidates that will pass the test of specification (or of the categorical

imperative).

To sum this up, in order to test whether one norm is a specification of another norm one needs to

know at least four things: the precise wording of the initial norm, the precise wording of the other (specified) norm, an interpretation of the initial norm (its extension and in most cases its intension), and an interpretation of the other (specified) norm.

15 I am thankful to an anonymous reviewer for raising this point.

It becomes more and more evident that specification is closely tied to semantic

interpretation.16 The narrowing and the glossing condition of specification are both semantic

conditions. Both limit the variety of possible specifications of a certain norm through a semantic test. I

shall note that the interpretation of a norm is different from the justification of which specification to

pick. The interpretation is part of the process of testing possible norms for their compliance with

specification’s four criteria. A justification is then needed to choose between the norms that passed the

test. This justification is an extra step; it is not part of specification itself. To point out the importance

of semantic interpretation is not so much a critique of specification as defined by Richardson. Rather,

as H.L.A. Hart made very clear, every formal relation between two norms (like deduction, balancing,

and analogy) has this problem:

Whichever device … is chosen for the communication of standards of behaviour, these … will, at

some point …, prove indeterminate; they will have what has been termed an open texture… [I]n

the case of legislation, as a general feature of human language, uncertainty at the borderline is the

price to be paid for the use of general classifying terms in any form of communication … It is,

however, important to appreciate why, apart from this dependence on language as it usually is …

we should not cherish, even as an ideal, the conception of a rule so detailed that the question

whether it applied or not to a particular case was always settled in advance, and never involved, at the point of actual application, a fresh choice between open alternatives … If the world in which we live were characterised only by a finite number of features, and these together with all the modes in which they combine were known to us, then provision could be made in advance for every possibility. We could make rules, the application of which to particular cases never called for a further choice. Everything could be known, and for everything, since it could be known, something could be done and specified in advance by rule. This would be a world fit for ‘mechanical’ jurisprudence (Hart 1994, 127 f., his italics).

16 I shall note that Richardson himself has a somewhat unusual understanding of interpretation. According to him, to interpret a norm is to modify the norm by adding content. He distinguishes interpretation in this sense from derivation such that the latter merely links a norm “to a conclusion by causal (or conceptual) facts … These links supplement the initial norm without changing it” (2000, 288 f.). For Richardson, the divergence between interpretation and derivation is also related to the logical form of the initial norm (absolute for derivation, non-absolute for interpretation). Derivation is meant to be the form of reasoning used for, inter alia, deduction, whereas interpretation in this sense is the generic term that includes specification. Richardson (2000, 289 f.) further differentiated between four kinds of interpretation in his understanding of modifying norms: specification (extensional narrowing plus glossing), extensional narrowing (without glossing), glossing (without narrowing), and sharpening (of vague norms).

But, of course, this is not the world we live in. Different interpretations of a norm will very likely yield

different outcomes of the specification test.17 Just imagine that Rauprich had interpreted the following

norm as being outside the scope of the best interest principle: “Treat incompetent patients with no

discernible preferences according to their best interest by providing aggressive treatment to severely

ill newborns only when they have long-term life expectancy and the capacity to develop self-

consciousness” (his specification 1c). He might do this, for example, because he believes that it is

always in the best interest of any human being to live as long as possible, independent of the capacity

to develop self-consciousness. With this interpretation he would have had to shift the line of

specification, and he would have concluded that it is indeed in the best interest of the infant to have

aggressive treatment, although this would probably cause pain and other harm without providing a

significant chance for long-term survival and development of self-consciousness. The problem of

interpretation is thus also a key problem for stability in ethical theories.

The Necessity Claim

Richardson explicitly set another limitation for specification, and thus for stable ethical theories.

Above we already came across his claim that our moral norms are in fact non-absolute and open for

revision, i.e. that they are not universally quantified, but start with something like “generally speaking,” instead of “for all” like absolute norms.

17 In personal conversation Richardson made clear that he does not have a certain theory of interpretation in mind. He generally thinks that many forms of interpretation are possible in ethics; and he acknowledges that not being settled on the issue of interpretation affects the applicability of specification.

But Richardson does not only make that factual

claim. Surprisingly, he further argues that stability in ethical theories requires initial norms in non-

absolute form. Let me call this his necessity claim. According to this claim norms need to be non-

absolute, because only such norms allow for either kind of development without generating instability:

First, the specified norm may replace the initial norm, which Richardson calls a true revision of the set of norms. Second, the specified norm may stand alongside the initial norm, which he calls an expansion of the set of norms (1990, 292).18 In the case of the newborn with Edwards syndrome,

Rauprich’s specification proceeded by expansion in this sense. Every step of specification added

something to the existing norm; his norm (1a) “Respect the principle of beneficence by treating

incompetent patients with no discernible preferences according to their best interest” becomes more

specific by expansion: (1b) “Respect the principle of beneficence by treating incompetent patients with

no discernible preferences according to their best interest by treating incompetent patients in a way

that provides them with the best balance of expected benefits over burdens.” A true revision of (1a)

through (1b) would eliminate (1a) from the normative set. True revisions are much more determinative,

because they significantly limit the options for future case resolutions; norm (1a) simply does not exist

anymore; only (1b) remains.

Richardson’s argument for the necessity claim is that true revisions are only possible with

non-absolute norms, since for absolute norms the “result would be an implied exception that would be

logically incompatible with the initial norm’s universal command, making it difficult to see any

stability”. Development in the sense of expansion is possible with absolute norms, but it would be

unnecessary – just like a specification of absolute norms is possible but “superfluous,” because, first,

referring to condition (c) of the definition of specification, derived norms were already implied in the

initial norm; and second, referring to condition (b), the absolute counterpart of such a norm would be

the norm itself. Thus, specification also works with absolute norms, but only in a limited sense that

excludes true revisions. In order to allow for either kind of development one needs non-absolute

norms, or so he claims (Richardson 1990, 292 ff.).

18 Note again that this is not the same difference as Richardson’s between “interpretation” and derivation (2000,

288 f.). Revision and expansion are instances of specification.


This seems to be the only difference between absolute and non-absolute norms. Since

Richardson put so much emphasis on the distinction between norms of these two logical forms he

must have had an important point in mind. The only point I see is the importance of true revisions. But

how do these revisions fit within the specification relation in contrast to deduction? There is no

straightforward relation between revisions and either specification or deduction. Specification leads to

more concrete norms, but without itself providing reasons to abandon the initial norm in favor of the

specified norm. It might turn out that the specified norm should replace the initial norm, but this

judgment would depend on a form of reasoning different from specification. A similar point holds for

deduction: Richardson is right that an exception to the initial norm that is logically implied by this

very norm would be logically incompatible with the norm’s universal command. Just like

specification, deduction does not help in revising the norm under consideration. Both work with given

norms and have no internal means to alter these given norms. This does not mean that there is no way

to modify a set of absolute norms.

In the following second part I will explain how norm revisions work in legal theory and

thereby expose the mistaken picture behind the necessity claim. I will argue that all revisions impair a

normative theory’s stability to a greater degree than do expansions—no matter whether the revised

norms are absolute or non-absolute—but that neither the revision of absolute nor of non-absolute

norms necessarily generates instability. This yields the conclusion that the necessity claim is wrong

and thus poses no limitation for specification.

PART II: A VIEW FROM LEGAL THEORY

Stability is not only important in ethics, but also in other kinds of normative theories, especially

in law. In this part I will employ concepts from legal theory to inform the understanding of

specification and to further clarify its relation to deduction and other forms of reasoning. I will

ultimately conclude that, once specification is sufficiently specified, it appears very similar to

deduction and should be abandoned in favor of the latter as the central element of a framework of different

methods to be used in ethics as well as in law.


In democratic legal systems of checks and balances, powers are divided among the legislature,

administration, and judiciary, which means that administration and judiciary are generally bound by

what the legislature enacts. The debate on legal methods shows how these boundaries work. The best

way to bind the judiciary to the laws enacted by the legislature is by deduction. In logical terms,

deductive inference guarantees the truth of the conclusion if the premises are true.

Above I already distinguished between making a decision in a particular case and testing that

decision against the background of a certain normative theory. The decision-making can follow

various and mixed forms of reasoning. Methods like specification and deduction do not help much with that.

They come into play in testing a certain decision, no matter how one actually made it. That the

decision is bound to what the legislature enacted can best be tested via a reconstruction of the decision

in a deductive form. The advantage of a deductive reconstruction lies primarily in its transparency. In

a side remark about law Richardson notes,

[a]ll operating legal systems have accepted severe limitations on treating law as a deductive

system, and have instead developed case-oriented and precedent-bound approaches that make

room for equity, as described by Aristotle … and as familiar in English common law, namely,

scope for the judge to modify the rules to fit the case at hand. Given the role of law in public

legitimation of the state and in grounding stable expectations for commerce and society, there is

every reason to strive for a rule-bound, deductive approach to adjudication. Since even the law is

forced to give up on the pure deductive ideal, it is hardly likely that ethics, where the motivations

for deductive transparency are much weaker, could succeed in living up to it (1990, 287, footnote

omitted).

I find it hard to see a relevant difference between law and ethics in these respects. Of course, in ethics

there is nothing like the institutionalized separation of powers; it might be easier to justify the revision

or extension of given norms. But what exactly is it that outruns deductive transparency as an ideal in

ethics? In the first part I already argued that specification is very demanding; in fact, it does not seem

to require less knowledge than deduction. Furthermore, the need to interpret norms already seems to

allow for some flexibility.


Regarding the first part of the quote, it is not clear to me what Richardson has in mind. As far

as I see, precedents and the case-based approach in general, as opposed to the rule-based legal

tradition in continental Europe, have not so much to do with the modification of (given) rules, but

rather with the generation of rules.19 Since the binding force of precedents and statutes is not the same,

I take Richardson to be talking about the possibility to distinguish or to overrule a precedent because

the previous precedent is for some reason believed not to be an appropriate rule for the case at hand.20

There is a vast literature on when and how judges can and should overrule or distinguish precedents. I

will say more about that when I come to the development of legal systems. What troubles me here is

something else: Richardson makes the remark about precedents in order to argue against the reliance

on deduction as a method. He seems to suggest that deduction, in contrast to specification, does not

allow for the modification of given rules without creating instability. My claim is that deduction and

specification do not differ in that respect.

To pave the way for my argument I shall now clarify the understanding of deductive reasoning

in legal theory by briefly outlining Robert Alexy’s model for legal adjudication (cf. Alexy 1989).

Deduction does not have a good reputation among ethicists. This reputation is largely due to the

misunderstanding that, when using deduction, one simply has to ‘discover’ the implications of a given

norm without using any creativity. The idea seems to be that every deduction is as simple as this well-

known syllogism: P1: All men are mortal. P2: Socrates is a man. C: Therefore, Socrates is mortal.

That deduction is this simple is both true and false. It is true with respect to the underlying structure,

namely that the truth of the premises guarantees the truth of the conclusion. However, it is false in that

it is oftentimes not so simple to infer a conclusion deductively, because a premise like P2 in the

example is not readily available. According to Alexy, a deduction requires at least the following: (1) a

universal and conditioned norm (i.e., a norm that is logically universal—all-quantified or absolute—but stated in an “if… then” clause), (2) a case description, and (3) a semantic interpretation of (1) to bridge the gap between (1) and (2).

19 The main difference is that in Case Law systems the courts make the law, whereas in rule-based (Civil Law) systems the law is made by the parliament. This difference, of course, affects the way in which the law can be modified; I will take up that problem in due course.

20 Distinguishing is the demonstration that the facts of the case at hand differ in a relevant aspect from the facts of a certain precedent so that the precedent should not apply.

The relation between (1), (2), (3) and the conclusion is a normal

deductive inference, which means that accepting the truth of the premises forces one logically to accept

the truth of the conclusion. This is the strongest rational force one can hope for. This simple deductive

model is meant to reach transparency—because it forces disclosure of all three premises, which are

thereby open for critique—and stability (for it formally binds norm and conclusion together). The

three formal steps are in Alexy’s terminology the “internal justification.” The crucial justification of

the premises is the “external justification.”21 Thus, the external justifications justify the premises of the

internal justifications; the external justifications can use different kinds of arguments as well as

different kinds of normative background theories. So far, this is pretty similar to the difference

between the formal relation and the justification highlighted earlier for Richardson’s specification. The

internal justification offers—just like specification—a formal relation between two norms.
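
To illustrate what the internal justification does and does not do, Rauprich’s case can be cast in this form. The following reconstruction is my own sketch, not one offered by Alexy or Rauprich, and premise (1) is simply an absolutized version of Rauprich’s specification (1d):

(1) For every patient x: if x is a newborn with trisomy 18 syndrome and oesophageal atresia, then surgery, nutrition and hydration are not to be provided for x.

(2) This patient is a newborn with trisomy 18 syndrome and oesophageal atresia.

(3) The case as described falls under the terms of (1); for instance, the infant’s condition counts as “oesophageal atresia” in the sense of (1).

(C) Therefore, surgery, nutrition and hydration are not to be provided for this patient.

The step from (1), (2), and (3) to (C) is transparent and formally binding; whether premise (1) is itself justified is a question of external justification, which the deductive form leaves entirely open.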

A difference between specification and deduction is that only the latter has strong justificatory

power for the respective conclusion. Also, only the deductive model can deal with universally quantified

norms (normal application) as well as with norms that need further development (e.g. through

supplementation or gap-filling). Such developments are necessary to complete the pure application

model. These possibilities within and necessities for legal systems are all too often overlooked,

especially when something is dismissed as being ‘legalistic.’ It is crucial to understand that neither the use of deduction nor the comparison between deduction and specification determines whether moral

norms are in fact absolute or non-absolute. True, non-absolute norms cannot function as the basic

premise in a deductive inference; but one can develop these norms (in the various ways outlined

below) such that they become absolute. These norms are not the starting point but the result of various

forms of reasoning that started from a non-absolute norm.

21 Employing the same concepts, MacCormick (1978) speaks about “first-order” and “second-order justification”; Koch and Rüßmann (1982) speak about “Hauptschema” and “Nebenschema.”

Application and Development of Normative Systems

In the section on specification and interpretation I highlighted the importance of interpretation

for the stability of ethical theories. Another crucial aspect for stability is the relation between

application and development of normative systems. Application works with given norms and leaves

them unmodified. The method for application I suggest here is deductive. As already pointed out, the

premises in the deductive model are in need of interpretation in the sense of determining the meaning

of norms. The development of a set of norms is also always dependent on interpretations, but in the

sense that recognizing the very need for development depends on having interpreted a norm; some

reasoning based on this interpretation must have led to the conclusion that a development is necessary.

In contrast to applications, developments modify the normative system by revising a norm or by adding

new norms (e.g. through supplementation and gap-filling), thereby expanding the system. The aim of

further developing the law is to allow for deductive applications (i.e., internal justifications) where the

existing norms alone do not allow for this. Thus, deduction is not used to interpret or develop. Rather, deductive reasoning

needs interpretation and, oftentimes, the development of norms to be feasible.

The distinction between application and development as well as the role of interpretation are

not sufficiently clear in specification. In fact, the application of a norm to a particular case is not even

a distinct problem for Richardson once the norms are specified:

The central assertion of the model of specification is that specifying our norms is the most

important aspect of resolving concrete ethical problems, so that once our norms are adequately

specified for a given context, it will be sufficiently obvious what ought to be done. That is,

without further deliberative work, simple inspection of the specified norms will often indicate

which option should be chosen (1990, 294).22

Richardson believes that specification can potentially play all of these roles: the resolution of particular cases

(at least together with “inspection” or “perception”) and the development of the normative system

through revisions and expansions. In the section on the necessity claim, I argued that there is no straightforward relationship between development and specification. Specification does lead to more concrete norms, but it does not provide reasons to abandon the initial norm in favor of the specified norm or to leave the initial norm intact.

22 In a footnote to this he refers to Aristotle as holding that in applying thus specified norms it is “‘perception’ that must supply the ‘premise’ that a currently possible action satisfies the norm.” It is certainly true that norms are usually easier to apply the more specific they are, but that alone does not render the distinction between application and development irrelevant.

It thus does not constrain reasoning in all of the above

mentioned situations. Beyond that point, given the far-reaching parallels between specification and

deduction discovered so far, I find it improbable that specification—unlike deduction—does all the

work necessary for the other distinct problems, let alone the further distinctions that can be drawn.

Development by Adding Norms (Expansion)

I already distinguished between application and development. I shall now distinguish two

instances of development,23 namely supplementation and gap-filling. Supplementation is a form of

development that depends on the type of the initial norms, for it calls on the applicant24 to turn them

into absolute norms in order to make them deductively applicable. The most common form of

supplementation is the use of discretionary provisions. Such laws require a supplementation from the

applicant, e.g. from the administration or the court. Discretion is meant to allow for (and demand) a

supplementation of the initial norm that a legislator intentionally left incomplete, i.e. the legislator

deliberately created a norm that needs supplementation.25 Familiar examples are discretionary provisions like “If

A does x, a fine can be imposed.” Such a permission to impose a fine calls on the applicant to

supplement the norm by adding further criteria for the use of the permission (that is, for when a fine

shall be imposed), thus creating a new norm, which stands alongside the initial norm.26 Another

example of supplementation, besides discretion, is the conflict of principles—understood as a certain form of prima facie norms (‘optimization requirements’) that are very similar to the principles in principlism.

23 I am only drawing a broad picture here. The outline is not meant to be exhaustive.

24 My use of the term “applicant” here and in the following is of course not meant to level the distinction just drawn. It would be more precise to speak of the developer; however, this sounds so weird that I prefer to stick to the prima facie misleading term. Behind that use is the thought that the applicant often has to develop the norms in order to make them applicable.

25 Note that this understanding of discretion differs from the ordinary understanding of discretion as—roughly—the power to decide between alternatives.

26 Of course, no applicant is absolutely free to supplement; one is bound by the scope of the initial norm and substantially by other norms of the system, and, in law, especially by the notion of proportionality. The latter requires that the norm has a legitimate end, uses suitable means to achieve this end, that these means are the least intrusive to achieve the end (necessity), and that the means are proportionate in the narrow sense (balancing); see Klatt and Meister (2012) for a detailed account of proportionality as used in legal theory.

When such principles conflict, one principle prevails without rendering the other

unlawful. The supplementation is then—similar to the use of discretionary provisions—the

development of criteria for when the one principle prevails over the other (see Alexy 2002, chapter 3;

Koch and Rüßmann 1982, 244 f.). I hold these kinds of supplementation to be functionally equivalent

to specification in the form of expansion in Richardson’s terminology. Both discretionary provisions

and principles are types of norms that cannot themselves be used as a premise in a deductive argument

yielding a definite conclusion. The difference is, of course, that they are meant to be supplemented in

order to be used as such a premise, i.e. for the internal justification.
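
A schematic example may make the parallel to expansion concrete; the added criterion is a placeholder of my own, not drawn from any particular statute. Starting from the discretionary provision “If A does x, a fine can be imposed,” the applicant might supplement it with the norm “If A does x and A acted negligently, a fine shall be imposed.” The initial permission remains in force, the supplemented norm stands alongside it, and only the latter is universal and conditioned in the way required to serve as premise (1) of an internal justification.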

Another instance of development by adding new norms is gap-filling. Here, too, the new norm

stands alongside the initial norm without modifying it. Roughly two situations allow for gap-filling:

First, when the interpretation of the given norms reveals that the law does not cover the particular case

(law-immanent development) and, second, when it reveals that the law does not represent what the

legislature wanted to enact (law-exceeding development). To speak of a gap in the law presupposes a

certain area of human conduct that the law generally covers, as criminal conduct is covered by the criminal law. This is meant to exclude conduct that the law does not purport to cover, such as large parts of private family life or table manners. Whether or not the law has a gap depends on the law itself, on the legislature’s ends and intentions in creating that law, and on its plans. A gap is therefore an incompleteness contrary to the legislature’s plans (see Larenz 1991, 373). The same holds for ethics. One can only speak of a gap within an ethical theory when that theory is meant to cover a certain conduct but so far does not. Thus, there is no gap when something was intentionally left unregulated or was intentionally regulated in a different way.

Law-immanent development is a development within the given system. It aims at adding norms that fit a case that has, contrary to the legislature’s plans, so far remained unregulated. The primary method to

bridge that kind of gap is the use of analogy to relevantly similar problems that are already regulated

within the normative system (see Larenz 1991, 381 ff.). To guarantee stability, this new norm should

fit the legislative intention to the extent that it is known.27 In contrast, the development is law-

exceeding when it is “extra legem,” but “intra ius,” i.e. beyond the scope of the given law of the time,

but within the general ideas, concepts, and principles of the very legal system. The need for such a development can, for instance, arise when new technologies (with their possibilities and dangers) emerge that were simply unknown to the legislature at the time it created the law (see Larenz 1991, 413 f.). Barak offers a rather non-technical example:

Consider a will naming Richard and Linda as the heirs, where Richard and Linda are the

testator’s son and daughter. After the making of the will, but before the death of the testator, a

third child, Luke, is born. The facts show that the testator wanted Luke to inherit also, but he

failed to modify the will. Does the will permit Luke to inherit? Interpreting the will cannot make

Luke an heir. The interpretation is not “capable” of “cramming” Luke within the limits of

“Richard and Linda.” We need a non-interpretive doctrine, like the doctrine about filling in a

gap in a will, which can, according to the will, add Luke as an additional heir (2005, 61).

Including Luke in the will is not law-immanent because the will actually regulates the case. But it

regulates it in a way that the testator did not intend. Law-exceeding gap-filling is needed.

Very similar to gap-filling is the possibility, in case-law systems, of distinguishing the case at hand from a precedent case—or, more precisely, from its ratio (i.e., a certain legal rule)—that would be applicable to the new case. In order to distinguish the two, one has to show that some characteristics of the cases differ in a relevant way. Distinguishing is thus, roughly, the creation of a new legal rule by narrowing the ratio.28 The effect of distinguishing is that a court does not have to follow a precedent although this precedent applies to the case at hand. Since—in contrast to overruling—every court has the power to distinguish, this possibility is probably the main reason for the flexibility of the common law.

27 Note that there is a connection here to the theory of interpretation one endorses. A theory that does not put much emphasis on the legislator’s intention would use another rationale.

28 On constraining conditions for this kind of narrowing see Raz (2009, 186).


Courts thus distinguish the case at hand from the precedent rule without thereby modifying the precedent. The result is two norms: the initial precedent and the narrower (distinguished) rule.

The two types of development by adding norms—supplementation and gap-filling—differ in

two main respects: First, supplementation expands the normative system by adding more specific

norms; gap-filling expands the system by adding norms that may, but need not, be more specific. Second, the legislator intends supplementation, but not gap-filling.

In passing I noted that the relevant norms in one form of supplementation, conflicting

principles, are similar to the principles endorsed in principlism. In fact, they do have the very same

structure. They are the basic norms of the respective normative systems, so to speak, and hold prima

facie only. When they conflict, both types of norms must be balanced in order to resolve the conflict

(and the outweighed principle remains intact). The supplementation is then—similar to the use of

discretionary provisions—the development of criteria for when the one principle prevails over the

other. This is more or less the same in law and principlism: The criteria become part of the law or of

principlism by expansion. Not only can we use the new norm that explicitly names the relevant criteria as an all-quantified (absolute) norm in the deductive model of norm application; we can also use it in all future cases in which the same criteria are fulfilled, without repeating the whole balancing procedure. Balancing is thus a means of resolving particular cases (via deduction), and it systematically leads to the development of the respective normative system.
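The structure of this expansion can be captured in a simplified schema along the lines of Alexy’s law of competing principles; the notation is mine and merely illustrative. If principle P1 takes precedence over principle P2 under the circumstances C, then a rule holds according to which, whenever C obtains, the consequence required by P1 follows:

\[
(P_1 \,\mathrm{P}\, P_2)\,C \;\Longrightarrow\; \forall x\,\big(C(x) \rightarrow O\,R_{P_1}(x)\big)
\]

It is this all-quantified rule, not the act of balancing itself, that serves as the major premise in the deduction for the case at hand and for every future case in which C is fulfilled.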

Development by Revising Norms

Sometimes there is the need to develop a normative system not only by adding norms but by

revising them, i.e. the new norm replaces the initial norm. This, too, is familiar in legal theory—in

case-law as well as in rule-based systems, although the latter have comparatively limited authorization

to “modify the rules to fit the case at hand” (Richardson 1990, 287). The best example of norm revision is a court’s overruling of a precedent; another is the correction of mistakes. A revision is a very serious step and needs careful consideration, because it potentially affects the stability of the normative system even more than development in the form of adding norms,


which leaves the initial norm intact. People might rely on the norm one is considering modifying.29 Revisions (like all developments) depend on interpretations: one somehow has to realize the need for the modification. Interpretation and a form of analogical reasoning must lead to the conclusion that the particular case at hand falls under an existing precedent30 (otherwise the court could engage in a form of development that adds a norm). Some line of reasoning must then lead to the belief that the existing precedent, if applied, does not offer a good solution to the case at hand. Further reasoning must lead to another norm that fits the case—and future cases—better.

The need to revise a norm might also occur when a norm or precedent clearly states something

the legislator or court did not want to enact, i.e. when the legislator or court made a mistake.31

Knowing that the legislator or court actually wanted to state something else might lower the burden for

overruling the existing norm. But still one has to consider the consequences, since the overruling

might affect stability and violate reasonable reliance—and this holds independently of the logical form

of the norm, i.e. whether it is logically absolute or non-absolute.

We are now in a position to expand further on Richardson’s necessity claim, i.e. the claim that

stability in ethical theories requires initial norms that are non-absolute. His argument was that only

non-absolute norms allow for either kind of development, for true revisions and for expansions

29 To make clear that these considerations hold not only for law but also for ethics, imagine some ethical guidelines issued by a medical association or some practices in hospitals. Both can lead people reasonably to rely on these guidelines or practices and, for instance, to design their advance directives in accordance with them. But note that this distinction is one between revision and expansion, not between absolute and non-absolute norms.

30 I speak here and throughout this section somewhat imprecisely about precedents. For philosophically informed

accounts of precedents see Horty (2011), Sunstein (1993), or Brewer (1996).

31 It is hard to draw a sharp line between the correction of mistakes as a form of revision and the law-exceeding gap-filling just outlined as a form of expansion. I suggest that the difference lies in the point in time at which the “mistake” was made. It is a revision if the divergence between intention and actual norm occurs at the time the norm is made or enacted;

if the divergence only occurs later (e.g., because of new unforeseen technological innovations or, as in Barak’s

example, because a new child is born) it is law-exceeding gap-filling. The difference is that the former norm was

never “correct” whereas the latter was “correct” but only later became “unfit.”


without generating instability. I just showed that the two instances of norm revision—overruling and

correction of mistakes—are possible in legal systems. The reasoning behind revisions is neither

deduction nor specification; no formal method generates reasons for revisions. Rather, revisions—like

other forms of development—are necessary to allow for deductive applications (or specifications).

Moreover, neither form of norm revision generates more instability when applied to absolute norms than when applied to non-absolute norms. On the one hand, revisions need not be arbitrary; revisions of absolute as well as of non-absolute norms are constrained by the respective normative system. On the other hand, a revision is, by definition, a change in the normative landscape; it alters duties and might frustrate expectations. But stability does not require that this landscape never changes. In fact, all modes of norm development are an acknowledgment that normative systems are learning systems. Does this speak

against absolute norms and for non-absolute norms? Is there a substantial difference between the

following revisions? Consider the absolute norm “For all persons and situations, do not provide

surgery, nutrition and hydration for newborns with trisomy 18 syndrome” and its revision “For all

persons and situations, do provide surgery, nutrition and hydration for newborns with trisomy 18

syndrome”; contrast this with the non-absolute norm “Generally speaking, do not provide surgery,

nutrition and hydration for newborns with trisomy 18 syndrome” and its revision “Generally speaking,

do provide surgery, nutrition and hydration for newborns with trisomy 18 syndrome.” Does the first

revision—the revision of an absolute norm—really generate instability? To be sure, such a massive

revision would require very good reasons to be rational, but this does not depend on the logical form

of the norm; the revision of the same norm in non-absolute form is equally massive. Both revisions

change the normative landscape and will inevitably frustrate reasonable expectations. But such

changes are necessary in order to keep normative systems alive and learning; and they are possible

with absolute as well as with non-absolute norms. Richardson’s necessity claim is thus wrong.

Concluding Remarks: The Case for Deduction

The last sections showed that the development of norms is not unknown in legal theory. Different

forms exist and are recognized as legitimate and necessary for legal systems. They allow for some

flexibility without sacrificing stability; indeed they are compatible with a deductive method of norm


application. That these distinctions and concepts are necessary in law suggests that they are also necessary in the even broader and more convoluted field of ethics. The parallel to legal methods makes it very unlikely that specification can do all the work:

The lack of a differentiation between the application of a normative system and its development makes Richardson’s notion of glossing unnecessarily ambiguous. It is meant to play the role of

interpreting the norm as well as the role of developing norms. The differentiation between application

and development makes clear that application is not development, but that development might be

needed to make application possible.

Legal theory shows the functional range of the deductive model and its relation to other forms of

reasoning; the comparison further shows that specification’s range is even more limited. Once the

burden of having to do all the work is taken from methods such as specification and deduction, it becomes clear that it is not necessary to conceive of ethical norms as non-absolute. Different types of norms call for different forms of development, which then allow for their use in deductions. One

can thus stick to the use of absolute norms as a premise in a deductive argument while still including a

variety of non-absolute norms and allowing for flexibility, development, and revisability. Everything

Richardson wanted to allow for with non-absolute norms and specification also works with deduction

and non-absolute or absolute norms. The point is that the logical structure of the norms does not

dictate the use of either specification or deduction. Once specification is sufficiently specified in that

way, its severe problems become apparent. I suggest abandoning specification in favor of deduction.32

I have shown that stability in ethical theories depends on a bundle of features. The core feature is interpretation, because all the other features—deduction as a method of application and all the kinds of development that allow for deduction—depend in one way or another on the interpretation of the initial norms. The second part further revealed the embeddedness of deduction in other modes of reasoning. Needless to say, the same would hold for other methods of norm application; none could do all the work. This point has implications for the whole debate on

methods in applied ethics. Stability requires the collaboration of different kinds of reasoning based on

32 It is beyond the scope of this paper to address the concerns of casuists, virtue ethicists, narrativists, and

pragmatists regarding deduction (but see my forthcoming book).


interpretation. I suggest organizing this collaboration around a deductive structure of norm application

that has a clear place for interpretation and that works perfectly well with different kinds of norms, with absolute and with non-absolute norms. To be very clear: I do not believe that this does away with all the problems we have in making decisions or in justifying them retrospectively, but what it

does is important enough: It provides a clear framework for norm application and norm development

and for how the two interact; furthermore, the deductive structure performs every function that

Richardson designed specification for—and some more. Using this collaboration, one can claim

sufficient stability within ethical theories while eliminating some of specification’s problems. My

suggestion is thus useful for theories such as principlism, which are generally aware of the need to

employ not only one but several methods. As Childress has put it: “Instead of viewing application,

balancing, and specification as three mutually exclusive models, it is better, I believe, to recognize that

all three are important in parts of morality and for different situations or aspects of situations, as well

as often intertwined and overlapping.” (2007, 29)

This awareness is important. I hope that my analysis sheds some light on the different methods and their use, on how they are “intertwined and overlapping,” and on when one needs which method.

REFERENCES

Alexy, Robert. 1989. A Theory of Legal Argumentation. Oxford: Clarendon Press.

Alexy, Robert. 2002. A Theory of Constitutional Rights. Oxford: Oxford University

Press.

Arras, John. 2007. “The Way We Reason Now: Reflective Equilibrium in Bioethics.”

In The Oxford Handbook of Bioethics, edited by Bonnie Steinbock, 46-71. Oxford: Oxford

University Press.

Barak, Aharon. 2005. Purposive Interpretation in Law. Princeton, NJ: Princeton

University Press.

Barak, Aharon. 2006. The Judge in a Democracy. Princeton, NJ: Princeton University

Press.

Beauchamp, Tom L. 1984. “On Eliminating the Distinction Between Applied Ethics and


Ethical Theory.” The Monist 67: 514-531.

Beauchamp, Tom L. 2011. “Making Principlism Practical: A Commentary on Gordon,

Rauprich, and Vollmann.” Bioethics 25: 301-303.

Beauchamp, Tom L., and James F. Childress. 2013. Principles of Biomedical Ethics.

7th ed. Oxford: Oxford University Press.

Brewer, Scott. 1996. “Exemplary Reasoning: Semantics, Pragmatics, and the

Rational Force of Legal Argument.” Harvard Law Review 109: 923-1028.

Childress, James F. 2007. “Methods in Bioethics.” In The Oxford Handbook of

Bioethics, edited by Bonnie Steinbock, 15-45. Oxford: Oxford University Press.

DeGrazia, David. 1992. “Moving Forward in Bioethical Theory: Theories, Cases, and

Specified Principlism.” The Journal of Medicine and Philosophy 17: 511-539.

Gert, Bernard, Charles M. Culver, and K. Danner Clouser. 2000. “Common Morality

versus Specified Principlism: Reply to Richardson.” The Journal of Medicine and Philosophy 25:

308-322.

Grice, Herbert Paul. 1989. Studies in the Way of Words. Cambridge, MA: Harvard

University Press.

Hart, H.L.A. 1994. The Concept of Law. 2nd ed. Oxford: Clarendon Press.

Horty, John F. 2011. “Rules and Reasons in the Theory of Precedent.” Legal Theory

17: 1-33.

Klatt, Matthias, and Moritz Meister. 2012. The Constitutional Structure of

Proportionality. Oxford: Oxford University Press.

Koch, Hans-Joachim, and Helmut Rüßmann. 1982. Juristische Begründungslehre.

Munich: C.H. Beck.

Larenz, Karl. 1991. Methodenlehre der Rechtswissenschaft. 6th ed. Berlin/New York:

Springer.

MacCormick, Neil. 1978. Legal Reasoning and Legal Theory. Oxford: Clarendon

Press.

MacIntyre, Alasdair. 1984. “Does Applied Ethics Rest on a Mistake?” The Monist 67:


498-513.

Paulo, Norbert. Forthcoming. The Confluence of Philosophy and Law in Applied Ethics. Basingstoke:

Palgrave Macmillan.

Rauprich, Oliver. 2011. “Specification and Other Methods for Determining Morally

Relevant Facts.” Journal of Medical Ethics 37: 592-596.

Raz, Joseph. 2009. The Authority of Law. 2nd ed. Oxford: Oxford University Press.

Raz, Joseph. 2010. Between Authority and Interpretation. Oxford: Oxford University

Press.

Richardson, Henry S. 1990. “Specifying Norms as a Way to Resolve Concrete Ethical

Problems.” Philosophy & Public Affairs 19: 279-310.

Richardson, Henry S. 1995. “Beyond Good and Right.” Philosophy & Public Affairs

24: 108-141.

Richardson, Henry S. 1997. Practical Reasoning about Final Ends. Cambridge:

Cambridge University Press.

Richardson, Henry S. 2000. “Specifying, Balancing, and Interpreting Bioethical

Principles.” The Journal of Medicine and Philosophy 25: 285-307.

Strong, Carson. 2000. “Specified Principlism: What is it, and Does it Really Resolve

Cases Better Than Casuistry?” The Journal of Medicine and Philosophy 25: 323-341.

Sunstein, Cass R. 1993. “On Analogical Reasoning.” Harvard Law Review 106: 741-

791.

Tomlinson, Tom. 2012. Methods in Medical Ethics. New York: Oxford University

Press.