

IMPRECISE EPISTEMIC VALUES AND IMPRECISE CREDENCES

Abstract. A number of recent arguments purport to show that imprecise credences are incompatible with accuracy-first epistemology. If correct, this conclusion suggests a conflict between evidential and alethic epistemic norms. In the first part of the paper, I claim that these arguments fail if we understand imprecise credences as indeterminate credences. In the second part, I explore why agents with entirely alethic epistemic values may end up in an indeterminate credal state. Following William James, I argue that there are many distinct alethic values a rational agent can have. Furthermore, such an agent is rationally permitted not to have settled on one fully precise value function. This indeterminacy in value will sometimes result in indeterminacy in epistemic behavior—i.e., because the agent’s values aren’t settled, what she believes may not be either.

0. Introduction

Here are two brief but hard questions in epistemology:

(1) What is the relationship between alethic and evidential norms?
(2) How should an agent’s epistemic values affect her epistemic behavior?

The first question is about the relationship between evidence and the truth. Presumably, following your evidence is generally a good guide to the truth, but truth and evidence can and often do come apart. Can the goal of truth ever come into conflict with the goal of following one’s evidence?

The second question is about the relationship between what an epistemic agent cares about on the one hand and what she ends up believing on the other. In the practical case, rational agents who like vanilla ice cream will behave differently from agents who prefer chocolate. Epistemic agents also have different epistemic values—some care about truth, while others care about explanation or justification. Should such differences lead to differences in what they think even when they have the same evidence?

I don’t pretend to have full or complete answers to these questions. However, there has recently been an alleged conflict between two major programmes in formal epistemology whose resolution will, I think, at least shed some light on both.

The first is the Imprecise Credences programme, which is motivated primarily by evidential considerations. According to orthodox philosophical bayesianism, rational agents ought to assign a precise level of confidence—standardly represented as a real number between 0 and 1—to each proposition under consideration. This requirement, simple and elegant as it is, seems like an undue restriction on rational doxastic states. Consider the claim that the person sitting next to you has at least three cans of garbanzo beans in her cupboard, or that Greece will leave the Euro by 2030, or that Homer was a woman. At first glance, it’s absurd to think that rational agents are required to assign exact numerical credences to any of these propositions.



A more reasonable model allows for imprecise credences. Instead of using a single probability function to represent a doxastic state, we allow for a set of them. On this approach, an agent’s level of confidence in some proposition can be represented by, say, the interval [0.2, 0.3]. Here, she is at least 20% confident and at most 30% confident, but there’s no number x such that she’s exactly x% confident.

Imprecise bayesianism looks like a significant improvement over its orthodox rival in evidential terms. The evidence we have for certain claims simply doesn’t single out a single credence over the others. At the very least we ought to permit agents to have set-valued credences, or so it seems.

Imprecise credences, however, seem incompatible with a second programme, called accuracy-first epistemology, which is motivated by alethic considerations. From this point of view, imprecise bayesianism looks like dead weight at best. Accuracy-firsters think that the only thing of final epistemic value is accuracy—proximity between doxastic attitude and truth-value. On the precise model, it’s relatively easy to cash out what this means: if p is true (false), the closer your credence is to 1 (0), the more accurate it is, and the better off you are all epistemic things considered. In general, following evidential norms is good, but not because you care per se about obeying the evidence. Instead, obeying evidential norms is a good means for attaining the end of accuracy.

It’s far less clear how to understand what accuracy could amount to when we move to an imprecise model. Is it more accurate to have a confidence level of [0.2, 0.3] in a falsehood than a confidence level of [0.21, 0.29]? Is either of those more accurate than a precise credence of 0.26? It’s hard to say. As we’ll explore in a bit more detail below, a number of philosophers have argued that any attempt to measure the accuracy of imprecise credences will inevitably render them idle at best from an alethic point of view.

So now we’re in a bit of a quandary. If, as I do, you favour an accuracy-first epistemology, then it appears you have no way of rationalising an imprecise-credence model. Given the intuitive attractiveness of imprecise credences from an evidential point of view, this seems like a cost. Conversely, if you favour imprecise credences, then it appears you have to deny that they have anything to do with pursuit of the truth. The evidential norms you favour are, by your own lights, at least partially disconnected from the aim of fitting your doxastic state to the world.

The first goal of this paper is to propose one attractive way to resolve the tension. The second goal is to explore how purely alethic epistemic value can affect epistemic behaviour. The basic picture goes as follows. Even if all a rational epistemic agent should care about is accuracy, there are different reasonable ways of precisifying and pursuing accuracy. Rationality alone does not force agents to choose any precise notion of accuracy nor any precise strategy for achieving accuracy. Rational agents are therefore permitted to have indeterminate yet entirely alethic epistemic values.

Indeterminate values in turn lead to indeterminate credences, which we can naturally identify with imprecise credences. On this interpretation, when we say an agent has imprecise credence [0.2, 0.3] toward some proposition X, we don’t mean that her credence is literally the interval [0.2, 0.3]. Instead, we mean that there’s no fact of the matter whether her credence in X is really 0.22, 0.29,


or any other element of [0.2, 0.3].¹ This understanding of imprecise credences as indeterminate credences not only escapes the incompatibility arguments but also provides a positive account of how they fit with accuracy-first epistemology. In a slogan: Indeterminate values generate indeterminate credences.

To be clear, the aim is not to propose a knock-down argument for imprecise credences nor for interpreting them as indeterminate. Indeed, there will be a number of optional choice points along the way. Instead, the goal is to present a plausible and well-motivated picture of epistemic value that renders accuracy-first epistemology and imprecise credences compatible and that allows for differences in epistemic value to affect epistemic behaviour.

Here’s the plan. §1 presents imprecise credences, accuracy-first epistemology, and the basic argument for their incompatibility. §2 motivates the indeterminate interpretation of imprecise credences and shows why such an interpretation evades the incompatibility arguments. §3 explores the nature and variety of alethic epistemic value, demonstrates that indeterminate values lead to indeterminate credences, and discusses how we ought to make sense of imprecise credences from an accuracy-first perspective. §4 contrasts our account of imprecise credences with some alternatives. First we show that on the accuracy-first view, unlike on more orthodox accounts, agents with imprecise credences need not update by pointwise conditionalisation. Second, we argue that the imprecise view we’ve developed has some advantages over permissive bayesianism, according to which agents must select a single precise credence of their choosing from a set of maximally rational alternatives. §5 wraps up.

1. Imprecise Bayesianism and Accuracy-First Epistemology

1.1. Motivating Imprecise Credences. We often have evidence that is incomplete and non-specific. Consider, for instance, the following example from (Joyce 2010, 283):

Black/Grey: An urn contains a large number of coins which have been painted black on one side and grey on the other. The coins were made at a factory that can produce coins of any bias β : (1 − β), where β, the objective chance of the coin coming up black, might have any value in the interval 0 < β < 1. You have no information about the proportions with which coins of various biases appear in the urn. If a coin is drawn at random from the urn, how confident should you be that it will come up black when tossed?

Precise bayesians differ on exactly what you should think about the claim that the coin will come up black, which we’ll refer to as B. Some are subjectivists and think that any number of opinions consistent with the case are permissible, e.g., any level of confidence between 0 and 1.² Others are objectivists, who in this case, at least, will usually advocate that you adopt a uniform prior over values of β and end up assigning credence 1/2 to B.³

¹ This account of imprecise credences is developed in (Rinard 2015).
² The most prominent example is (de Finetti 1964).
³ Although objective bayesians tend to agree that you ought to have credence 1/2 in this case, they disagree in general about how to choose a credence function. For a variety of different flavours of objective bayesianism, see (Carnap 1950; Jaynes 2003; Rosenkrantz 1981; Solomonoff 1964a,b; Williamson 2010).


However, both of these responses require the agent to take a more definite stand on matters than the evidence alone seems to justify. Imprecise bayesians argue that, at the least, the nature of the evidence permits the agent not to have a completely precise attitude toward B. Her evidence is unspecific and incomplete, and her opinion can be too. Joyce (2010), for instance, argues:

[A uniform prior] commits you to thinking that in a hundred independent tosses of the black/grey coin the chances of black coming up fewer than 17 times is exactly 17/101, just a smidgen (≈ 1/606) more probable than rolling an ace with a fair die. Do you really think that your evidence justifies such a specific probability assignment? […] Or, to take another example, are you comfortable with the idea that upon seeing black on the first toss you should expect a black on the second toss with a credence of exactly 2/3, or, more generally, that seeing s blacks and N − s greys should lead you to expect a black on the next toss with a probability of precisely (s + 1)/(N + 2)? […] Again, the evidence you have about the coin’s bias (viz., nada!) is insufficient to justify such a specific inductive policy. Of course, any sharp credence function will have similar problems. Precise credences, whether the result of purely subjective judgments or “objective” rules […] always commit a believer to extremely definite beliefs about repeated events and very specific inductive policies, even when the evidence comes nowhere close to warranting such beliefs and policies. (pp. 283–4)
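The figures in this passage can be checked directly. With a uniform prior over the bias, the marginal probability of exactly k blacks in n tosses is the Beta integral, which comes out to 1/(n + 1) for every k; summing gives the 17/101 figure, and Laplace’s rule of succession gives the predictive probabilities. A quick sketch in exact arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def p_exactly(k, n):
    # With bias uniform on (0, 1), P(exactly k blacks in n tosses) is the
    # Beta integral C(n, k) * k! * (n - k)! / (n + 1)!, i.e. 1/(n + 1).
    return Fraction(comb(n, k) * factorial(k) * factorial(n - k), factorial(n + 1))

p_fewer_than_17 = sum(p_exactly(k, 100) for k in range(17))
print(p_fewer_than_17)                    # 17/101
print(p_fewer_than_17 - Fraction(1, 6))   # 1/606: the "smidgen" over a die's ace

def next_black(s, N):
    # Laplace's rule of succession: predictive probability of black after
    # seeing s blacks in N tosses, starting from the uniform prior.
    return Fraction(s + 1, N + 2)

print(next_black(1, 1))                   # 2/3 after black on the first toss
```

So Joyce’s numbers are exactly what the uniform prior delivers, which is his point: the evidence hardly seems to warrant so specific an inductive policy.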

One more general objection to precise credences that goes to the core of Joyce’s point is that they require agents to have an opinion about the comparative likelihood of each proposition under consideration. You must have your mind made up about whether X is more likely than Y, Y more likely than X, or that X and Y are equally likely. This requirement for totality sometimes goes beyond your evidence—you may have no basis for having a firm opinion about their relative likelihoods. Precise credences are thus in some evidential circumstances at best optional and at worst irrational according to imprecise bayesians.

For what follows, we’ll look at the former option, i.e., making sense of the permissibility of imprecise credences. That is, the thesis we’ll try to make sense of from an accuracy-first perspective is:

Imprecise: In the face of certain kinds of evidence, it is sometimes rationally permissible to adopt imprecise credences.

This permissive version of Imprecise seems to me well-motivated from an evidentialist perspective. That is, imprecise credences seem like a perfectly rationally appropriate response to some kinds of evidence. However, as we’ll see over the next few pages, it’s far harder to make sense of them from an alethic perspective.

1.2. Accuracy-First Epistemology. There are lots of reasons a particular credal state might be epistemically good. It might, for instance, be justified by the evidence, or informative, or coherent, or explanatory. Crucially, it could also turn


out to be highly accurate. The last of these properties, according to accuracy-first epistemology, is what really matters. Accuracy is the sole source of epistemic value. The higher your credence in truths, and the lower your credence in falsehoods, the better off you are all epistemic things considered.⁴

Many norms of interest in epistemology—probabilism, conditionalisation, the principal principle, and so on—don’t tell us directly to be accurate. So, accuracy-firsters must justify them through their connection with the rational pursuit of accuracy. Either violating them renders agents unnecessarily inaccurate, or following them is somehow a good means toward the end of accuracy.

For accuracy-first epistemology (AFE) to be successful, we then need an account of accuracy and explicit principles of rational choice. That is, we need to be able to show how an epistemic norm is part and parcel of the rational pursuit of epistemic value. We’ll first look at how we might carry this idea out in the context of precise bayesianism and then see why apparent problems arise when we extend it to imprecise credences.

1.3. The Epistemic Utility Programme. Because accuracy functions as a measure of a credence’s epistemic value at a world, it’s natural to treat accuracy as epistemic utility akin to practical utility. We can then turn to decision theory to discover candidate principles of rational choice to derive epistemic norms of interest. This idea is best illustrated with an example.

1.3.1. Probabilism. Joyce (1998, 2009) argues that a (precise) agent’s credence function should obey the axioms of probability as follows: Suppose Bob has credence function b, which doesn’t obey the axioms of probability. Then on any legitimate measure of accuracy, there’s some probability function b′ that is strictly more accurate at every world than b is. That is, on every legitimate measure, every non-probabilistic credence function b is strictly accuracy-dominated by some coherent function. Furthermore, no probability function is even weakly accuracy-dominated.⁵

Joyce then argues that only non-dominated options are rational.⁶ Therefore, only probability functions are candidates for being rationally permissible credence functions.

More explicitly, Joyce’s argument works as follows:

(I) The epistemic value of a credence function b at a world w is given by u(b, w) for some u ∈ U, where U is the set of legitimate accuracy measures.⁷

(II.a) For any non-probabilistically coherent b, there’s a b′ such that b′ is a probability function and b′ u-dominates b.

(II.b) There is no probability b′ that is u-dominated by any function.

(III) If a credence function b is u-dominated by an alternative credence function b′ that is itself undominated, then b is irrational.

⁴ We’ll be interested in the most recent wave of accuracy-first epistemology as it applies to partial belief. However, since it cares only about truth, it is a species of veritism. See (Goldman 1986) for a classic presentation of veritism in traditional epistemology.
⁵ I.e., if c is a probability function, then there’s no c′ that’s at least as accurate as c at every world and strictly more accurate at some world.
⁶ More precisely: Choosing a dominated option is irrational assuming there are some options that aren’t dominated.
⁷ For now, we leave open whether there’s a single correct measure.


(C) Therefore, all non-probabilistic credence functions are irrational.

(I) pins down potentially legitimate measures of total epistemic value. (II) is a mathematical theorem. (III) is a principle of rational choice that connects epistemic behaviour and epistemic value.

There are two clear ways to derive additional epistemic norms. The first is to change the principle of rational choice. Greaves and Wallace (2006) argue for conditionalisation, for example, by appealing to expected utility maximisation instead of dominance.⁸ That is, on the same measures of epistemic utility that Joyce appeals to, the updating policy that leads to the most expected epistemic accuracy is conditionalisation.
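The flavour of the Greaves–Wallace result can be seen in miniature. In the toy model below (the four worlds, the prior, and the partition are all made up for illustration), a grid search over possible post-learning credences in a proposition X recovers the conditional probabilities as the update minimising prior-expected squared error; this is a sketch of the idea, not the theorem itself.

```python
# Four worlds with a made-up prior; the agent will learn which cell of the
# partition {E, not-E} is actual, then adopt a new credence in X.
prior = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}
X = {"w1": 1, "w2": 0, "w3": 1, "w4": 0}   # truth-value of X at each world
E = {"w1", "w2"}                           # one cell of the partition

def expected_brier_in_cell(q, cell):
    # Prior-expected squared error of adopting credence q in X,
    # restricted to the worlds in the given cell.
    return sum(prior[w] * (X[w] - q) ** 2 for w in cell)

grid = [i / 1000 for i in range(1001)]
not_E = set(prior) - E
best_E = min(grid, key=lambda q: expected_brier_in_cell(q, E))
best_not_E = min(grid, key=lambda q: expected_brier_in_cell(q, not_E))

cond_E = prior["w1"] / (prior["w1"] + prior["w2"])      # P(X | E) = 1/3
cond_not_E = prior["w3"] / (prior["w3"] + prior["w4"])  # P(X | not-E) = 3/7
print(best_E, best_not_E)   # the grid optima sit at the conditional probabilities
```

Expected inaccuracy separates across the cells of the partition, so each post-learning credence can be optimised independently, and each optimum lands on the conditional probability.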

The second is to change the class of legitimate epistemic utility functions, i.e., measures of accuracy. As we’ll see in a moment, however, epistemic utility theorists tend to agree on certain constraints that any good measure must satisfy.

1.3.2. Measuring Inaccuracy. Before seeing why imprecise credences and accuracy-first epistemology are supposed to be incompatible, it will be useful to get a bit clearer on what exactly accuracy-firsters mean by accuracy. For our purposes, we need not take issue with the particular constraints on legitimate measures they endorse, so instead we here just survey some general principles along with some brief motivation.

First, a quick and non-substantive simplification. For technical convenience, it’ll be easier to use measures of inaccuracy or negative accuracy. That is, we seek constraints on acceptable measures of divergence from truth-value, instead of proximity to truth-value. According to AFE, rational agents seek to minimise this divergence.

Now back to the main question. If there’s one non-negotiable principle of a good measure, it’s that credences closer to truth-values are less inaccurate. A credence of 0.8 in a truth, for example, is less inaccurate than a credence of 0.7 in the same proposition. Likewise, an entire credence function that’s uniformly closer to the truth than another is overall less inaccurate.

We now cash this out formally using standard possible world semantics. Let W be a set of worlds, and F be a set of propositions over W. We let w(X) = 1 (w(X) = 0) if X is true (false) at w. bel(F) is the set of belief functions over F, where a belief function assigns some number x in [0, 1] to each proposition in F. If a belief function obeys the Kolmogorov axioms, then it’s a probability function.

A measure of inaccuracy (also known as a scoring rule) I is a function from bel(F) × W to ℝ≥0 that is intended to measure how close a belief function is to the truth at a given world. The fundamental constraint on legitimate inaccuracy measures is then the following:

Truth-Directedness: If |b(X) − w(X)| ≤ |c(X) − w(X)| for all X, and |b(Y) − w(Y)| < |c(Y) − w(Y)| for some Y, then I(b, w) < I(c, w).

⁸ There are also structurally similar accuracy-firster arguments for a number of other epistemic norms, including but not limited to: the Principal Principle (Pettigrew 2013), the Principle of Indifference (Pettigrew 2014), the Principles of Reflection and Conglomerability (Easwaran 2013), and norms governing disagreement (Moss 2011; Levinstein 2015).


Unfortunately, Truth-Directedness alone isn’t sufficient for the arguments of epistemic utility theory to work. To see why not, consider the absolute-value measure:

abs(b, w) = Σ_{X ∈ F} |b(X) − w(X)|

abs is clearly truth-directed since it simply sums up the absolute differences between credences and truth-values. However, under abs, some probability functions are dominated by non-probability functions.⁹
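The failure is easy to exhibit concretely, using the three-ball urn from footnote 9: under abs, the probabilistic credence function assigning 1/3 to each of Red, Green, and Blue is beaten at every world by the incoherent function assigning 0 to all three.

```python
def abs_score(b, w):
    # Total absolute distance between credences and truth-values at world w.
    return sum(abs(b[X] - w[X]) for X in b)

props = ["Red", "Green", "Blue"]
# One world per ball: exactly one of the three propositions is true.
worlds = [{p: int(p == drawn) for p in props} for drawn in props]

uniform = {p: 1 / 3 for p in props}   # the probability function
zeros = {p: 0.0 for p in props}       # the incoherent rival

for w in worlds:
    # zeros scores 1 at every world; uniform scores 4/3, so zeros dominates.
    print(abs_score(zeros, w), abs_score(uniform, w))
```

Since the dominance argument for probabilism is supposed to rule out exactly such incoherent functions, abs cannot be a legitimate measure of inaccuracy.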

There is a long and ongoing discussion in the literature about which constraints in addition to Truth-Directedness are philosophically justified.¹⁰

However, we can at least mathematically characterise which measures will work. The most important additional constraint is one that requires a certain kind of immodesty. The idea is that if you have a credence of x in some proposition, you ought to expect x to be the least inaccurate of the alternatives. Otherwise, you could never rationally hold x as a credence, since it would automatically come out as dis-preferred to some alternative. In other words, you’d hold credence x toward a proposition while simultaneously thinking some alternative credence x′ was less inaccurate.¹¹ Likewise, every probability function should expect itself to be the least inaccurate. More formally:

Propriety: If I is a legitimate measure of inaccuracy and b is a probability function, then for all distinct credence functions c, we have:

E_b I(b) < E_b I(c),

where E_b denotes the expected value function according to b.

So, Propriety says that each probability function assigns itself lowest expected inaccuracy according to any legitimate measure.

For our dialectical purposes, we can just accept this requirement, since we’ll be arguing below that imprecise bayesians should claim that there are multiple distinct measures of accuracy that are legitimate, and Propriety narrows rather than expands the space of legitimate measures.¹²

1.3.3. An Example. Later on, we’ll explore various proper measures in more detail. For now, let’s look at a single example for the sake of concreteness. One natural way to score an agent’s credal state at a world is to identify inaccuracy with mean squared error. I.e.,

Brier Score: BS(b, w) = (1/|F|) Σ_{X ∈ F} (w(X) − b(X))²

The Brier score simply sums up the square of the difference between the agent’s credence in each proposition and its truth-value, and then takes the average.

⁹ Imagine, for instance, an urn had a Red ball, a Green ball, and a Blue ball, one of which will be drawn at random (i.e., each with a 1/3 chance). An agent with credence 0 in each of the three propositions Red, Green, and Blue is guaranteed to be less inaccurate under abs than an agent with a credence of 1/3 in each. The agent with a credence of 0 in all three will receive a total score of 1, whereas an agent with credence 1/3 in all three will receive a score of 4/3 > 1.
¹⁰ See, for example, (Joyce 1998, 2009; Leitgeb and Pettigrew 2010; Levinstein 2012; Predd et al. 2009; Pettigrew 2016; Selten 1998).
¹¹ Compare: ‘I believe X, but I think a belief in ¬X is more accurate.’
¹² For a direct defense of Propriety see (Joyce 2009).


Sometimes, it will also be natural to score individual credences instead of entire credal states. We can likewise evaluate a particular credence simply by looking at its squared error. I.e.,

Local Brier Score: BS(x, i) = (i − x)²

Here, x is the agent’s credence, and i is the truth-value of the proposition in question. Context should make it clear below whether the local or global version is intended.
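The immodesty that Propriety demands is easy to check for the local Brier score. By the lights of a credence x, the expected inaccuracy of a credence y is x(1 − y)² + (1 − x)y², which is minimised at y = x. A quick grid search (the value 0.7 is an arbitrary illustration):

```python
def expected_local_brier(x, y):
    # Expected squared error of credence y by the lights of credence x:
    # with probability x the truth-value is 1, otherwise it is 0.
    return x * (1 - y) ** 2 + (1 - x) * y ** 2

x = 0.7                              # an arbitrary illustrative credence
rivals = [i / 100 for i in range(101)]
best = min(rivals, key=lambda y: expected_local_brier(x, y))
print(best)   # 0.7: by its own lights, x expects itself to be least inaccurate
```

The expectation is a quadratic in y with its minimum exactly at y = x, which is the strict propriety of the Brier score in the one-proposition case.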

The Brier Score (along with infinitely many other strictly proper measures of inaccuracy) will vindicate Joyce’s argument for probabilism along with the other accomplishments of epistemic utility theory. That is, if we use the Brier Score as a measure of epistemic disutility, then all and only non-probability functions are dominated, conditionalisation is the policy that minimises expected epistemic disutility, and so on.
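For a minimal instance of the dominance phenomenon (the particular numbers are made up for illustration): the incoherent function assigning 0.6 to both X and its negation is beaten at both worlds, under the Brier Score, by the coherent function assigning 0.5 to each.

```python
def brier(b, w):
    # Global Brier score: mean squared distance from truth-values at world w.
    return sum((w[X] - b[X]) ** 2 for X in b) / len(b)

b = {"X": 0.6, "notX": 0.6}   # incoherent: credences sum to 1.2
c = {"X": 0.5, "notX": 0.5}   # a coherent rival

worlds = [{"X": 1, "notX": 0}, {"X": 0, "notX": 1}]
for w in worlds:
    # c scores 0.25 at both worlds, b roughly 0.26, so c strictly dominates b.
    print(brier(c, w), brier(b, w))
```

This is one data point, not Joyce’s theorem; the theorem guarantees that every incoherent function has some coherent dominator like c, on every strictly proper measure.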

1.4. The Incompatibility Argument. Let’s now turn to the primary challenge of reconciling accuracy-first epistemology with imprecise credences.

Recently, a number of authors have published a variety of impossibility results that aim to show that, in fact, accuracy-first epistemology and imprecise credences are incompatible (Seidenfeld et al. 2012; Mayo-Wilson and Wheeler 2015; Schoenfield 2015). More specifically, any way of measuring the inaccuracy of imprecise credal states is sure to yield an unattractive result.

The formal arguments themselves can be rather nuanced, but I’ll here provide a simplified version without any bells and whistles. The aim isn’t to present anything water-tight (or nearly as water-tight as those found in the papers cited) but instead to give the reader a basic grasp of why reconciling AFE with imprecise credences looks especially challenging.¹³

Let’s compare the following cases:

Mystery Coin: The only evidence you have that is relevant to whether Heads is that the objective chance of Heads is between 0.05 and 0.95.

Fair Coin: You know the chance of Heads is exactly 0.5.

Suppose in Mystery Coin, you adopt imprecise credal state [0.05, 0.95] toward Heads, whereas in Fair Coin you adopt a precise state of 0.5. In each case, there are only two outcomes: one in which Heads is true, and one in which Heads is false. Inaccuracy is a function just of your credal state and how the world is. In particular, your actual level of inaccuracy doesn’t depend at all on the background evidence you have.

Now, let’s make the reasonable assumption that on any good measure of inaccuracy, I(0.5, 1) = I(0.5, 0). That is, if you have credence 0.5, your level of inaccuracy is fixed.¹⁴ For instance, on the Brier Score, your inaccuracy is 0.25 regardless of whether Heads or Tails.

Similarly, it seems, I([0.05, 0.95], 1) should be the same as I([0.05, 0.95], 0). Intuitively, you’re not any better off, from an alethic perspective, if Heads or Tails. The interval [0.05, 0.95] doesn’t seem to favour one conclusion over the other. Let’s suppose, then, that I([0.05, 0.95], 1) = I([0.05, 0.95], 0) = m and that I(0.5, 1) = I(0.5, 0) = s.

¹³ The argument I give most closely follows that found in (Schoenfield 2015).
¹⁴ More sophisticated variations of the argument can do away with this assumption.


Now, we might not want m to be a single number. After all, we might not want imprecise states to get precise numerical scores. So, we’ll just assume that one of the following holds:

(1) m is a better score than s,
(2) m is a worse score than s,
(3) m is neither better nor worse than s.

If m is a better score than s, we’re in trouble. In that case, it must always be irrational to have credence 0.5 in any proposition at all. After all, since accuracy just depends on your credal state and how the world turns out, you’d do better in Fair Coin by adopting a credal state of [0.05, 0.95] instead of 0.5. In other words, if m is better than s, 0.5 is accuracy-dominated and therefore always less epistemically valuable than a credence of [0.05, 0.95]. Surely, however, it’s sometimes rational to have a credence of 0.5 in some propositions (e.g., if you know a coin is fair). So, m can’t be better than s.

By analogous reasoning, if s is better than m, then it’s irrational to have an imprecise credence of [0.05, 0.95] no matter what. A generalised version of this argument then rules out imprecise credences as rationally permissible. This option gives up the game. Those hoping to reconcile imprecise credences with AFE can’t accept that m is worse than s either.

If s is neither better nor worse than m, then imprecise credences seem to do no alethic work. That is, an accuracy-seeking agent would never have reason to prefer an imprecise credence to some precise one.

However, the situation is in fact worse than that from an accuracy perspective, as Schoenfield (2015) points out. In Fair Coin, having credence [0.05, 0.95] toward Heads violates the Principal Principle, which is supposed to be a rational requirement. However, if s is neither better nor worse than m, then it’s never determinately better to be in credal state [0.05, 0.95] than in credal state 0.5 toward some proposition. So, this option allows for violations of the Principal Principle. If we assume that the Principal Principle is better established than either accuracy-first epistemology or imprecise credences, one of the latter two should go.

To respond to this argument, we should first look at two subtly different ways of understanding what imprecise credal states amount to.

2. Indeterminacy and Imprecise Bayesianism

2.1. The Formal Representation. As mentioned above, when we’re interested in an imprecise agent’s attitude toward a single proposition, we can represent it with a set or interval. For instance, in Mystery Coin, we might represent her with the interval [0.05, 0.95].

We can also use sets of probability functions to represent her entire doxastic state over more than one proposition. Suppose, for instance, Alice has an imprecise credence [0.2, 0.3] in X, imprecise credence [0.3, 0.4] in Y, and precise credence 0.8 toward Z. How, formally, should we capture exactly what she thinks? Assuming she has no other precise views, Alice’s doxastic state R (called her Representor) is

{c ∈ Prob : c(X) ∈ [0.2, 0.3], c(Y) ∈ [0.3, 0.4], c(Z) = 0.8}


On the orthodox picture, facts about Alice's opinion correspond to facts that are true of each element of R.¹⁵ For instance, since every element of R assigns a lower precise probability to X than to Z, Alice thinks X is less likely than Z. However, since some credence function assigns .3 to both X and Y, Alice has no precise opinion as to whether X or Y is strictly more likely.
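The supervaluationist bookkeeping here is easy to check numerically. The following sketch is my own illustration, not from the paper; the sampling scheme and helper names are assumptions. It samples members of Alice's representor and confirms which comparisons hold on every member:

```python
import random

# Illustrative sketch: facts about Alice's opinion are the facts that
# hold on *every* element of her representor R.
random.seed(0)

def sample_from_R():
    """One admissible precisification of Alice's doxastic state."""
    return {"X": random.uniform(0.2, 0.3),   # c(X) in [.2, .3]
            "Y": random.uniform(0.3, 0.4),   # c(Y) in [.3, .4]
            "Z": 0.8}                        # c(Z) = .8

precisifications = [sample_from_R() for _ in range(10_000)]
# The boundary function with c(X) = c(Y) = .3 also belongs to R:
precisifications.append({"X": 0.3, "Y": 0.3, "Z": 0.8})

# Determinate: every member of R ranks X below Z.
assert all(c["X"] < c["Z"] for c in precisifications)

# Indeterminate: not every member ranks X strictly below Y, so there
# is no settled fact about that comparison.
assert not all(c["X"] < c["Y"] for c in precisifications)
```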

2.2. Interpretation. There are two importantly distinct ways to interpret how R represents Alice's doxastic state:

Determinate: Alice determinately identifies with the set of probability functions R.

Indeterminate: It’s determinate that Alice’s credence function is a mem-ber of R, but it’s indeterminate which member of R it is.

On the first option, there's no indeterminacy at all. Alice's doxastic state simply is the set R. On the second, there's no fact of the matter what her doxastic state is exactly.

The orthodox view above fits naturally with the Indeterminate interpretation. After all, there is a clear analogy to supervaluationist semantics. On the supervaluationist approach, vague terms have admissible and inadmissible precisifications—roughly, reasonable and unreasonable ways of completely disambiguating the term. If, on every admissible precisification, a proposition comes out true, then it is determinately true. If, on every admissible precisification, it comes out false, then it is determinately false. If it comes out true on some but not other precisifications, then it is indeterminate whether it is true or false.

As Rinard (2015) points out, imprecise credences behave very much like vague concepts under supervaluationism:

We can apply [the] supervaluationist strategy to doxastic imprecision by seeing each function in your set as one admissible precisification of your doxastic state. Functions excluded from your set are inadmissible precisifications. Whatever is true according to all functions in your representor is determinately true; if something is true on some, but not all functions in your representor, then it's indeterminate whether it's true. For example, if all functions in your set have b(A) > b(B), then it's determinate that you're more confident of A than B. If different functions in your set assign different values to some proposition P, then for each such value, it's indeterminate whether that value is your credence in P. (p. 2, minor changes)

In other words, the orthodox interpretation is at least structurally supervaluationist, as it treats each function in Alice's representor the same way supervaluationism treats precisifications in vagueness. So, the Indeterminate reading is at least a natural one and worth exploring further in the context of accuracy-first epistemology.¹⁶

¹⁵ See, for instance, (Hájek 2003; Levi 1985; Joyce 2005, 2010; van Fraassen 1990; Walley 1991).
¹⁶ Aside from (Rinard 2015), however, not much has been written explicitly on whether to endorse Determinate or Indeterminate. That is, few proponents of ICs have said directly whether Alice's doxastic state is really her representor itself, or whether instead the representor is merely the set containing admissible precisifications.


2.3. The Incompatibility Arguments Revisited. Let's now return to the incompatibility arguments presented above. They show that if we assign a score of m to an imprecise credence of [.05, .95] and a score of s to .5 in Mystery Coin, we apparently have no good options.

Now, this seems like a big problem on the Determinate view. If Alice's doxastic state in Mystery Coin is a set, then we need some way to score that set as a whole. That is, we need some way of saying when an accuracy-seeking agent ought to prefer being in the set-valued doxastic state to being in a precise credal state. So, given the truth of Determinate, we're in trouble.¹⁷

However, on the Indeterminate view, Alice's attitude toward Heads in Mystery Coin isn't really the interval [.05, .95]. More precisely: her doxastic state isn't some set of functions that assign credence between .05 and .95 to Heads. Instead, it's indeterminate what her credal state is. So, assigning a score to the whole interval is, on this view, simply a category mistake. There's no fact of the matter how inaccurate Alice is, since it's indeterminate which credence function is hers.

Indeterminate thus escapes the incompatibility arguments. It denies the presupposition that there's some way or other to score a representor as a whole. However, by leaving an agent's inaccuracy score indeterminate, it doesn't tell us why imprecise credences might be at all desirable to an accuracy-firster.

The answer I’ll develop below is based on the claim that Alice should bepermitted to have indeterminate alethic values. Although all she ought to careabout, epistemically speaking, is accuracy, there isn’t any single way she caresabout accuracy, nor any precise notion of accuracy she places above all others. Ifher values are thus indeterminate, she can end up with indeterminate credencesas a result. Let’s explore this idea further now.

3. Imprecise Epistemic Values

At first blush, it seems that accuracy-first epistemology has already settled the question of epistemic value. All an agent should care about is having her doxastic state come close to matching actual truth-values. Epistemology thereby becomes a matter of determining what sorts of epistemic actions and policies are or are thought to be most truth-conducive. Quine (1986), for instance, once took a view of this sort:

Normative epistemology is a branch of engineering. […] It is a matter of efficacy for an ulterior end, truth. […] The normative here, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed. (pp. 664–5)

¹⁷ Konek (2015) provides one way of scoring set-valued credences. On his approach, an agent with representor R receives a numerical score which is a weighted average of the least and most inaccurate members of R. E.g., for λ ∈ [0, 1], we have:

Konek-Brierλ: KBSλ([a, b], i) = λ · min_{x∈[a,b]} BS(x, i) + (1 − λ) · max_{x∈[a,b]} BS(x, i)

Depending on the value of λ, precise credences can be guaranteed to do worse than some imprecise credences. For instance, setting λ = 3/4 will make the set-valued credence [.05, .95] dominate .5. For Konek's approach to work, then, the weights have to change depending on the evidential situation; otherwise some agents would end up prohibited from ever adopting a precise credence regardless of the background evidence.


However, as we’ll see shortly, merely caring about truth as the final end ofepistemic action is not sufficient to make epistemology merely a problem of en-gineering. Instead, as William James famously argued, much remains unsettled:

There are two ways of looking at our duty in the matter of opinion. […] We must know the truth; and we must avoid error,—these are our first and great commandments as would-be knowers; but they are not two ways of stating an identical commandment […] Believe truth! Shun error!—these, we see, are two materially different laws; and by choosing between them we may end by coloring differently our whole intellectual life. We may regard the chase for truth as paramount, and the avoidance of error as secondary; or we may, on the other hand, treat the avoidance of error as more imperative, and let truth take its chance. (1896, §VII)

One way to bring out the distinction between the two great commandments in the context of full belief is to notice that each commandment is individually easy to satisfy. An agent can believe all truths simply by believing all propositions, yet she is thereby sure to violate the commandment to shun error. Likewise, an agent can suspend belief about each proposition and avoid error, yet she gives up the chance to believe truths.

A similar lesson applies to precise credences. An agent who's opinionated—with credences close to 0 or 1—has the chance to be extremely accurate, but she also risks great inaccuracy. In turn, an agent with credences closer to the middle of the spectrum protects herself from alethic disasters, but she also has no chance of very low inaccuracy. Deciding on a credence requires an agent to strike a balance between these two great alethic commandments.

We’ll use a broadly Jamesian theme to develop an accuracy-first approachto imprecise credences. We’ll first look at two ways in which alethic valuescan plausibly rationally differ: just as with practical decision theory, an agent’schoices can be affected either by the precise nature of her utility function or bythe method she uses to choose among her available options. If it’s rationallypermissible not to have fully determinate epistemic values, then it’s rationallypermissible to have imprecise credences. Thus, indeterminate values generateindeterminate credences.

3.1. Scoring Rules and Epistemic Value. In this section, we examine precise measures of alethic epistemic value. As mentioned above in §1.3.2, there are an infinite variety of measures of inaccuracy, known as proper scoring rules. In accuracy-first epistemology, proper scoring rules play the role of epistemic disutility functions, which reflect an agent's alethic values (Joyce 2009; Moss 2011; Pettigrew 2016; Konek and Levinstein 2017).

Scoring rules are purely alethic, in the sense that they are truth-directed and simply measure divergence between credence and truth-value. Nonetheless, they encapsulate different notions of value. First, we'll see that different scoring rules disagree about the rank-order of epistemic options. That is, they disagree about which credence functions an agent should prefer at which worlds. Second, they importantly disagree about the cardinal level of epistemic risk involved in epistemic decisions. Third, no scoring rule in particular seems to give an answer


that’s clearly better than the others. The upshot is that if no one scoring ruleis privileged, the notion of accuracy that rational agents should care about isitself imprecise.

3.2. Examples. Although infinitely many different scoring rules satisfy the needed constraints, we'll focus on three popular measures that will do the job accuracy-firsters want.¹⁸ That is, these measures will, combined with the right decision-theoretic principles, underwrite arguments for probabilism, conditionalisation, the Principal Principle, and so on.

Brier Score: BS(x, i) = (i − x)²
Log Score: Log(x, i) = −ln(|(1 − i) − x|)
Spherical Score: Sph(x, i) = 1 − |(1 − i) − x| / (x² + (1 − x)²)^{1/2}

As before, x is the agent's credence and i = 1 or 0 depending on whether the proposition in question is true or false. As we saw earlier with the Brier Score, we can easily generate global versions of each score simply by averaging the local scores. Let's now see how these scoring rules encode different alethic values.
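As a sanity check, the three rules can be transcribed directly and tested for strict propriety on a grid. This sketch is my own illustration (the function names are assumptions, and the grid check illustrates propriety rather than proving it):

```python
import math

# The three rules above; x is the credence, i the truth-value (1 or 0).
def brier(x, i):
    return (i - x) ** 2

def log_score(x, i):
    return -math.log(abs((1 - i) - x))

def spherical(x, i):
    return 1 - abs((1 - i) - x) / math.sqrt(x ** 2 + (1 - x) ** 2)

def expected(rule, p, x):
    """Inaccuracy of report x as expected by credence p."""
    return p * rule(x, 1) + (1 - p) * rule(x, 0)

# Strict propriety on a grid: each credence p expects itself to be
# strictly less inaccurate than any rival report.
grid = [k / 100 for k in range(1, 100)]
for rule in (brier, log_score, spherical):
    for p in (0.1, 0.37, 0.5, 0.9):
        assert min(grid, key=lambda x: expected(rule, p, x)) == p
```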

3.3. Ordinal Differences. We begin with ordinal differences. Consider the following case:

Lottery: An urn contains four balls: A, B, C, and D. As a matter of fact, A is chosen.

Suppose Alice, Bob, and Carol have the credences shown in Table 1 over which ball was selected:

         A     B     C     D
Alice    .005  .275  .230  .490
Bob      .033  .127  .137  .703
Carol    .033  .474  .088  .405

Table 1. Credences in Lottery

Given that A was actually chosen, who among our three characters is more accurate than whom? Or, put differently, who is—from a purely alethic perspective—better off epistemically?

It’s hard to say. On the one hand, Alice has the lowest credence in the trueproposition A, so in one respect she’s doing worst. However, Bob has a very highcredence (.703) in the false proposition D, whereas Alice’s highest credence ina false proposition is just .49. So, it’s not clear whether Alice is more or lessinaccurate than Bob. What about Alice versus Carol? Again, it’s not clear. Carolis definitively more accurate than Alice on A,C , andD, but is much less accurateon B. No one ordering leaps out as the one all rational agents must agree upon.

As you may suspect, our quantitative measures of inaccuracy also disagree about the ordering. According to the Brier Score, Carol is least inaccurate, followed by Alice and then Bob. According to the Log Score, Carol is again least inaccurate, followed by Bob, and then Alice. And according to the Spherical Score, Alice is least inaccurate, Carol is second, and Bob is last.
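These three orderings can be recomputed from Table 1. The sketch below is my own computation (function names are illustrative); it averages the local scores, as the global versions described above require, and checks all three rankings:

```python
import math

# Credences from Table 1; ball A is the winner.
truth = [1, 0, 0, 0]
credences = {
    "Alice": [0.005, 0.275, 0.230, 0.490],
    "Bob":   [0.033, 0.127, 0.137, 0.703],
    "Carol": [0.033, 0.474, 0.088, 0.405],
}

def brier(x, i):
    return (i - x) ** 2

def log_score(x, i):
    return -math.log(abs((1 - i) - x))

def spherical(x, i):
    return 1 - abs((1 - i) - x) / math.sqrt(x ** 2 + (1 - x) ** 2)

def inaccuracy(rule, cs):
    """Global score: the average of the local scores."""
    return sum(rule(x, i) for x, i in zip(cs, truth)) / len(cs)

def ranking(rule):
    """Names sorted from least to most inaccurate."""
    return sorted(credences, key=lambda n: inaccuracy(rule, credences[n]))

assert ranking(brier)     == ["Carol", "Alice", "Bob"]
assert ranking(log_score) == ["Carol", "Bob", "Alice"]
assert ranking(spherical) == ["Alice", "Carol", "Bob"]
```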

¹⁸ For further discussion, see (Joyce 2009).


Each of these rules is truth-directed and proper. Nonetheless, they disagree about the accuracy rank-order in this case. So, agents with different scoring rules will disagree about which credence function is preferable to which. More abstractly: agents who differ in their alethic values can disagree about their epistemic preferences even when the state of the world is known.

Now, if we agree that different orderings are reasonable, there's a further question of whether rational agents are nonetheless obligated to make up their minds as to which among Alice, Bob, and Carol is more accurate than which other. More generally:

Totality: Given a world w and two credence functions b1 and b2, a rational agent would either prefer having b1 to b2 as her credence function, prefer b2 to b1, or be indifferent between b1 and b2.

According to this principle, agents must have, in effect, a single epistemic disutility function that completely ranks each credence function at each world.

However, it’s not clear why Totality is a rational requirement even for agentswho just care about seeking truth. After all, seeking accuracy seems fundamen-tally to mean that you prefer higher credences in truths to lower credences infalsehoods. That may well be rationally required. But that preference leavesa lot left to be determined, and there doesn’t, as yet, seem to be compellingepistemic reason to force agents to form a total preference ranking. Withoutspecific reasons to the contrary, we can assume that some rational agents maynot have any definitive views about whether Alice, Bob, or Carol is best off inthis situation.19

3.4. Cardinal Comparisons. Scoring rules can also differ greatly on the amount of epistemic risk involved in epistemic decisions. For simplicity, let's focus on how the Brier Score and the Log Score evaluate a single credence in a proposition H. Suppose Alice has credence .01 in H, and Bob has credence .001 in H.

         x     BS(x,1)  BS(x,0)  Log(x,1)  Log(x,0)
Alice    .01   .98      10⁻⁴     4.6       10⁻²
Bob      .001  .998     10⁻⁶     6.9       10⁻³

Table 2. Approximate Brier and Log Scores

Every truth-directed scoring rule agrees what the rank-order is at each world in this case. However, the Brier Score and Log Score disagree about how risky each credence is. To see this, we can look at the ratio of how much Alice stands to gain or lose in inaccuracy if she were to switch to Bob's credence. In the case of the Brier Score, if she adopted Bob's credence and H turned out false, her score would improve by 10⁻⁴ − 10⁻⁶. If, however, H turned out true, she'd increase (i.e., worsen) her score by .998 − .98. The ratio of possible gain in inaccuracy to possible loss is approximately 180:1. That's risky, to be sure, but not as risky as the same change in credence is on the Log Score: around 254:1. So, in comparison, the Log Score is much more sensitive to small changes in

19As we’ll see in §4.2, denying Totality comes with some alethic and evidential advantages.


Figure 1. The Local Brier, Log, and Spherical Scores. The ascending curves represent I(x, 0) and the descending curves represent I(x, 1) for the respective scoring rules. Note that the Brier and Spherical Scores are bounded by 1, but the Log Score is unbounded.

credence around 0. If H turns out true, on the Log Score, Bob is much, much worse off than Alice.
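The two ratios can be recomputed directly from the scoring-rule definitions. The following is my own sketch, with illustrative helper names:

```python
import math

def brier(x, i):
    return (i - x) ** 2

def log_score(x, i):
    return -math.log(abs((1 - i) - x))

def risk_ratio(rule, old, new):
    """How much the move from old to new risks, per unit of possible gain."""
    worsening   = rule(new, 1) - rule(old, 1)  # if H turns out true
    improvement = rule(old, 0) - rule(new, 0)  # if H turns out false
    return worsening / improvement

# Moving from Alice's .01 to Bob's .001:
assert 179 < risk_ratio(brier, 0.01, 0.001) < 182      # roughly 180:1
assert 250 < risk_ratio(log_score, 0.01, 0.001) < 258  # roughly 254:1
```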

Depending on what one cares about, either of these risk profiles can seem reasonable. On the one hand, both Alice and Bob are very confident that ¬H in absolute terms. Since .01 ≈ .001 ≈ 0, this way of looking at their credences pushes us to assign them relatively similar scores. The Brier Score recognises them as similar credences. On the other hand, Alice is ten times more confident in H than Bob is. Looked at this way, it seems Alice should count as a lot more accurate than Bob when H turns out true.²⁰

This point about the difference in risk-profile generalises: if an agent moves her credence in proposition X from x to x + ε, how much she stands to gain or lose if X is true will depend on the value of x and the scoring rule.²¹ One can get some sense of the different risk profiles of the various scoring rules we've considered by examining Figure 1.

Rationality seems to give us some leeway. Seeking the truth and avoiding error are compatible with different ways of balancing the two goals. Rational agents, it seems, need not make up their minds entirely about the exact details. However, as we'll see, we have more work to do before such imprecise values can lead to imprecise credences.

3.5. An Oddity for Precise Bayesianism. We've now shown how important scoring rules are to the epistemic utility program and how they reflect different

²⁰ One evocative but illicit way to bring out these two intuitions is to consider different types of claims. A forecast of a 1% chance of rain seems roughly as accurate as a forecast of a 0% chance. However, a forecast of a 1% chance of death is much different from a forecast of a 0% chance.
²¹ Each scoring rule is generated by an underlying measure on the unit interval. This measure represents, roughly, how important it is to have one's credence on the correct side of a given point in the interval. Some rules care more about the middle of the probability spectrum (Spherical), some care more about the ends (Log), and some don't privilege any particular part of the spectrum (Brier). For an extended discussion, see (Levinstein 2017).


kinds of alethic values. Supposing there is some rational leeway for agents to choose between these different values, it seems such values would naturally influence epistemic behaviour. For instance, it seems that agents who use the Log Score would naturally be a bit more skittish about lowering a credence from .01 to .001 than would agents who use the Brier Score.

However, the story is a bit more complicated than that. The problem here is that epistemic utility theorists require legitimate inaccuracy rules to obey Propriety. That is, on every acceptable rule, each probability function expects itself to be the least inaccurate. If a scoring rule isn't strictly proper, then it won't underwrite the main achievements of epistemic utility theory. For instance, probabilistically coherent credence functions won't dominate incoherent ones.

The requirement that scoring rules be strictly proper has severe effects on the role scoring rules can play in the heart of precise bayesian epistemology.²²

Let’s see why. Suppose Bob has credence function b and at first uses the BrierScore to measure his epistemic disutility. Bob then has an epistemic changeof heart and decides the Spherical Score really captures his alethic valuesbetter. Now that he’s converted to the spherical rule, how does his epistemicbehaviour change? Answer: not at all! By Bob’s lights, b is still the function thathe expects to minimise inaccuracy. So, despite the change in value, Bob stickswith the credence function he already had.

This apparent epiphenomenalism of epistemic value is more surprising, perhaps, when we consider the intuitive effects of epistemic values on learning. Suppose Carol begins life as an epistemic risk-taker. Although Carol only updates by conditionalisation, her credence function 'learns' quickly in the sense that, without all that much evidence, she tends to arrive at credences close to 1 or 0. Carol is aware of the Jamesian tradeoff at play between the exhortations to Believe truth! and to Shun error!—although she can quickly become accurate, she also risks massive inaccuracy. Naturally, Carol uses a scoring rule that reflects these values, such as the Brier Score.

As Carol gets older, she grows more conservative. She just doesn't have the same tolerance for error she did when she was young. Although the epistemic highs of low inaccuracy are great when she gets things right, now that she's aged, getting things massively wrong has become more punishing. Carol switches over to a more conservative scoring rule—the logarithmic function. What happens? Again, nothing. By Carol's lights, planning to update her current credence function by conditionalisation minimises expected inaccuracy. So the change in value doesn't manifest itself in any change in epistemic behaviour.

So, since bayesianism at its core says to be probabilistically coherent and update by conditionalisation, how could your alethic values in any way influence your epistemic behaviour? The answer, I think, is that they can affect how you choose a credence to have in the first place.²³ When looking at your evidence, you don't always already have a credence—precise or imprecise—at all. You need to look at the available options and pick one that's attractive. How you select your

²² See (Horowitz 2015) for an argument that different weightings of Jamesian goals can't be made sense of in the context of epistemic utility theory for precise bayesianism.
²³ More carefully: values affect which credences it's rational to start with in a given epistemic situation. I do not mean to suggest any commitment to doxastic voluntarism.


doxastic state in the first place can depend on your values. We now look at how this might work.

3.6. Two-Tier Conception of Evidence. To understand how agents may end up with a particular doxastic state, it's natural to adopt a common two-tier conception of evidence. To see how this works, let's return to our Mystery Coin example from earlier. Let Heads be the proposition that the coin Marisol is holding will land heads on the next flip, and let E be the proposition that the chance of Heads is between .05 and .95.

Epistemologists may disagree about what Marisol should think about Heads. Perhaps she's obligated to have credence exactly .5. Perhaps she can rationally adopt any credence between .05 and .95. Or perhaps she should end up in some imprecise credal state.

The important point now is just that, regardless of what she ultimately ought to do, we can think of Marisol's selection of a doxastic state in the following way: First, E eliminates from contention all probability functions that assign credence less than .05 or greater than .95 to Heads. Second, Marisol selects her credal state (precise or imprecise) from the functions remaining.²⁴

It may seem that this thesis requires us to deny unique, objective bayesianism. That is, it seems to fit naturally with:

Non-Uniqueness: For some bodies of evidence E, there is no precise credence function that uniquely responds in the objectively most epistemically rational way to E.

However, even the most austere objective bayesians have adopted a two-tier conception of evidence (Jaynes 1973; Williamson 2010). On their view, the first stage may leave a variety of options on the table, but the second stage always narrows the set of rational choices to exactly one. So, although the two-tier conception may not be compatible with every view of how evidence works, it's a big tent.

3.7. Selection Rules. Assuming the two-tier conception is right, after the first stage, evidence can leave us with a bunch of probability functions still in contention. In our example above, Marisol has to select from the probability functions that assign anything between .05 and .95 to Heads.

We now examine the process by which she chooses. Let's begin with two temporary simplifying assumptions. First, we'll assume Marisol will end up in a precise credal state. Second, we'll assume that Marisol uses the Brier Score to measure inaccuracy.

It may seem that if we’re forced into a precise view, Marisol ought to end upwith credence :5 in heads. After all, she doesn’t have any evidence that favoursor disfavours Heads over :Heads. But that move is a little too quick. SupposingMarisol adopts a credence of :5 in Heads, she’s sure to have a Brier-inaccuracyof :25: if Heads is true, she gets a score of �1� :5�2, and if Heads is false, shegets a score of :52.

On the other hand, if she adopts a credence of .4, she has the potential for a better score. If Heads is false, she'd end up with a score of .4² = .16. Of course, she

²⁴ The first stage of this process may not impose any constraints. If an agent simply has no evidence whatever concerning some proposition, she can select her credence from the entire interval.


risks a worse score of (1 − .4)² = .36. So, while adopting a credence of .5 plays it safe and minimises the loss in the worst-case scenario, it also maximises the loss in the best-case scenario.

Her ultimate credence therefore depends on her appetite for epistemic risk. This appetite is partly reflected in the selection rule she uses. That is, it's reflected in the policy she uses to choose a credence from those allowed by the evidence.

Although there are infinitely many potential selection rules, let's take a moment to review three.²⁵ Let O be the set of available credences after the first stage of the evidential process. Marisol can pick a credence in O via:

MiniMax: Select the credence that has the best worst-case outcome. I.e.,

    arg min_{x∈O} max_{i∈{0,1}} BS(x, i)

MiniMin: Select a credence that has the best best-case outcome. I.e.,

    arg min_{x∈O} min_{i∈{0,1}} BS(x, i)

Hurwiczλ: Select the credence that has the best weighted average of the best- and worst-case outcomes, with weights given by λ ∈ [0, 1]. I.e.,

    arg min_{x∈O} ( λ · max_{i∈{0,1}} BS(x, i) + (1 − λ) · min_{i∈{0,1}} BS(x, i) )

Each of these rules reflects a different epistemic risk-management policy. MiniMax heeds the commandment to Shun error! It looks at the options and chooses a credence based only on the maximum possible inaccuracy.²⁶ In the case of Mystery Coin, it recommends a credence of .5 under the Brier Score. MiniMin goes the opposite direction and zealously seeks to Believe truth! In this case, it recommends either a credence of .05 or .95. Hurwiczλ seeks a balance between the two great commandments.²⁷ Depending on the value of λ, Hurwiczλ can end up recommending any credence within the interval [.05, .95].
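A small grid search illustrates the three policies. This is my own sketch under the simplifying assumptions of §3.7 (precise outputs, the Brier Score, O = [.05, .95]); the helper names are illustrative:

```python
def brier(x, i):
    return (i - x) ** 2

# The options left after the first evidential stage: .05, .06, ..., .95
O = [round(0.05 + k * 0.01, 2) for k in range(91)]

def worst(x):
    return max(brier(x, 0), brier(x, 1))

def best(x):
    return min(brier(x, 0), brier(x, 1))

def hurwicz(lam):
    return min(O, key=lambda x: lam * worst(x) + (1 - lam) * best(x))

minimax = min(O, key=worst)
minimin = min(O, key=best)

assert minimax == 0.5                 # Shun error!
assert minimin in (0.05, 0.95)        # Believe truth!
assert hurwicz(0.25) in (0.25, 0.75)  # a Jamesian compromise
assert hurwicz(1.0) == 0.5            # lambda = 1 recovers MiniMax
```

With λ below .5, the interior optimum of the Hurwicz objective sits at λ or 1 − λ, which is why intermediate weights can recommend any credence in the interval.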

Now, although MiniMin is too extreme to be a rational policy, various versions of Hurwiczλ are arguably rational. The risk-aversion of MiniMax leads to potentially undue scepticism, since it always requires the most agnostic credence that remains among any set of options. Given our ultimate goals of finding a plausible place for imprecise credences in accuracy-first epistemology, I'll assume that no single selection rule is mandatory.

However, even if a single rule is privileged, the same selection rule can lead to different credences depending on the scoring rule. As we saw above, different scoring rules disagree about which credence function is more accurate than which others at various worlds. Therefore, rules that appeal to best- and worst-case outcomes will yield different results depending on the utility function.

3.8. Epistemic Value and Imprecise Credence. It should now be clear how imprecise epistemic values—even purely alethic ones—can lead to imprecise credences. Suppose an agent only cares about having accurate credences. However, at least one of the following two claims is true of her:

²⁵ For simplicity, we only look at these selection rules as applied to individual credences, not entire credence functions.
²⁶ See (Pettigrew 2014) for a discussion of this rule in epistemology.
²⁷ See (Konek 2015; Pettigrew 2015) for further discussion.


(1) Her values don't single out a single selection rule. For instance, her values may not decide between various versions of Hurwiczλ for λ in, say, [.5, .8].

(2) Her values don't single out a single scoring rule. Although she strictly prefers higher credences in truths and lower credences in falsehoods, facts about her preferences don't single out one scoring rule determinately.

If either holds, there won't always be a fact of the matter about which credence function is hers. All credence functions compatible with her epistemic values have equal claim.

Note that our supervaluationist interpretation of imprecise credences is key here. Because the agent's values are not fully determinate, there is no way to pin down a precise scoring rule and a precise selection rule that combine to reflect her values in a uniquely best way. So, various scoring rules paired with various selection rules can be equally reasonable precisifications of her epistemic values. In turn, various credence functions can be equally good precisifications of her doxastic state.

3.8.1. Scoring Imprecise Credences. We began with a puzzle that suggested that, from an accuracy-first perspective, imprecise credences were at best otiose. In our simplified version of the argument, a credal state of [.05, .95] would have ended up no more or less accurate than some particular precise credence, most likely .5. So, there could be no reason to prefer the imprecise credal state to the precise one, or vice versa.

We’ve now seen that there is instead a more quietistic response availablethat’s natural on our supervaluationist picture. Suppose Alice’s representorassigns �:05; :95� to some proposition X. On our view, it’s not the case that Alicethinks that �:05; :95� is less inaccurate than :5. In fact, she simply has no deter-minate opinion about how the interval stacks up against :5 at all. Comparingthe two is simply a category mistake.

Instead, her views about the inaccuracy of any credence x in the interval [.05, .95] relative to .5 are indeterminate. On some precisifications of her credal state, she expects .23 to be more accurate than .5. On others, she expects it to be less accurate. Each x in [.05, .95] is, on some precisification of her doxastic state, one that minimises expected inaccuracy by her lights.
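This point follows from propriety and can be checked numerically. A minimal sketch, my own illustration (`expected_brier` is a hypothetical helper name):

```python
def expected_brier(c, x):
    """Expected Brier inaccuracy of report x by the lights of credence c."""
    return c * (1 - x) ** 2 + (1 - c) * x ** 2

# The precisification assigning .23 to X expects .23 to beat .5 ...
assert expected_brier(0.23, 0.23) < expected_brier(0.23, 0.5)
# ... while the one assigning .6 expects .5 to beat .23.
assert expected_brier(0.60, 0.5) < expected_brier(0.60, 0.23)

# And by propriety, each c in [.05, .95] is its own expected best:
for c in (0.05, 0.23, 0.5, 0.95):
    assert expected_brier(c, c) <= min(expected_brier(c, x / 100)
                                       for x in range(5, 96))
```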

Since her credal state is indeterminate—there's simply no fact of the matter which element of [.05, .95] is really Alice's credence toward X—her inaccuracy is likewise indeterminate. Put slightly differently: because it's indeterminate whether .23 or .5 is the credence that she thinks does best at optimising her epistemic values and goals, it's indeterminate whether .23 or .5 really is her credence.

Now, one might object that if it's indeterminate which x in [.05, .95] is really Alice's credence, it's also indeterminate whether she prefers .05 − ε to .5 for small enough ε. After all, according to the precisification of her doxastic state that assigns .05 to X, .0499 is expected to be less inaccurate than .5. That's true, but it misses the point. On no precisification of her epistemic values is .0499 a maximally good credence to have.

We were also challenged to explain why imprecise credences didn't lead to permitted violations of the Principal Principle. The answer is straightforward on


our interpretation of what imprecise credences amount to. If Alice learns that the chance of Heads is x, it's determinate that obeying the Principal Principle maximises objective expected epistemic utility according to every strictly proper scoring rule (Pettigrew 2013). So, if Alice is rational, then according to every precisification of her values, she ends up obeying the Principal Principle. That is, even though her epistemic standards and values will sometimes disagree with one another in certain evidential situations, they should all agree in cases where the chances are known.

One might still raise a worry along the following lines. Suppose that Alice and Bob both know the chance of Heads is .5. Alice adopts credence .5, while Bob ends up in an imprecise credal state of [.05, .95]. Carol learns the chance of Heads as well, and she is deciding whether she should end up like Alice or Bob. Even though Bob's credal state is indeterminate, Carol can—let us suppose—take a brain scan of Bob and of Alice and decide whether to switch her brain state to match either of theirs. Since the brain is just a physical object, the brain state is in fact determinate. So, in this way, Carol can decide which brain state she'd prefer to be in. Since there's no fact of the matter whether Alice or Bob is more accurate, why should Carol prefer to be like Alice instead of like Bob?

The answer is that it’s not agents or brains that are directly evaluated for accuracy, strictly speaking. Instead, accuracy is what gives value to doxastic states. Bob, although in a determinate brain state, has no determinate credal state. From a purely epistemic point of view, Carol shouldn’t judge directly whether she’d prefer to be Alice or Bob. Instead, she can judge whether various precisifications of Bob’s credal state are in accord with her epistemic values. If she’s rational, she’ll balk at all those that don’t assign credence .5 to Heads. It’s the mental states themselves—not the agents—that bear epistemic value.

4. Comparison to Alternatives

We’ve so far seen that we can generate imprecise credences from imprecise values, and we’ve seen how imprecise credences are in fact compatible with accuracy-first epistemology. Before concluding, it’s worth briefly comparing the view we’ve developed with some alternatives.

4.1. Departure from Orthodoxy. We earlier noted that our Indeterminate interpretation fit with the orthodox view that facts about an imprecise agent’s doxastic state are those that each element of her representor agrees about. For instance, if for every c in Alice’s representor, c(X) > c(Y), then Alice is more confident in X than in Y.

However, there’s an important way in which we now depart from the orthodoxy. On the standard view, if Alice’s representor at t0 is R and she receives new evidence E, then she should update by pointwise conditionalisation.[28] That is, her new imprecise state at t1 should be represented by R_E = {c(·|E) : c ∈ R}.[29]
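The rule is easy to state computationally. The following toy sketch is my own (the world names and representor values are hypothetical, chosen only for illustration):

```python
# Pointwise conditionalisation: conditionalise every credence function
# in the representor R on the evidence E.

def conditionalise(c, e):
    """Return c(.|e): zero out worlds outside e and renormalise."""
    total = sum(c[w] for w in e)
    return {w: (c[w] / total if w in e else 0.0) for w in c}

# A representor with two precisifications over worlds w1, w2, w3.
R = [
    {"w1": 0.2, "w2": 0.3, "w3": 0.5},
    {"w1": 0.6, "w2": 0.2, "w3": 0.2},
]
E = {"w1", "w2"}  # evidence rules out w3

R_E = [conditionalise(c, E) for c in R]
print(R_E[0]["w1"])  # 0.4, i.e. 0.2 / (0.2 + 0.3)
```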

What’s the justification for this updating rule? We saw above, briefly, that in the case of precise bayesianism, conditionalisation can be justified through appeal to the standard decision-theoretic norm of expected utility maximisation.

[28] To be less contentious, we might say she should plan to update by conditionalisation, since some might object to any diachronic updating norms. This wrinkle need not concern us here.
[29] For another departure from the orthodox view, see Weatherson (2007).


Greaves and Wallace (2006) prove, roughly, that on any strictly proper scoring rule, updating by conditionalisation minimises expected inaccuracy.

The same cannot be said, however, for pointwise conditionalisation in the imprecise case. On our Indeterminate view, Alice is not, strictly speaking, in credal state R or R_E at any point. So, there’s no way R_E itself could be a state that minimises expected inaccuracy.

Instead, we have to be a bit more subtle. According to every c in Alice’s representor R, the credence function that minimises expected inaccuracy is c(·|E). However, no function in particular need determinately minimise expected inaccuracy. If c′ is also in R, then c′ thinks c′(·|E) is best, which of course may not be equal to c(·|E). So, it’s indeterminate whether Alice thinks c(·|E) would be best.

Now suppose that, after learning E, Alice’s representor at t1 is R′ ≠ R_E. Did she do anything wrong? Not necessarily. Suppose c is in R, but c(·|E) isn’t in R′. Bad move, according to c. However, so long as there’s some c′ ∈ R such that c′(·|E) is in R′, it’s not determinately true that Alice failed to minimise expected inaccuracy. Therefore, it’s not determinately true that she did anything irrational. In other words, there’s some precisification of her doxastic state at t0 and at t1 that makes Alice an expected inaccuracy-minimiser.
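A small toy model (mine, not the paper’s) makes the point vivid: even if the new representor omits the recommended posterior of one prior precisification, Alice is not determinately irrational so long as it retains the recommended posterior of another:

```python
# Check that R_prime, which drops c1's recommended posterior but keeps
# c2's, still leaves some inaccuracy-minimising precisification of Alice.

def conditionalise(c, e):
    """Return c(.|e): zero out worlds outside e and renormalise."""
    total = sum(c[w] for w in e)
    return {w: (c[w] / total if w in e else 0.0) for w in c}

E = {"w1", "w2"}
c1 = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
c2 = {"w1": 0.6, "w2": 0.2, "w3": 0.2}
R = [c1, c2]
R_prime = [conditionalise(c2, E)]  # only c2's recommended posterior survives

# Is some prior's recommended posterior present in R_prime?
ok = any(conditionalise(c, E) in R_prime for c in R)
print(ok)  # True: not determinately irrational
```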

One might try to object that Alice should determinately be an expected inaccuracy minimiser. After all, minimisation of expected disutility is the determinately rational thing to do. However, if Alice starts in R and ends up in R_E, then it’s indeterminate whether she’s in c or c′ at t0 and indeterminate whether she’s in c(·|E) or c′(·|E) at t1. In turn, it’s indeterminate whether she really minimised expected inaccuracy.

I take this heretical view—that Alice need not update by pointwise conditionalisation—to be a welcome departure from the orthodoxy. It means that if Alice begins epistemic life with some set R, she isn’t stuck with all the descendants of R forever after. If Alice changes or precisifies her epistemic values, her epistemic behaviour varies in a natural way. As we saw above, in precise bayesianism, once you have a credence function, you’re more or less stuck with updating it via conditionalisation forevermore. There’s no opportunity for values to affect how you learn. On this style of imprecise bayesianism, Alice has the option to let a change in values influence her learning behaviour without determinately violating the norm to maximise expected utility.

4.2. Imprecise Credences and Permissive Bayesianism. One popular alternative to imprecise bayesianism is permissivism. Permissivists agree with defenders of imprecise credences that sometimes evidence doesn’t single out a unique precise credence function as the maximally rational option. That is, permissivists and ICers agree with Non-Uniqueness above.

Permissivists and ICers disagree, however, about what’s rational to do in those situations. Unlike ICers, permissivists—or at least the species of them currently under discussion—think an agent is required to pick a single precise credence function from the set of rational options.

Is there a reason to favour imprecise credences over permissive ones? I think so, for two reasons. First, permissivists require agents to end up with opinions that, by everyone’s lights, must go beyond what the evidence objectively supports. They require agents to decide on a single credence function to adopt even though the evidence itself doesn’t privilege that credence function above the others. Perhaps such privileging is rationally licit, but it should not be mandatory. Imprecise credences rightly allow agents to remain undecided between alternative credence functions.

Second, once agents adopt a single precise credence function, they are committed to a variety of firm opinions. In situations in which multiple credence functions are rationally on a par, some of these opinions are not supported by the evidence. For instance, suppose Alice and Bob share evidence that makes any credence in [.2, .3] toward Heads maximally rational. Alice ends up with credence .24, and Bob ends up with credence .29 based on their respective epistemic values. Both Alice and Bob recognise that they chose their credences from a set of options that were rationally on a par. After they end up in a determinate credal state, they must nonetheless think that their own credence is uniquely best on every measure of epistemic value. Since whatever scoring rule they use is strictly proper, Alice thinks that .24 maximises expected epistemic utility, whereas Bob thinks .29 does.
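Alice and Bob’s predicament can be checked directly with the Brier score (a sketch of my own; any strictly proper rule yields the same verdicts):

```python
# Immodesty of strictly proper rules: each credence expects itself to be
# least inaccurate. Illustrated here with the Brier score.

def expected_brier(p, x):
    """Expected Brier inaccuracy of credence x, by the lights of credence p."""
    return p * (1 - x) ** 2 + (1 - p) * x ** 2

# Alice (credence .24) expects .24 to beat .29 ...
print(expected_brier(0.24, 0.24) < expected_brier(0.24, 0.29))  # True
# ... while Bob (credence .29) expects exactly the opposite.
print(expected_brier(0.29, 0.29) < expected_brier(0.29, 0.24))  # True
```

This is why neither can simply defer to the other’s choice without abandoning his or her own.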

That wouldn’t be so bad if Alice and Bob could recognise that this apparent disagreement was due to different utility functions. If I like chocolate ice cream and you like kale, we can recognise that we’re pursuing optimal snack-value given the difference in our tastes. In particular, I can say that given your preferences, you should expect to do best if you eat kale.[30]

However, Alice expects her credence function to be more accurate than Bob’s on every strictly proper rule. That is, she expects that on any measure acceptable to the epistemic utility theorist, her credence will come out less inaccurate. At the same time, she recognises that the evidence itself provides no reason to form this opinion over Bob’s view that his credence function is more accurate. So, (i) she realises that had she had a different epistemic utility function before she chose .24, she would have ended up with .29, (ii) she currently thinks that even on that alternative epistemic utility function, .24 is better than .29, and (iii) the evidence itself doesn’t support the claim that .24 is more accurate than .29 over the claim that .29 is more accurate than .24.

Again, that might be all right if Alice were merely permitted to form opinions that went beyond the evidence—i.e., to adopt opinions that the evidence does not uniquely support. The problem is that permissivism mandates that she form such unsupported opinions. So long as she ends up in some precise doxastic state or other, she must expect that her own credence function is the least inaccurate on every strictly proper measure.

Imprecise credences, on our picture, allow for a bit more modesty. Suppose R_A is Alice’s representor. She doesn’t expect any particular credence to be the least inaccurate. If Alice has both .2 and .25 in her representor, then there’s no fact of the matter which she expects to do better. Since the evidence, by stipulation, doesn’t objectively support any credence in the interval over any other, this seems like a superior response. She is not required to form opinions that exceed the evidence.

[30] See Horowitz (2015) for more on this issue.


5. Conclusion

We noted at the outset that imprecise credences can look attractive from an evidential perspective, but they also appear incompatible with accuracy-first epistemology. Appearances are deceiving, however. Different ways of valuing the truth—e.g., different scoring and selection rules—lead to different credences when evidence is unspecific. If agents have indeterminate values, they’ll in turn have indeterminate credences. Imprecise credences thus do not conflict with accuracy-first epistemology but instead naturally emerge from it.

References

Carnap, R. (1950). Logical Foundations of Probability. Chicago: University of Chicago Press.

de Finetti, B. (1964). Foresight: Its Logical Laws, Its Subjective Sources. Wiley.

Easwaran, K. (2013). Expected accuracy supports conditionalization – and conglomerability and reflection. Philosophy of Science 80(1), 119–142.

Goldman, A. (1986). Epistemology and Cognition. Harvard University Press.

Greaves, H. and D. Wallace (2006). Justifying conditionalization: Conditionalization maximizes expected epistemic utility. Mind 115(632), 607–632.

Hájek, A. (2003). What conditional probability could not be. Synthese 137(3), 273–323.

Horowitz, S. (2015). Epistemic value and the ‘Jamesian goals’. Unpublished manuscript.

James, W. (1896). The Will to Believe and Other Essays in Popular Philosophy. Longmans, Green & Company.

Jaynes, E. (1973). The well-posed problem. Foundations of Physics 3, 477–493.

Jaynes, E. (2003). Probability Theory: The Logic of Science. Cambridge: Cambridge University Press.

Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science 65, 575–603.

Joyce, J. M. (2005). How probabilities reflect evidence. Philosophical Perspectives 19, 153–178.

Joyce, J. M. (2009). Accuracy and coherence: Prospects for an alethic epistemology of partial belief. In F. Huber and C. Schmidt-Petri (Eds.), Degrees of Belief, Volume 342, pp. 263–297. Springer.

Joyce, J. M. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives 24, 281–323.

Konek, J. (2015). Epistemic conservativity and imprecise credence. Philosophy and Phenomenological Research.

Konek, J. and B. A. Levinstein (2017). The foundations of epistemic decision theory. Mind. Forthcoming.

Leitgeb, H. and R. Pettigrew (2010). An objective justification of bayesianism I: Measuring inaccuracy. Philosophy of Science 77, 201–235.

Levi, I. (1985). Imprecision and indeterminacy in probability judgment. Philosophy of Science 52(3), 390–409.

Levinstein, B. A. (2012). Leitgeb and Pettigrew on accuracy and updating. Philosophy of Science 79(3), 413–424.

Levinstein, B. A. (2015, March). With all due respect: The macro-epistemology of disagreement. Philosophers’ Imprint 15(13), 1–20.

Levinstein, B. A. (2017). A pragmatist’s guide to epistemic utility. Philosophy of Science.

Mayo-Wilson, C. and G. Wheeler (2015). Scoring imprecise credences: A mildly immodest proposal. Philosophy and Phenomenological Research.

Moss, S. (2011). Scoring rules and epistemic compromise. Mind 120(480), 1053–1069.

Pettigrew, R. (2013). A new epistemic utility argument for the principal principle. Episteme 10(1), 19–35.

Pettigrew, R. (2014, Online first). Accuracy, risk, and the principle of indifference. Philosophy and Phenomenological Research.

Pettigrew, R. (2015). Jamesian epistemology formalised: An explication of ‘The will to believe’. Episteme. Forthcoming.

Pettigrew, R. (2016). Accuracy and the Laws of Credence. Oxford University Press.

Predd, J., R. Seiringer, E. H. Lieb, D. Osherson, V. Poor, and S. Kulkarni (2009). Probabilistic coherence and proper scoring rules. IEEE Transactions on Information Theory 55(10), 4786–4792.

Quine, W. (1986). Reply to Morton White. In L. Hahn and P. Schilpp (Eds.), The Philosophy of W.V. Quine, Volume XVIII of The Library of Living Philosophers, pp. 663–665. Open Court Publishing Company.

Rinard, S. (2015, February). A decision theory for imprecise probabilities. Philosophers’ Imprint 15(7), 1–16.

Rosenkrantz, R. (1981). Foundations and Applications of Inductive Probability. Ridgeview Press.

Schoenfield, M. (2015). The accuracy and rationality of imprecise credences. Noûs 00(00), 1–19.

Seidenfeld, T., M. J. Schervish, and J. Kadane (2012). Forecasting with imprecise probabilities. International Journal of Approximate Reasoning 53, 1248–1261.

Selten, R. (1998). Axiomatic characterization of the quadratic scoring rule. Experimental Economics 1, 43–62.

Solomonoff, R. J. (1964a). A formal theory of inductive inference, part 1. Information and Control 7(1), 1–22.

Solomonoff, R. J. (1964b). A formal theory of inductive inference, part 2. Information and Control 7(2), 224–254.

van Fraassen, B. (1990). Figures in a probability landscape. In M. Dunn and A. Gupta (Eds.), Truth or Consequences, pp. 345–356. Kluwer.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities, Volume 42 of Monographs on Statistics and Applied Probability. Chapman and Hall.

Weatherson, B. (2007). The bayesian and the dogmatist. Proceedings of the Aristotelian Society 107(2), 169–185.

Williamson, J. (2010). In Defence of Objective Bayesianism. Oxford University Press.