Against mushy credence [part 2]

In this post I’ll look at some cases which might motivate a mushy credence view. Which particular mushy credence we use to account for any particular case will depend on the details of the interpretation we place on mushiness – on which kind of indeterminacy, in particular, we take to be represented by the provision of a set of credence functions rather than a single function. For a survey of the different possible interpretations of set-valued credal states, see Bradley [Synthese, 2009]. In part 3 of this post I’ll consider these interpretations in more detail, and raise some problems for the use of sets of credence functions based on the sheer multiplicity of suggested interpretations. But let me anticipate the results of that discussion by restricting my attention in this post to what Bradley calls the ignorance interpretation of mushiness, and seeing how that interpretation is most naturally applied to our problem cases.

Here’s what Bradley says about the ignorance interpretation:

“The agent may be unable to arrive at a judgement because she lacks the informational basis for doing so. This seems to be the kind of situation in which subjects find themselves, for instance, when placed in an Ellsberg paradox set-up in which the consequences of their decisions depend on the colour of a ball drawn from an urn containing an unknown proportion of balls of different colours. Many authors argue that in these kinds of situation the agent is not merely in a state of uncertainty in the sense that they don’t know for sure which colour ball will be drawn but can assign a probability to the prospect of each colour, but are rather in a state of ignorance in the sense that, such are the limits on what they know and can find out, that they have no non-arbitrary basis for assigning such a probability.”

So how can we apply this to the first of our cases?

[Percy] Someone you’ve never met emails to tell you that they use the name ‘Percy’ for some particular proposition; they don’t tell you anything about what Percy says. What is your credence that Percy is contingent? What is your credence that Percy is true?

Here the relevant ignorance is ignorance of which proposition Percy is. Therefore, applying the ignorance interpretation involves assigning a different credence function for each candidate for Percy’s identity. In particular, each candidate for Percy’s identity carries with it a credence distribution over contingency and non-contingency. For example, we are very confident indeed that ‘there are no round squares’ is noncontingent, while being equally confident that ‘there are no round windows in the White House’ is contingent. So it would seem that if we are to use a set of precise credences to represent our belief state concerning Percy’s contingency, then at least some of the credences in the set should be 1 or close to 1, and others 0 or close to 0. Perhaps there are some sentences whose status as contingent is controversial. ‘Other times exist’, ‘energy is conserved’, ‘ghosts exist’ might all fall into this category, and (if so) would contribute middling credences to the set. By aggregating the credences associated with all candidates of the relevant sort, we might hope to end up with a set spanning the [0,1] interval, or at least a set which comes very close to spanning that interval.

What about my state of belief in Percy’s truth? Again, different Percy-candidates come with different credence distributions over truth and falsity. If Percy is ‘1=1’, then we might assign credence 1 to it; if Percy is ‘1=2’, we might assign credence 0. And there are plenty of propositions to which we assign middling credence. So again, our state of belief in Percy’s truth will plausibly be represented by a set of credence functions spanning the [0,1] interval.
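To make the construction concrete, here is a minimal sketch (in Python) of how the ignorance interpretation builds the representing set: each hypothesis about Percy’s identity contributes one sharp credence in contingency and one in truth, and the represented belief state is just the set of those sharp values. The candidate propositions and the numbers attached to them are invented purely for illustration.

    # A few hypothetical candidates for Percy's identity, each paired with an
    # invented sharp credence in its contingency and in its truth.
    candidates = {
        "there are no round squares": {"contingent": 0.001, "true": 0.999},
        "there are no round windows in the White House": {"contingent": 0.999, "true": 0.7},
        "other times exist": {"contingent": 0.5, "true": 0.9},
        "1=1": {"contingent": 0.001, "true": 1.0},
        "1=2": {"contingent": 0.001, "true": 0.0},
    }

    # On the ignorance interpretation there is one credence function per candidate;
    # the mushy credence in Percy's contingency (or truth) is the set of values
    # those functions assign. With enough candidates the sets approach [0, 1].
    contingency_set = sorted({c["contingent"] for c in candidates.values()})
    truth_set = sorted({c["true"] for c in candidates.values()})

    print(contingency_set)
    print(truth_set)

Note that the resulting sets are only as rich as the stock of candidates, which is exactly the source of the ‘gappiness’ worry raised below.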

The case of Percy is relatively suitable for mushy treatment, as it’s plausible that there exist continuum-many propositions; this makes it at least a mathematical possibility that for any real number N between 0 and 1 we could find a proposition such that our credence in that proposition’s contingency or truth is N. But notice that even this is not guaranteed; it seems unlikely to be a requirement of rationality, for example, that for every N between 0 and 1 an agent should have a credence of exactly N in some proposition’s being contingent. That is, it doesn’t seem out of the question that for no proposition whatsoever do I have a credence of 0.55551 in that proposition’s being contingent. But if so, then my state of belief in Percy’s contingency cannot be represented by an ignorance-interpreted mushy credence over an interval including 0.55551; the set which represents my state of belief in Percy’s contingency would have to be ‘gappy’.

Another worry emerges when we notice that sets spanning the [0,1] interval are being used to represent our state of belief both in Percy’s contingency and in Percy’s truth. It follows straight away that our state of belief that Percy is contingently true must also be represented by a set spanning the [0,1] interval, since the candidates for Percy’s identity range from contingent truths we are all but certain of to necessary truths and plain falsehoods. This issue will be revisited later on.

[Quiz] In a tie-breaker round of the pub quiz you are asked how many times Australia have won the Ashes. You never watch sports or read the sports pages, and don’t know what the Ashes are or how often they are competed for. What is your credence that the number is larger than 10? What is your credence that the number is even?

In this case, the relevant ignorance concerns the properties of the Ashes competitions. ‘I don’t know enough about what the Ashes competitions are like’, the mushy credence lover might reason, ‘to assign a credence in their having been won any particular number of times by Australia’. Suppose first that the Ashes are the prize in a daily card game between Australian and British airline crews, that the card game is strictly a game of chance with a 50% likelihood of either side winning on any given occasion, that they have been competed for on 10,000 occasions, and that neither side has ever cheated. Then the expectation value for Australian wins is 5,000; your credence that the number exceeds 10 is close to 1; and your credence that the number is even is very close to 0.5. But now suppose instead that ‘the Ashes’ refers to a single cricket game, played in 1882 under rules such that only one ball would be bowled, rendering it impossible for either side to win. Then the expectation value for Australian wins is 0; your credence that the number exceeds 10 is 0; and your credence that the number is even is 1, since zero is even.
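As a sanity check on the card-game hypothesis, here is a short calculation in Python, assuming the toy model above (10,000 independent games, each a fair 50/50 contest, so the number of Australian wins is Binomial(10000, 0.5)):

    from math import comb

    n, p = 10_000, 0.5   # toy hypothesis: 10,000 independent games, each a fair 50/50 contest

    expected_wins = n * p   # 5,000

    # P(more than 10 wins), computed with exact integer arithmetic because
    # 0.5**10000 underflows ordinary floating point.
    p_more_than_10 = 1 - sum(comb(n, k) for k in range(11)) / 2 ** n

    # P(number of wins is even) via the identity P(even) = (1 + (1 - 2p)**n) / 2,
    # which is exactly 0.5 when p = 0.5.
    p_even = (1 + (1 - 2 * p) ** n) / 2

    print(expected_wins, p_more_than_10, p_even)   # 5000.0, ~1.0, 0.5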

It is easy to see that there are hypotheses about the rules and frequency of Ashes competitions which will result in any credence between 0 and 1 being ascribed to Australia winning more than 10 times, or to the number of wins being even. As in the case of [Percy], it seems that we must assign a set of credences spanning the [0,1] interval both to the proposition that the number is larger than 10 and to the proposition that the number is even.

Here a further question may occur to us. Surely some of the candidates for the nature of the Ashes competition are less plausible than others; if the rules were as given in either of my toy examples, then the Ashes competitions would be highly unlikely to feature in a pub quiz, or even to have been held at all. With this in mind, it would be natural to weight some of the candidates more strongly than others; but such weighting is not part of the standard mushy credence machinery. Without weighting, the only way to restore some plausibility to the application of ignorance-interpreted mushy credences in this case is to find some natural partition of the candidate rule-hypotheses into equiprobable cells. But such a partition looks like it would be highly problematic. I will return to this issue in part 3 of this post.

[Constant] Fundamental physics reveals that the value of a certain ‘fundamental constant’ Q of nature is around 75. It turns out that a value of 74 or lower for Q would have resulted in a failure of stars to form; a value of 76 or higher would have resulted in a supergiant black hole sucking in the entire universe. Whether or not we take these results to be evidence for, e.g., a benevolent God or multiple universes will depend on how antecedently unlikely we take a value of 75 for Q to be. What prior credence should we have had in Q being between 74 and 76?

In this example, it is less clear how to apply ignorance-interpreted mushy credences. My best stab at characterizing the ignorance involved is ignorance of the range of metaphysically possible worlds in which Q takes any value at all. If all metaphysically possible worlds that instantiate Q have a Q-value between 74 and 76, then the appropriate prior credence to have in Q falling in that range is 1; if there are metaphysically possible worlds featuring all values of Q from 0 to 100, then (on a uniform prior over that range) the appropriate prior credence to have in Q being between 74 and 76 is 0.02; and if there are metaphysically possible worlds featuring every real-numbered Q-value, then our prior credence in Q being between 74 and 76 is 0. Once again, then, we are led to ascribe a set of credence functions covering the [0,1] interval as the correct representation of a rational belief state in Q being between 74 and 76.
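For what it’s worth, here is the back-of-the-envelope arithmetic behind those three verdicts, assuming (as the reasoning above implicitly does) a uniform prior over whichever range of Q-values is metaphysically possible; the candidate ranges and the helper function are illustrative only.

    def prior_between_74_and_76(possible_range):
        """Credence that Q lies in (74, 76), given a uniform prior over possible_range."""
        lo, hi = possible_range
        overlap = max(0.0, min(76.0, hi) - max(74.0, lo))
        return overlap / (hi - lo)

    print(prior_between_74_and_76((74.0, 76.0)))   # 1.0  -- only worlds with Q near 75
    print(prior_between_74_and_76((0.0, 100.0)))   # 0.02 -- all values from 0 to 100
    print(prior_between_74_and_76((0.0, 1e12)))    # ~0   -- the wider the range, the closer to 0

As the range of admissible Q-values grows without bound, the uniform-prior credence tends to 0, which is the limiting verdict reported above for the case of all real-numbered Q-values.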

[Cube] Bas van Fraassen locks you in a mystery cube factory. You discover that no cubes produced have edges longer than 2 metres.  You have no other evidence about the distribution of cube size. What is your credence that the next cube produced will have a volume larger than a cubic metre?

Here the relevant ignorance is ignorance of the distribution of cube size. One hypothesis has it that all cubes produced by the factory have edge lengths of 10cm. This would result in a credence of 0 in the next cube produced having a volume larger than a cubic metre. Similarly, the hypothesis that all cubes have edge lengths of 1.5m would result in a credence of 1 in the next cube having a volume larger than a cubic metre. And it’s easy to see that various hypotheses about the distribution of edge lengths could make appropriate any credence between 0 and 1 in the next cube having a volume larger than a cubic metre. So once again, we find a set spanning the [0,1] interval mandated to represent our belief state that the next cube produced will have a volume larger than a cubic metre. Anyone spot a theme developing here?
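To see how different distributional hypotheses pull the credence around, here is a small Monte Carlo sketch in Python; the four hypotheses are purely illustrative stand-ins for the space of possible cube-size distributions.

    import random

    N = 100_000  # samples per hypothesis (illustration only)

    def credence_volume_over_1(sample_edge_metres):
        """Estimate P(volume > 1 cubic metre) under a hypothesis about edge lengths."""
        hits = sum(sample_edge_metres() ** 3 > 1.0 for _ in range(N))
        return hits / N

    hypotheses = {
        "all edges 0.1 m": lambda: 0.1,
        "all edges 1.5 m": lambda: 1.5,
        "edge length uniform on (0, 2] m": lambda: random.uniform(0, 2),
        "volume uniform on (0, 8] cubic m": lambda: random.uniform(0, 8) ** (1 / 3),
    }

    for name, sampler in hypotheses.items():
        print(name, credence_volume_over_1(sampler))
    # Roughly: 0.0, 1.0, 0.5, 0.875 -- each hypothesis fixes a different sharp
    # credence, and intermediate hypotheses fill in the values in between.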

[Ignorant tennis] You sit down to watch a tennis match. You have never heard of either player, and they appear evenly matched in fitness and physique. What is your credence that player A will win?

Our ignorance concerns the capacities of the two players. As part of this ignorance, we don’t want to rule out the hypothesis that player A has a heart pacemaker with a faulty battery which is about to expire, and which will force him to retire on medical grounds, handing victory to B. Nor do we want to rule out that player B is similarly afflicted. On the former hypothesis, your credence in A winning should be 0; on the latter, it should be 1. And there are plenty of intermediate hypotheses about the relative ability of the two players according to which the credence in A winning can take any value between zero and one. So, once again, your credence that player A will win ought to be represented by a set spanning the [0,1] interval.

[Knowledgeable tennis] You sit down to watch a tennis match. You have coached both players, and have detailed knowledge of their abilities and playing styles. You consider them exactly evenly matched. What is your credence that player A will win?

We presume that the ‘abilities’ referred to here cover all eventualities, such as faulty pacemakers. Ex hypothesi, there is no ignorance involved in the case; therefore, the correct credence in A’s winning is 0.5. The friend of mushy credence will presumably point to the difference between 0.5 and [0,1] as representing the intuitive difference between Ignorant tennis and Knowledgeable tennis.

On the above treatment, every single case which seemed amenable to mushy handling (under the ignorance interpretation of mushy credence) lands us with a set of credences spanning the [0,1] interval. This does not lead to any immediate contradiction, just as we can have credence 0.5 in many distinct propositions without contradiction. But it does lead to a number of worries, which I will list briefly here and examine in more detail in part 3 of this post:

  • Sets of credences spanning the [0,1] interval are immovable, assuming that we update by conditionalizing each function within the set. If we start with a set spanning [0,1] and conditionalize, no further evidence will shift us from this position (see the sketch after this list).
  • Representing belief states with sets spanning the [0,1] interval seems to wash out relevant epistemic differences. For example, we are presumably more confident that Percy is true than that Percy is contingently true, as the latter claim is logically stronger. But our belief state in Percy’s truth and our belief state in Percy’s contingent truth must both be represented by a set spanning the [0,1] interval.
  • Although the ignorance-interpreted mushy credence machinery has the scope to represent partial suspension of judgement (for example, by a set spanning [0.4, 0.6]), it seems that in a range of straightforward cases (all those surveyed above, in any case) this partial suspension of judgement is inapplicable, and we are stuck with maximal suspension of judgement.
  • It is not clear that the sets resulting from an ignorance interpretation of mushiness will not be ‘gappy’ – that is, it is not clear that there will always be a function in the set corresponding to every real number in some interval. Gappiness significantly detracts from the intuitive appeal of mushy credence functions, and complicates the mathematics needed to apply them.
  • Applying the ignorance-interpreted mushy credence machinery in the way described above seems to wash out differences in plausibility between the various hypotheses (the Ashes resting on thousands of fair card games vs a single one-ball cricket match). To avoid this, we would require either a) weighting of the different credence functions within the set or b) a natural partition of the hypotheses into equiprobable cells. Applying either of these solutions would significantly undermine the simplicity and intuitive appeal of the mushy credence approach.
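To illustrate the first of these worries, here is a minimal sketch of member-by-member conditionalization on a [0,1]-spanning set, with invented likelihoods: the extreme priors 0 and 1 are fixed points of Bayes’ theorem, so however strong the evidence, the updated set still spans [0,1].

    def conditionalize(prior_h, p_e_given_h, p_e_given_not_h):
        """Bayesian update of a sharp credence in H on evidence E."""
        numerator = prior_h * p_e_given_h
        denominator = numerator + (1 - prior_h) * p_e_given_not_h
        return numerator / denominator if denominator > 0 else prior_h

    # A [0,1]-spanning mushy credence in H, represented by eleven sharp priors.
    mushy_prior = [k / 10 for k in range(11)]

    # Invented likelihoods for a piece of evidence that strongly favours H.
    posterior = [conditionalize(p, 0.9, 0.1) for p in mushy_prior]

    print(posterior)
    # The interior members move towards 1, but a prior of 0 updates to 0 and a
    # prior of 1 updates to 1, so the set still spans the whole [0,1] interval.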

In part 3, I will reassess motivations for using mushy credences, and tie various loose threads into a concerted case against the use of mushy credences in a probabilist epistemology.

[To be continued]


Against mushy credence [part 1]

I’ve been puzzling recently over the ‘coin puzzle’ spotlighted by Roger White. White uses it to raise trouble for a view of personal probabilities (variously called ‘mushy credences’, ‘fuzzy credences’, ‘imprecise probabilities’, ‘vague probabilities’, ‘thick confidences’) which characteristically represents personal probability states using sets of credence functions, such that each of the functions in the set individually conforms to the probability calculus and updates by conditionalization. The mushy credence view is often motivated by the desire to account for the difference between judgements of equiprobability and suspension of judgement.

I think White’s coin puzzle does serious damage to the mushy credence view. Properly understood, it is a way of making vivid the phenomenon of dilation, which results when mushy credences interact with sharp credences. Dilation isn’t news to the proponents of mushy credences, but as the coin puzzle shows, it becomes problematic in combination with objective chance. If the truth of some proposition p in which we have mushy credence is known to be correlated with the result of a coin toss whose sharp chance of heads we know, we end up with a mushy credence in heads, even though we knew the pre-toss chance. This mushification of our credence in chancy outcomes is repugnant, and provides us with a reason to reject one of the premises that led us to it. But the only controversial premise was the mushy credence view.

This is bad news for proponents of mushy credences. Some I have spoken to would bite the bullet and accept the counter-intuitive consequences of known-chance dilation, seeking to soften the impact by emphasizing that the coin case is unrealistic. Others (here I am thinking of Scott Sturgeon’s forthcoming paper in OSE) take this as reason to abandon the model of sets of credence functions updated individually by conditionalization. Others, like White, take it as reason to abandon mushy credences altogether and explain suspension of judgement in some other way. Which route is the most promising?

I suspect that biting the bullet would prove too painful, and that once we abandon the formal model there wouldn’t be much left of the mushy credences view; but I won’t defend these claims here. Rather, my plan is to support the case against mushy credences by examining some cases which might have seemed amenable to mushy treatment, and providing an alternative non-mushy explanation of what is going on in these cases. So here are some example cases to ponder over. In the next part of this post, I’ll apply the mushy treatment to these cases and ask how well it fares with them.

[Percy] Someone you’ve never met emails to tell you that they use the name ‘Percy’ for some particular proposition; they don’t tell you anything about what Percy says. What is your credence that Percy is contingent? What is your credence that Percy is true?

[Quiz] In a tie-breaker round of the pub quiz you are asked how many times Australia have won the Ashes. You never watch sports or read the sports pages, and don’t know what the Ashes are or how often they are competed for. What is your credence that the number is larger than 10? What is your credence that the number is even?

[Constant] Fundamental physics reveals that the value of a certain ‘fundamental constant’ Q of nature is around 75. It turns out that a value of 74 or lower for Q would have resulted in a failure of stars to form; a value of 76 or higher would have resulted in a supergiant black hole sucking in the entire universe. Whether or not we take these results to be evidence for, e.g., a benevolent God or multiple universes will depend on how antecedently unlikely we take a value of 75 for Q to be. What prior credence should we have had in Q being between 74 and 76?

[Cube] Bas van Fraassen locks you in a mystery cube factory. You discover that no cubes produced have edges longer than 2 metres.  You have no other evidence about the distribution of cube size. What is your credence that the next cube produced will have a volume larger than a cubic metre?

[Ignorant tennis] You sit down to watch a tennis match. You have never heard of either player, and they appear evenly matched in fitness and physique. What is your credence that player A will win?

[Knowledgeable tennis] You sit down to watch a tennis match. You have coached both players, and have detailed knowledge of their abilities and playing styles. You consider them exactly evenly matched. What is your credence that player A will win?

(thanks to John Cusbert for this last pair of examples.)

[To be continued]
