Deterministic Chance

We had a thoroughly successful MLE seminar today on the subject of objective chance in deterministic worlds. Lewis influentially insisted that deterministic chance was simply incoherent – that the only objective chances such worlds could contain are 1 and 0. This conclusion seems fairly intuitive, but it doesn’t give a satisfying account of the chanciness of the special sciences. Classical statistical mechanics, in particular, presupposes determinism at the lower level, but produces probabilistic predictions. It doesn’t feel right to say these chances are ‘merely’ epistemic, as Lewis does.

So there’s been lots of work in recent years to rehabilitate the idea of deterministic chance. Barry Loewer in particular has treated making sense of deterministic chance as a precondition of making sense of chance. Schaffer has recently defended the Lewisian line, and his paper was the one under discussion.

One worry I initially had was that Schaffer’s presupposition that information about the laws of nature is admissible is incompatible with the Humeanism about laws he advocates. This worry ends up just being equivalent to the problem of undermining futures which led Lewis to the ‘new principal principle’. Although I think this remains a decisive argument against Humeanism, it’s not relevant to the main aims of Schaffer’s paper, so I’ll say no more about it.

There were some worries about how far the ‘platitudes’ about objective chance (which Schaffer appeals to in arguing that the best chance-candidates in deterministic worlds are 0’s and 1’s) are really platitudinous. We ended up satisfied that FP (the Future Principle) is platitudinous, but unconvinced by CTC (the Causal Transition Constraint) – in fact, CTC seems false, as the following example suggested by John indicates:

I kill lots of Napoleon’s soldiers while they’re making their way to Waterloo. Wellington charges, and overwhelms Napoleon’s forces. My actions altered the chance of Wellington’s victory, but the actions were not temporally located between the cause (Wellington’s charge) and the effect (Wellington’s victory), as the CTC requires.

EDIT: this misunderstands either the CTC or John’s example. See Schaffer’s comment below.

Frank objected to the ‘Big Bang’ argument against the compatibility of IR and initial deterministic chances, on the grounds that in a Big Bang cosmology there is no first instant – time has the structure of an open set. We wondered, inconclusively, whether we could take a limit instead. EDIT: But as Schaffer points out in the comments, those who don’t believe in a first instant won’t be able to appeal to initial deterministic chance anyway.

Now to the main issue I want to discuss. Can the ‘objectively informed but still epistemic’ chances which Schaffer discusses count as objective chances? Let’s consider three kinds of these ‘objective epistemic chances’ – chances in a poker game, chances in classical statistical mechanics, and chances in Bohm’s version of quantum mechanics.

In the poker game, what the next card will be is fixed from the start by the way the deck is shuffled. But this doesn’t mean that there aren’t correct and interesting probabilities that an experienced player can calculate and use to his advantage. These probabilities presuppose ignorance of the order of cards in the deck, but that ignorance is part and parcel of the game of poker. Playing within the rules, the poker-chances play the role of objective chances; it’s only when we go beyond the game, and ask for information inadmissible according to the rules (the actual order of cards in the deck) that the poker-chances are trumped by the underlying deterministic mechanism.
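
As a concrete illustration, here is a minimal sketch (the helper function and the encoding of cards are my own invention, not anything from the seminar or Schaffer’s paper) of how a poker-chance is computed by conditioning only on the information admissible under the rules – namely, which cards have been seen:

```python
from fractions import Fraction

# A 52-card deck as (rank, suit) pairs; 11 = J, 12 = Q, 13 = K, 14 = A.
DECK = [(rank, suit) for rank in range(2, 15)
        for suit in ("clubs", "diamonds", "hearts", "spades")]

def poker_chance(is_target, seen):
    """Chance that the next card drawn satisfies is_target, conditioned
    only on the admissible information (which cards have been seen).
    The actual order of the shuffled deck is inadmissible by the rules,
    so every unseen card is treated as equally likely to come next."""
    unseen = [card for card in DECK if card not in seen]
    hits = sum(1 for card in unseen if is_target(card))
    return Fraction(hits, len(unseen))

# Holding two aces, the poker-chance that the next card is an ace:
seen = [(14, "hearts"), (14, "spades")]
print(poker_chance(lambda card: card[0] == 14, seen))  # 1/25
```

Relative to the rules of the game, this 1/25 is the chance; a player who has somehow inspected the deck isn’t computing a better poker-chance, but has stopped playing poker.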

Now consider classical statistical mechanics. Here, the future evolution of a system is fixed by its microstate, but we typically know only its macrostate. While we are ignorant of the microstate a system is in, the CSM statistical chances play the role of objective chances, but were we to be informed of the exact microstate the chances would become superfluous – we could use the deterministic mechanism to work out the future evolution of the system with certainty.
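
A toy model can make the two levels vivid. Everything below is my own stipulation – bit-string microstates, a cyclic-shift ‘dynamics’, and a uniform measure standing in for the statistical-mechanical one – so this is only the shape of CSM, not the real thing:

```python
from itertools import product
from fractions import Fraction

def evolve(micro):
    """Toy deterministic micro-dynamics: cyclically shift the bit-string."""
    return micro[1:] + micro[:1]

def macro(micro):
    """Toy coarse-graining: the macrostate is just the number of 1s."""
    return sum(micro)

def macro_chance(outcome, observed_macro, n=4):
    """Chance of `outcome` after one time-step, relative to knowledge of
    the macrostate only: the fraction of compatible microstates (under a
    uniform measure) whose deterministic evolution makes it true."""
    compatible = [m for m in product((0, 1), repeat=n)
                  if macro(m) == observed_macro]
    hits = sum(1 for m in compatible if outcome(evolve(m)))
    return Fraction(hits, len(compatible))

outcome = lambda m: m[0] == 1            # "cell 0 is occupied next step"

print(macro_chance(outcome, observed_macro=2))   # 1/2, a non-trivial chance
print(int(outcome(evolve((0, 1, 0, 1)))))        # 1: trivial, given the microstate
```

Knowing only the macrostate, the outcome gets a non-trivial chance of 1/2; knowing the exact microstate, the same outcome gets 0 or 1, exactly as the deterministic mechanism dictates.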

Similarly, in the Bohmian case, the actual future is fixed by the precise positions of the Bohmian corpuscles. But (assuming an equilibrium distribution of these corpuscles) it’s impossible for us to measure these precise positions. The information is inaccessible to all intents and purposes, so the Bohm-chances play the role of objective chances. Unlike poker, it’s physically impossible to obtain the information which would trump the Bohm-chances.

This points to a notion of admissible information which is, roughly speaking, relative to the rules of the game. In poker, the rules make the information about the order of cards in the deck inadmissible; finding out the order would allow us to dispense with the poker-chances, but amounts to cheating. In CSM, finding out the exact microstate of a system would allow us to predict its evolution with certainty, but this is impossible in practice. In Bohm theory, finding out the exact position of the particles would allow us to dispense with the Bohm-chances, but this is physically impossible.

Looked at this way, Lewis’ and Schaffer’s inability to accept deterministic chance arises from a fixed criterion of admissibility. But sticking to absolute admissibility seems unmotivated. The original account of admissibility given by Lewis was, by his own admission, not a rigorous one; but he allowed all historical information to be admissible (except in pathological cases, such as cyclical time). This immediately gives the game away; if historical information is admissible, so is information about the deck of cards when playing poker, so is information about the microstate when doing CSM, and so is (physically inaccessible) information about the exact position of corpuscles when doing Bohmian mechanics. So where is Lewis’ argument that historical information is always admissible? I don’t think there is one – he offers it as a proposal. However, this proposal makes it impossible to think of poker-chances, CSM-chances, and Bohm-chances as genuine chances; so there is good reason to reject his proposal.

Schaffer’s argument against deterministic chance goes via the six platitudes. The kind of deterministic chances we get out of relativizing the admissibility relation are what he calls ‘deterministic macro-posterior chances’; he claims that such chances cannot validate the ‘principal principle’, the ‘realization principle’, or the ‘lawful magnitude principle’. I’ll take these in turn.

The principal principle connects credence with chance. Schaffer envisages someone who knows that (for example) the CSM-chance of an outcome is 1/2, but also knows the exact microstate of the universe and hence knows that the ‘Newtonian chance’ of the outcome is 0. Obviously, such a person should set her credence by the Newtonian chance, and not by the CSM chance. But the natural explanation of this is not that CSM chances are not genuine chances, but that they can be trumped by knowledge of lower-level chances; these lower-level chances are inadmissible relative to CSM. In cases where there is no information inadmissible relative to CSM available to the agent, the CSM chances do play the correct role in the principal principle. So this objection fails once we relativize admissibility.
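
Schematically – and this formalization is my own gloss, not anything in Schaffer’s paper – the move is from Lewis’s principle to a theory-indexed one:

```latex
% Lewis's Principal Principle (schematic form):
%   Cr(A | ch_t(A) = x & E) = x,  for any admissible E.
% Relativized sketch: admissibility is indexed to the theory T whose
% chance function ch_T is in play (requires amsmath).
\[
  Cr\bigl(A \mid ch_T(A) = x \wedge E\bigr) = x,
  \qquad \text{for any } E \text{ admissible relative to } T.
\]
```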

The realization principle says that, if the chance of an event at time t at world w is non-zero, there are worlds which match w perfectly up to t, and which share its laws, in which the event occurs. Schaffer argues that a believer in deterministic macro-posterior chance will be committed via the RP to worlds existing which are ruled out by the deterministic micro-laws. The response here for a believer in macro-posterior chance is to deny that the correct version of RP involves a perfect match up to t. He should instead say that the correct version of RP involves only a perfect match as regards all admissible information up to t. This principle reduces to Schaffer’s version if all historical information is admissible; but if only some such information is admissible then the new RP no longer poses any problem for deterministic macro-posterior chances.
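
As a rough schematic (again my own notation): write L_T for the laws at the level of theory T, and weaken the perfect-match clause to a match on T-admissible history:

```latex
% Schaffer's RP: ch_{t,w}(e) > 0 implies there is a world w' matching w
% perfectly up to t, with the same laws, at which e occurs.
% Relativized sketch: perfect match is weakened to a match on all
% T-admissible information.
\[
  ch_{T,t,w}(e) > 0 \;\Rightarrow\; \exists w' \bigl[\,
    w' \text{ matches } w \text{ on all } T\text{-admissible history up to } t
    \;\wedge\; L_T(w') = L_T(w) \;\wedge\; e \text{ occurs at } w' \,\bigr].
\]
```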

A similar move rescues deterministic macro-posterior chance from the conflict Schaffer adduces with the Lawful Magnitude Principle. This says that if the chance of event e at time t at world w is x, then the laws of w entail that if the occurrent history up to t is H, then the chance of event e at time t at world w is x. This is just to say that chances are lawfully projected magnitudes. Schaffer argues that CSM chances will not be projected by the underlying deterministic laws. This is quite right – but the underlying deterministic laws are not the right ones to consider. The relevant laws are the CSM laws, which do project CSM chances. Similarly, the history which appears in the history-to-chance conditional should be a macro-history, not a micro-history, or we bring in information inadmissible by the lights of CSM.
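
In the same schematic spirit (my notation, not Schaffer’s): writing H^T_t for the macro-history up to t and L_T for the CSM-level laws, the relativized LMP becomes:

```latex
% LMP as stated above: if ch_{t,w}(e) = x, then the laws of w entail
% (history up to t is H  ->  ch_{t,w}(e) = x).
% Relativized sketch: both the laws and the antecedent history are
% taken at the level of the chance-producing theory T.
\[
  ch_{T,t,w}(e) = x \;\Rightarrow\;
  L_T(w) \vDash \bigl( H^T_t \rightarrow ch_{T,t,w}(e) = x \bigr).
\]
```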

The upshot of all this is that relativizing admissibility avoids all three of Schaffer’s objections to deterministic macro-posterior chances. They boil down to a single objection – that macro-chances can be trumped by knowledge of micro-chances – but if we relativize admissibility, this is no surprise, since micro-chances are inadmissible relative to the theory which produces the macro-chances.

So we have two options. We can accept relativized admissibility, allowing both macro-chances and micro-chances to count as objective chances: then the same world can contain chances of just 0 and 1 at some levels, as well as non-trivial chances at other levels. Or we can stick with absolute admissibility, and be forced to say that in deterministic worlds there are only trivial chances.

One interesting upshot of relative admissibility is that we can have chances of 0 and 1 in indeterministic worlds. Suppose, as Bohmians sometimes do, that there is an indeterministic micro-micro-dynamics underlying the deterministic Bohmian mechanics; there could then be non-trivial chances at the fundamental level, only trivial chances at the level of the corpuscle motions, and non-trivial chances again at the level of observable phenomena. This kind of picture should actually fit nicely with Schaffer’s denial that there has to be a fundamental level; but I have to think more about this.

Comments more than welcome!

(I should note that the proposal to relativize admissibility has a lot in common with a proposal of Luke Glynn’s, which (following Hajek) takes chances to be fundamentally conditional on histories (where histories could be either fundamental histories or special-scientific histories). I’m not yet sure whether the two proposals are equivalent, but they are in a very similar spirit.)


5 thoughts on “Deterministic Chance”

  1. I think that this is a very clear and useful summary of the issues and the primary objections to Schaffer’s view.

    I am very sympathetic to your proposal that we should relativize admissibility. As you point out, this proposal is in a similar spirit to my claim that there are non-trivial chances relative to (or conditional upon) special scientific histories. I think that probably the proposals are going to be equivalent. I’ve now put a rough draft of my paper on my website:

    Like you, I think that Schaffer has succeeded in showing only that there are no non-trivial chances relative to the full fundamental history of a fundamentally deterministic world. He has not shown that there do not exist non-trivial chances relative to non-fundamental histories.

    As you rightly point out, and as I argue in my paper, when the PP and RP are formulated so as to take into account the relativity of chance (or the relativity of admissibility), they are fulfilled by probability functions that output non-trivial values even in deterministic worlds.

    But why should we reformulate the PP and RP? Shouldn’t we just insist that chance isn’t really relative in the relevant sense: one only gets genuine chances by conditioning upon the fundamental history of the world?

    Even if one accepted this, it would still be true that there exists an interesting objective probability measure that fulfils the reformulated PP and RP. If one wants, one can deny that this is chance and call it chance*. I don’t strongly object to this line, but I do object a bit. [One might give the following reason for saying chance* is chance: When an ordinary English-speaker is asked, she’ll tell you that the chance that a coin lands heads is 1/2, and that isn’t because she’s already taken a position on the issue of whether the world is fundamentally deterministic. Indeed, one might even say that it’s a platitude that the chance of a fair coin landing heads is 1/2, and that this is much more obviously a platitude than the super-subtle PP and RP! (this is like the ‘Paradigm Case Argument’ considered and all too swiftly rejected by Schaffer)]

    I don’t think I really subscribe to the line taken in the square brackets: I don’t know to what extent ordinary English speakers distinguish objective and subjective chances and I don’t really think it matters that much. I believe that we need a concept of objective chance because science tells us that there are such things. In particular, scientific laws project probabilities. These probabilities are objective because the laws are objective. Moreover, even in deterministic worlds there are probability-projecting special scientific laws (these are genuine laws: they support counterfactuals, play the right roles in explanation and prediction, are confirmed by their instances, etc.) Therefore there are objective chances in deterministic worlds.

    In other words, chance* is chance. Chance* is chance because the most important platitude about the scientific concept of objective chance is that chances ≡ lawfully projected probabilities (it is because we have probabilistic laws that scientists and philosophers of science use the concept of objective chance, it is because there are probabilistic laws that we have reason to believe there are objective chances). Chances* are probabilities projected by special scientific laws. Therefore chances* are chances.

  2. Jonathan Schaffer says:

    Hi Alastair and Luke,
    Sorry I couldn’t be there for the seminar! It would have been fun to have taken part in the discussion. Lots of interesting issues here… I’ll try to reply in order:

    1. On the Causal Transition Constraint, you say “CTC seems false, as the following example suggested by John indicates: I kill lots of Napoleon’s soldiers while they’re making their way to Waterloo. Wellington charges, and overwhelms Napoleon’s forces. My actions altered the chance of Wellington’s victory, but the actions were not temporally located between the cause (Wellington’s charge) and the effect (Wellington’s victory), as the CTC requires.”

    I don’t see how this is a counterexample. The constraint reads “If ch(e) plays a role in the causal relation between c and d, then t(e) ∈ [t(c), t(d)].” I agree that your killing lots of Napoleon’s soldiers (e) is a contributing cause of Wellington’s winning (d). But I don’t see how the chance of your achieving e beforehand is relevant to the causal transition between Wellington charging (c) and winning (d). By the time c occurs, e has already occurred and is part of the fixed background.

    To see what the CTC was intended to exclude you have to think of a chance that changes over time. So suppose ch_t1(heads) = .5 but ch_t2(heads) = 1 (the coin lands heads, and so the chance of heads goes to 1 forever after). Now suppose we are looking at a causal scenario that arises after the coin has landed. Perhaps someone who did not see the coin land has placed a bet on the outcome after the fact, and we are looking at the transition from wager to payoff. The claim is only that the .5 chance is not relevant to this causal transition, as the .5 chance has already departed from the world.

    2. On the FP I didn’t give a Big Bang argument at all, much less one that mentions t0 – my ‘argument’ was just to approvingly quote Lewis’s ‘lost in the labyrinth’ example. So I’m not sure what is going on there.

    I do eventually use the FP to argue against ‘initial condition chance.’ But if you like the Big Bang models without t0, you won’t believe in initial condition chance in the first place!

    3. The big issue here seems to me to be your move to relative chances. I think this is an interesting and not implausible idea–only I’m not sure you guys are appreciating how radical it is! As I was viewing the debate, it was a debate over whether there could be objective chances of event occurrences other than 0 or 1 in a deterministic world. If all chances are relative chances this entire debate is based on the false presupposition that event occurrences HAVE absolute chances. If they only have relative chances then all you can say is that the chance of this event occurring relative to this body of information E is x, and the chance relative to E’ is x’, etc. There is no longer any fact of the matter as to the real chance of this event occurring (simpliciter).

    What makes this move so radical is that the notion of ‘the absolute chance’ seems to play a pivotal role in a wide range of areas. For instance, causation seems to involve something like chance-raising. Possibility seems to accrue to all non-zero chances. Chances seem to go to 0 or 1 as they pertain to the past. All of these ideas prima facie involve absolute chances. Indeed if you like the paradigm case argument about coin flips giving .5 chances (Luke at least seems to have some sympathy with this) this looks like an absolute chance. If there was any relativization of the chance it at least went unsaid. If you move to relative chance you have to reconstruct all of this. Not saying it can’t be done, only trying to point out how much needs to be done to make this fly…

    So I see you not as defending deterministic chance, but as rejecting the entire debate between Loewer (who I had in mind as the primary defender of deterministic chance) and people like Lewis and myself. For what it is worth you might look at fn. 3 of my paper:
    “Treating the input to the chance function as a triple embodies the substantive assumption that the chance function needs no further inputs (such as a reference class). This assumption is (for the most part) common ground in the deterministic chance debate… [W]hether determinism is compatible with a ‘chance’ function that is relativized to reference classes or other further inputs should be considered a separate question not addressed in the main text.”

    4. Even granting (for the sake of the argument) that all chance is relative chance, I still think there is a case to be made for incompatibilism. For not all relativizations are equal. There are some relativizations (e.g. to the tarot card reading) that are utterly uninformative. There are some relativizations (macro-info) that are only partially informative. But there is a special sort of relativization (micro-info) that is specially informative.

    Start with the Principal Principle. There is still a big asymmetry between the macro-chance and the micro-chance relativization, in that the micro-chance TRUMPS the macro-chance. Indeed the micro-chance has the following special and interesting role to play. Given no inadmissible info about the future, the micro-chance LOCKS rational credence. It trumps everything else:
    (a) The rational credence to invest in e given the micro-info plus any other admissible info = the rational credence to invest in e given the micro-info
    But
    (b) The rational credence to invest in e given the macro-info plus any other admissible info =/= the rational credence to invest in e given the macro-info

    Thus not all relativizations are equal. This needs explanation. The natural explanation, it seems to me, is that the micro-chance relativization is special BECAUSE it is the full fact of the matter at the time. The macro-chance relativization, in contrast, gets trumped BECAUSE it is not the full fact of the matter, and hence only an ignorance measure.

    Now look at the Realization Principle. There is a similar trumping asymmetry between the macro-chance and the micro-chance relativization:
    (c) If e’s occurrence has a non-zero micro-chance then e’s occurrence is possible (given laws and history)
    (d) NOT (If e’s occurrence has a non-zero macro-chance then e’s occurrence is possible (given laws and history))
    As far as what is really possible, the macro-chances get trumped by the micro-chances. If the chance of heads is .5 given the macro-info but 0 given the micro-info, then heads is not really possible. Again not all relativizations are equal.

    (I’ll leave the other platitudes out for now, but I think one can make a similar case all the way through.)

    5. I have to disagree with Luke’s idea that chances are JUST lawfully projected probabilities. Lawfully projected probabilities are cheap because probabilities are cheap. We can take various measures over phase space and construct all sorts of probability functions out of them. But surely THOSE aren’t chances in any interesting sense. So I think that, for exactly the same reasons that we need substantive platitudes about chance going beyond the formal apparatus of probability functions, we need further substantive platitudes about chance that go beyond being a lawfully projected probability.

    There is a deep and difficult underlying question as to what the right platitudes are. I don’t really know of any principled way to argue here. I was assuming that most people would be on board with the Principal Principle and Realization Principle etc., and just drawing consequences from those. Obviously at this stage in the debate this is coming into question. So I leave you with the following question (/plea for help):
    When two philosophers debate which platitudes govern a given concept, what other than raw appeals to ‘it feels so right’ can we go on?

  3. mrogblog says:

    Thanks for such an in-depth reply, Jonathan!

    On 1), we either misunderstood what the CTC was doing, or I’ve misremembered John’s example; I’ll ask him again about it. I have no problem with the CTC as you’ve just explained it (I found the discussion in the paper a little bit cryptic).

    However, I don’t quite see why the CTC needs to be an independent constraint. Aren’t cases where non-trivial chances have ‘departed from the world’ excluded from being relevant to later causal transitions by the RP? Once the chance of (say) heads is 1 at t in w, then there are no worlds which share the micro-events up to t and the micro-laws with w in which a wager on heads does not pay out. So the non-trivial chance is excluded from making a difference to the wager-payoff transition, since *nothing* in w before t makes a difference to the wager-payoff transition (given that the chance of heads being 1 at t is held fixed).

    On 2), this was a typo/brain failure – I meant to refer to your argument that initial chances cannot satisfy IR. But of course, you’re right that this argument would be unnecessary if there is no such thing as the initial condition of the universe. A case of Frank trying to be too clever for his own good!

    We both agree that 3)-5) are where the action is.

    On 3), I’m not so sure that all participants in the debate have been operating with the assumption that the chance function needs no further inputs. A charitable interpretation of the Loewer quote you give on p.135 is that he was thinking of the kind of relativized-admissibility proposal I’ve suggested here.

    You say the ‘absolute chance’ plays a pivotal role in a wide range of areas. But why can these areas not inherit the kind of relativity of relative chances? Maybe causation is scale-relative in a way which fits with the admissibility-relative status of chance – I don’t find this at all implausible, since causal talk seems entirely inappropriate at some scales (eg that of quantum field theory). And the movement of chances to 0 or 1 can be explained neatly by embracing the idea that some chances trump others – more on this in a moment. Obviously more work needs to be done to explicate all this, but there seems no reason why we can’t have relativized chances with correspondingly relativized applications.

    You say in the paradigm case argument any relativization of the chances went unsaid – right. I’d have thought that typically the context will be enough to fix the appropriate relativization – when dealing with coin flips, the correct level of analysis is the idealized theory of coin flips where freak results (like landing on an edge, or disappearing in a puff of wavefunction) are discounted, and the macroscopic symmetries of the coin dominate. Or at least this is usually the correct level of analysis. Sometimes, like when designing a coin-flipping robot which will always flip heads, other levels are appropriate.

    On 4), the tarot-chances are no counterexample to the relativization proposal. The theory which delivers objective chances still has to be a correct theory for the ‘chances’ it delivers to count as objective chances. There are plenty of examples of indisputably correct theories dealing in macroscopic chances (the theory of the probability that the next card drawn will be x in a poker game, for example). Similarly, there are plenty of indisputably false theories dealing in fundamental micro-chances (say the theory that every time a radium atom decays, it has a 50% chance of turning into an elephant.)

    I agree that trumping of some chances by others is an important phenomenon to be explained. If we assume a fundamental level to reality (like you, I’m not sure this assumption is well-founded), then there will be a form of chance (bottom-level micro-chances) which trumps all others – and in a deterministic world, these will be either 1 or 0. The right kind of explanation for this will appeal to the inter-theoretic relations between the special sciences which ground the various levels of chances.

    “The natural explanation, it seems to me, is that the micro-chance relativization is special BECAUSE it is the full fact of the matter at the time. The macro-chance relativization, in contrast, gets trumped BECAUSE it is not the full fact of the matter, and hence only an ignorance measure.”

    I can agree with all of this, except the ‘only’ in ‘only an ignorance measure’. The way I see it, some chances can both be based on microphysical ignorance AND be genuinely objective chances. On many metaphysical pictures (eg the Lewisian picture), even micro-chances are ignorance measures in a (self-locating) sense – we don’t know which world we’re in, but we’re (timelessly) in one world and there’s nothing we can do about it. Objective chances help us to figure out stuff about the world we’re in, but they don’t let us pick one out uniquely. The moral I want to draw from this is that it’s compatible with a conception of chance as genuinely objective that it also be based on a kind of uncertainty. The question then becomes whether the platitudes are compatible with this kind of uncertainty-based objective chance. I think they are, when they are suitably relativized themselves (as I gesture at in the post).

    On 5), I agree the question of which platitudes are right is difficult, but I’m not sure it’s deep – I’m unconvinced that the ‘ordinary-language platitudes’ approach is really the best way to go. How do we know whether the platitude that ordinary people are agreeing to is the relativized version or not? I don’t think ordinary people’s intuitions are clear at all on that. Maybe instead we just need to look to the sciences to see what kind of conception of objective chance gets used there. And since scientists talk about chance in statistical mechanics, I’m inclined to think they’re thinking of relativized chance.

  4. […] – One thought I had about the original ‘Similarity’ proposal – if we read ‘laws’ not always as fundamental microphysical laws, but (depending on context) as various kinds of more emergent laws, we could save the proposal without having to introduce quasi-miracles or typicality. A plate flying off sideways may or may not be a violation of quantum-mechanical laws, but it is certainly a violation of the Newtonian laws which hold to a good degree of approximation at the macroscopic level. And maybe these Newtonian laws are the salient laws for consideration of the counterfactual. Similarly with counterfactuals like ‘if I drop this ice cube into that mug of hot tea, the ice cube will melt’. We don’t even need to go to QM to get counterexamples to this; statistical mechanics describes certain highly-unlikely scenarios where the molecular impacts conspire to prevent melting. But if the salient laws are thermodynamic laws, then the cube must melt. This suggestion is in the spirit of the proposal about deterministic chance I discuss here. […]
