Deterministic Chance

We had a thoroughly successful MLE seminar today on the subject of objective chance in deterministic worlds. Lewis influentially insisted that deterministic chance was simply incoherent – that in such worlds the only objective chances are 1 and 0. This conclusion seems fairly intuitive, but it doesn’t give a satisfying account of the chanciness of the special sciences. Classical statistical mechanics, in particular, presupposes determinism at the lower level, but produces probabilistic predictions. It doesn’t feel right to say these chances are ‘merely’ epistemic, as Lewis does.

So there’s been lots of work in recent years to rehabilitate the idea of deterministic chance. Barry Loewer in particular has treated making sense of deterministic chance as a precondition of making sense of chance. Schaffer has recently defended the Lewisian line, and his paper was the one under discussion.

One worry I initially had was that Schaffer’s presupposition that information about the laws of nature is admissible is incompatible with the Humeanism about laws he advocates. This worry ends up just being equivalent to the problem of undermining futures which led Lewis to the ‘new principal principle’. Although I think this remains a decisive argument against Humeanism, it’s not relevant to the main aims of Schaffer’s paper, so I’ll say no more about it.

There were some worries about how far the ‘platitudes’ about objective chance (which Schaffer appeals to in arguing that the best chance-candidates in deterministic worlds are 0’s and 1’s) are really platitudinous. We ended up satisfied that FP is platitudinous, but unconvinced by CTC – in fact, CTC seems false, as the following example suggested by John indicates:

I kill lots of Napoleon’s soldiers while they’re making their way to Waterloo. Wellington charges, and overwhelms Napoleon’s forces. My actions altered the chance of Wellington’s victory, but they were not temporally located between the cause (Wellington’s charge) and the effect (Wellington’s victory), as the CTC requires.

EDIT: this misunderstands either the CTC or John’s example. See Schaffer’s comment below.

Frank objected to the ‘Big Bang’ argument against the compatibility of IR and initial deterministic chances, on the grounds that in a Big Bang cosmology there is no first instant – time has the structure of an open set. We wondered, inconclusively, whether we could take a limit instead. EDIT: But as Schaffer points out in the comments, those who don’t believe in a first instant won’t be able to appeal to initial deterministic chance anyway.

Now to the main issue I want to discuss. Can the ‘objectively informed but still epistemic’ chances which Schaffer discusses count as objective chances? Let’s consider three kinds of these ‘objective epistemic chances’ – chances in a poker game, chances in classical statistical mechanics, and chances in Bohm’s version of quantum mechanics.

In the poker game, what the next card will be is fixed from the start by the way the deck is shuffled. But this doesn’t mean that there aren’t correct and interesting probabilities that an experienced player can calculate and use to his advantage. These probabilities presuppose ignorance of the order of cards in the deck, but that ignorance is part and parcel of the game of poker. So long as we play within the rules, the poker-chances play the role of objective chances; it’s only when we go beyond the game and ask for information inadmissible according to the rules (the actual order of cards in the deck) that the poker-chances are trumped by the underlying deterministic mechanism.
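As a toy illustration (my numbers, not an example from the seminar): in Texas hold’em, if I hold two aces and the three visible community cards include no ace, then 47 cards remain unseen by me, two of them aces, so the poker-chance that the next card dealt is an ace is

\[ P(\text{ace next} \mid \text{my evidence}) = \frac{2}{47} \approx 0.043. \]

A player who has illicitly inspected the shuffled deck assigns this event probability 0 or 1; but within the game, 2/47 is the value any rational player should use.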

Now consider classical statistical mechanics. Here, the future evolution of a system is fixed by its microstate, but we typically know only its macrostate. While we are ignorant of the microstate a system is in, the CSM statistical chances play the role of objective chances; but were we to be informed of the exact microstate, the chances would become superfluous – we could use the deterministic mechanism to work out the future evolution of the system with certainty.
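To make this concrete, here is one simple way of writing the CSM macro-posterior chances (a textbook-style formulation, not notation from Schaffer’s paper, and setting aside refinements such as conditionalizing on a low-entropy past): where \mu is the Liouville measure, \phi_\tau the Hamiltonian flow, and M the system’s current macrostate, the chance that the system is in macrostate M' after time \tau is the \mu-proportion of M’s microstates that flow into M':

\[ ch_t(M' \mid M) = \frac{\mu(\{x \in M : \phi_\tau(x) \in M'\})}{\mu(M)}. \]

The micro-dynamics \phi_\tau is fully deterministic; the non-trivial value arises entirely from conditioning on the macrostate rather than the microstate.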

Similarly, in the Bohmian case, the actual future is fixed by the precise positions of the Bohmian corpuscles. But (assuming an equilibrium distribution of these corpuscles) it’s impossible for us to measure these precise positions. The information is inaccessible to all intents and purposes, so the Bohm-chances play the role of objective chances. Unlike in poker, though, it’s physically impossible to obtain the information which would trump the Bohm-chances.
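For definiteness, the two standard Bohmian equations in play here (textbook formulas, nothing specific to the seminar discussion): the corpuscle configuration Q evolves deterministically under the guidance equation, while the quantum equilibrium hypothesis fixes its statistical distribution:

\[ \frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\bigg|_{q=Q}, \qquad \rho(q,t) = |\psi(q,t)|^2. \]

Given equilibrium, no measurement can locate the corpuscles more finely than the |\psi|^2 distribution allows; this is why the trumping information is physically, and not merely practically, inaccessible.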

This points to a notion of admissible information which is, roughly speaking, relative to the rules of the game. In poker, the rules make information about the order of cards in the deck inadmissible; finding out the order would allow us to dispense with the poker-chances, but amounts to cheating. In CSM, finding out the exact microstate of a system would allow us to predict its evolution with certainty, but doing so is impossible in practice. In Bohm theory, finding out the exact positions of the particles would allow us to dispense with the Bohm-chances, but this is physically impossible.

Looked at this way, Lewis’ and Schaffer’s inability to accept deterministic chance arises from a fixed criterion of admissibility. But sticking to absolute admissibility seems unmotivated. The original account of admissibility given by Lewis was, by his own admission, not a rigorous one; but he allowed all historical information to be admissible (except in pathological cases, such as cyclical time). This immediately gives the game away; if historical information is admissible, so is information about the deck of cards when playing poker, so is information about the microstate when doing CSM, and so is (physically inaccessible) information about the exact position of corpuscles when doing Bohmian mechanics. So where is Lewis’ argument that historical information is always admissible? I don’t think there is one – he offers it as a proposal. However, this proposal makes it impossible to think of poker-chances, CSM-chances, and Bohm-chances as genuine chances; so there is good reason to reject his proposal.

Schaffer’s argument against deterministic chance goes via the six platitudes. The kind of deterministic chances we get out of relativizing the admissibility relation are what he calls ‘deterministic macro-posterior chances’; he claims that such chances cannot validate the ‘principal principle’, the ‘realization principle’, or the ‘lawful magnitude principle’. I’ll take these in turn.

The principal principle connects credence with chance. Schaffer envisages someone who knows that (for example) the CSM-chance of an outcome is 1/2, but also knows the exact microstate of the universe and hence knows that the ‘Newtonian chance’ of the outcome is 0. Obviously, such a person should set her credence by the Newtonian chance, and not by the CSM chance. But the natural explanation of this is not that CSM chances are not genuine chances, but that they can be trumped by knowledge of lower-level chances; these lower-level chances are inadmissible relative to CSM. In cases where no information inadmissible relative to CSM is available to the agent, the CSM chances do play the correct role in the principal principle. So this objection fails once we relativize admissibility.
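The point can be put in Lewis’s own schema. In one standard formulation, the principal principle says that for any evidence E admissible at t,

\[ Cr(A \mid \langle ch_t(A) = x \rangle \wedge E) = x. \]

On the relativized proposal (my gloss on the move made above), admissibility carries a level-index: the exact microstate is inadmissible relative to CSM, so the CSM-instance of the principle is simply silent about the microstate-informed agent, just as the poker-instance is silent about the cheat who has seen the deck.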

The realization principle says that, if the chance of an event at time t at world w is non-zero, there are worlds which match w perfectly up to t, and which share its laws, in which the event occurs. Schaffer argues that a believer in deterministic macro-posterior chance will be committed via the RP to the existence of worlds which are ruled out by the deterministic micro-laws. The response for a believer in macro-posterior chance is to deny that the correct version of RP involves a perfect match up to t, and to say instead that the correct version of RP involves only a perfect match as regards all admissible information up to t. This principle reduces to Schaffer’s version if all historical information is admissible; but if only some such information is admissible, then the new RP no longer poses any problem for deterministic macro-posterior chances.
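Schematically (my notation, not Schaffer’s), the two versions differ only in the matching clause. Writing L_w for the laws of w and H_{<t}(w) for its occurrent history up to t:

\[ \text{RP:}\quad ch_{t,w}(e) > 0 \rightarrow \exists w' [\, L_{w'} = L_w \wedge H_{<t}(w') = H_{<t}(w) \wedge e \text{ occurs in } w' \,] \]

\[ \text{RP*:}\quad ch_{t,w}(e) > 0 \rightarrow \exists w' [\, L_{w'} = L_w \wedge H^{adm}_{<t}(w') = H^{adm}_{<t}(w) \wedge e \text{ occurs in } w' \,] \]

where H^{adm} is the admissible (for CSM, macroscopic) portion of history. With only the macro-history held fixed, the witnessing worlds are free to differ in microstate, so no conflict with the deterministic micro-laws arises.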

A similar move rescues deterministic macro-posterior chance from the conflict Schaffer adduces with the Lawful Magnitude Principle. This says that if the chance of event e at time t at world w is x, then the laws of w entail that if the occurrent history up to t is H, then the chance of event e at time t at world w is x. This is just to say that chances are lawfully projected magnitudes. Schaffer argues that CSM chances will not be projected by the underlying deterministic laws. This is quite right – but the underlying deterministic laws are not the right ones to consider. The relevant laws are the CSM laws, which do project CSM chances. Similarly, the history which appears in the history-to-chance conditional should be a macro-history, not a micro-history, or we bring in information inadmissible by the lights of CSM.
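Again schematically (my notation): the relativized LMP asks the CSM laws, not the underlying micro-laws, to project the chances from the macro-history:

\[ ch_{t,w}(e) = x \rightarrow L^{CSM}_w \models (\, H^{macro}_{<t} \rightarrow ch_t(e) = x \,). \]

The Newtonian laws entail no such history-to-chance conditionals, but that is no objection once CSM chances are understood as magnitudes projected by the CSM laws themselves.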

The upshot of all this is that relativizing admissibility avoids the three objections Schaffer has to deterministic macro-posterior chances. His objections boil down to a single objection: that macro-chances can be trumped by knowledge of micro-chances. But if we relativize admissibility, this is no surprise, since micro-chances are inadmissible relative to the theory which produces the macro-chances.

So we have two options. Either we accept relativized admissibility and allow both macro-chances and micro-chances to count as objective chances; then the same world can contain chances of just 0 and 1 at some levels, as well as non-trivial chances at other levels. Or we stick with absolute admissibility and are forced to say that in deterministic worlds there are only trivial chances.

One interesting upshot of relative admissibility is that we can have chances of 0 and 1 in indeterministic worlds. Suppose, as Bohmians sometimes do, that there is an indeterministic micro-micro-dynamics underlying the deterministic Bohmian mechanics; there could then be non-trivial chances at the fundamental level, only trivial chances at the level of the corpuscle motions, and non-trivial chances again at the level of observable phenomena. This kind of picture should actually fit nicely with Schaffer’s denial that there has to be a fundamental level; but I have to think more about this.

Comments more than welcome!

(I should note that the proposal to relativize admissibility has a lot in common with a proposal of Luke Glynn’s, which (following Hajek) takes chances to be fundamentally conditional on histories (where histories could be either fundamental histories or special-scientific histories). I’m not yet sure whether the two proposals are equivalent, but they are in a very similar spirit.)


Naturalness

Naturalness for properties is usually identified by what it does for us – the natural properties are those which are reference-magnets, play roles in scientific or causal explanation, are potentially ontologically fundamental, and so on. How much is being smuggled in by the use of the term ‘natural’? I can think of at least four different available readings for the word ‘natural’ in the phrase ‘natural property’. I suspect people are differentially motivated by these different readings.

First, a property could be natural by not being ‘supernatural’. Just about any mundane everyday property fits this criterion equally well – being a table is just as non-supernatural as being an electron. This reading seems not to do justice to most uses of the notion of natural property, since it doesn’t distinguish grades of relative naturalness, and allows (for example) gerrymandered distributional properties to be as natural as intrinsic fundamental properties. Grue comes out perfectly natural. This can’t be the notion of naturalness at stake here.

Second, a property could be natural by not being artificial. The motivating idea for this is that natural properties are discovered, non-natural properties are invented. But this reading seems to change the subject somewhat, bringing in the realism/nominalism debate where we might have hoped to avoid it. It also makes it difficult to allow for degrees of naturalness – unless, perhaps, complex properties can be part-discovered, part-invented.

Third, a property could be natural in the sense of being a natural choice for application in our theories – this is nicely neutral between realism and nominalism. This reading does seem to get a handle on degrees of naturalness: if we’re looking to describe or manipulate reality, green is definitely a more natural choice than grue in the vast majority of circumstances. However, this form of naturalness seems not to be intrinsic to the property – prima facie, it is a complex relation between a theorizer, their circumstances, and a property. It’s not clear whether this sense of naturalness is unacceptably anthropocentric, though I’m inclined to think it isn’t. It’s also not clear that we can’t pose a further demand for explanation of this notion of naturalness – what makes a particular property natural in this sense? The worry is that a notion of meta-naturalness will be needed, leading to regress.

Fourth, a property could be natural in the sense of actually being part of naturalistic inquiry. ‘Naturalized epistemology’ is born of the metaphor of the philosopher becoming naturalized into the scientific community, as refugees are naturalized into the country which receives them. On this more deflationary view, a natural property is one which has widespread application in our mature scientific theories. The interesting thing about this is that a property could be natural in this sense without being ‘ontologically fundamental’ – maybe science, despite its success and maturity, gets some details wrong.

Someone must have written about this contrast already – anyone any idea where? I’m interested mainly in the latter two readings, but I get the feeling that the first two readings are often projected on to the natural property debate, leading to actual or at least potential confusion.


Supersubstantivalism

I’ve been reading Schaffer on supersubstantivalism today (although he calls the position monistic substantivalism, which is probably a better name). He gives a typically eloquent characterization of the view, presents it as part of a ‘package deal’ with other four-dimensionalist-style positions, and then defends it. The presentation makes it look like there are lots of arguments, but really they reduce to two main lines of thought:

1) (parsimony/harmony) There’s no explanatory gain to positing material objects over and above four-dimensional regions of spacetime; rather, doing so generates demands for explanation which the non-supersubstantivalist cannot easily meet.

2) (parsimony/field theory) Our best physical theory involves ubiquitous and indispensable reference to fields as ontologically fundamental, and these fields play the material-object role so well that it is otiose to add objects over and above the fields.

I think both these motivations are good ones (although 2 seems more compelling than 1) and I’m antecedently inclined to buy the package deal which Schaffer recommends. So I agree with his main conclusions.

Something that occurred to me while reading the paper was how this debate interlocks with the debate in philosophy of physics over how to respond to the ‘Hole argument’. Briefly, there are three alternative versions of substantivalism which survive the hole argument:

a) Haecceitistic indeterminism, which embraces indeterminism about the haecceitistic properties of spacetime points (models related by hole diffeomorphisms represent different possible worlds);

b) Sophisticated substantivalism, which embraces anti-haecceitism about spacetime points (models related by hole diffeomorphisms represent the same possible worlds);

c) Metrical essentialism, which makes the metrical properties of points essential to them (only one of a class of models related by hole diffeomorphisms represents a genuine possible world).

From what he says in the paper, it looks like Schaffer leans towards response c). I, and a lot of people at Oxford, find b) (or some structuralist variant on it) much more plausible. But supersubstantivalism plus b) leads to anti-haecceitism about material objects. Worlds differing only by a permutation of me and you, keeping all the qualitative features fixed, are not really two distinct worlds at all – this follows directly from anti-haecceitism about spacetime points and the conception of material objects as spacetime regions. Anti-haecceitism about material objects isn’t obviously part of Schaffer’s ‘package deal’, and a lot of metaphysicians are paid-up haecceitists about material objects – so it’s a potential cost for his position that combined with b), it entails anti-haecceitism about material objects. A dualistic substantivalist, who thinks of spacetime as a container for matter, could combine anti-haecceitism about spacetime points with haecceitism about material objects located at those points. Supersubstantivalism rules out this combination.

Some non-substantivalist responses to the hole argument can also, surprisingly, be combined with ‘supersubstantivalism’. If we buy ontic structural realism about spacetime points, identifying points with loci of the genuinely real relational descriptions, then we are driven to ontic structural realism about material objects also.

I expect lots of people have had these thoughts already, but I want to put something or other on this blog, so here goes…
