Friday, 21 August 2009

Swinburne chapter 3, The Justification of Explanation

In chapter 3, The Justification of Explanation, Swinburne looks at the grounds for deciding whether a particular explanation of some phenomenon is a good one.

Swinburne is more interested in his scientifically unanalysable personal explanations, but starts out with scientific explanations. And he seems to have some very odd ideas about how scientific theories are developed.

He starts out by defining a term he uses a great deal from here on: prior probability. According to Swinburne:

The prior probability of a theory is its probability before we consider the detailed evidence cited in its support. The prior probability of a theory depends on its degree of fit with background knowledge (an a posteriori matter), and on its simplicity.

This is so wrong and backwards that it is hard even to start explaining how terribly bad this line of reasoning really is.

Firstly, the fact is quite simply that scientific theories aren’t devised this way. As a scientist, you don’t start with a theory and then go looking round for detailed evidence which might or might not end up fitting it. You start with the detailed evidence which is so far unexplained by existing theories, and see if you can work out a theory which explains it. Starting with a theory and then looking around for evidence for it is what many religious people imagine scientists do (as I have mentioned before), but I hadn’t expected somebody of Swinburne’s academic achievements to fall into this particular trap.

Next we have this dread word probability again. We aren’t dealing with statistical data, nor are we dealing with known causes. When we develop a new scientific theory, we are trying to elucidate previously unknown causes. The techniques of probability mathematics are entirely inappropriate here. When your causes are unknown, even if you think that you are developing a probabilistic theory, such as used in quantum mechanics, you do not and cannot evaluate the prior probability of a theory in this way.

And even in those cases where you do use probability and statistics a lot, such as in the analysis of clinical trial data to assess the effectiveness of some new drug, you can’t go back into your data and revise your hypothesis after the fact, so that the data is now being used to answer a different question from the one you were asking before you collected the statistics, just to get some kind of positive answer. Games like that make the mathematics go all wonky, even when the use of statistical techniques is otherwise appropriate.
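A minimal sketch (my own illustration, not from Swinburne) of why re-asking questions of the same data goes wrong: if you test enough hypotheses at a 5% significance level, a "positive" result becomes almost inevitable by chance alone, even when every hypothesis is false.

```python
def prob_at_least_one_false_positive(n_tests, alpha=0.05):
    """Chance of at least one spurious 'significant' result in n_tests
    independent tests when every null hypothesis is actually true."""
    return 1 - (1 - alpha) ** n_tests

print(round(prob_at_least_one_false_positive(1), 3))   # -> 0.05
print(round(prob_at_least_one_false_positive(20), 3))  # -> 0.642
```

With twenty retrospective questions asked of one data set, the odds of a bogus "discovery" are nearly two in three, which is why trial protocols fix the hypothesis before the data is collected.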

Lastly, the scientific understanding of simplicity is quite different from Swinburne’s. A scientific theory is regarded as appropriately simple if it includes no more than is necessary to explain the phenomena in question and make predictions concerning the future behaviour of them and possibly also of other phenomena so far unobserved. So Newton’s theory of gravity is appropriately simple because it talks of a gravitational force, and describes its strength. It doesn’t make the claim that the sun exerts its force on the planets by sending out teams of invisible horses to drag the planets along their orbits. Such a claim (whether or not it happened to be true) offers no predictive power and no additional explanatory power relative to the phenomena addressed by the theory.

It is utterly meaningless to say that the theory would have been “simpler” had the gravitational force been inversely proportional to the distance between bodies rather than to the square of the distance. That greater supposed simplicity has no effect on whether the theory has a higher “prior probability” of being right before you look at the detailed evidence, because you already know that the detailed evidence doesn’t fit the simpler theory, and so you know (without any kind of evaluation of probability) that the simpler theory is simply wrong. In any scientific theory, you make your explanation as complex as is necessary to provide a generalisation which allows you to explain existing phenomena and predict future phenomena.
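To make concrete how the detailed evidence already rules out the supposedly "simpler" law: an inverse-square force implies Kepler's third law, T² ∝ r³, whereas an inverse-first-power force would give constant orbital speed and hence T² ∝ r². A rough check (my own sketch, using approximate textbook values for the planets' semi-major axes in AU and periods in years):

```python
# Approximate orbital data: semi-major axis (AU), period (years).
planets = {"Earth": (1.000, 1.000), "Mars": (1.524, 1.881)}

for name, (r, T) in planets.items():
    ratio_cube = T**2 / r**3    # constant across planets if F ~ 1/r^2
    ratio_square = T**2 / r**2  # constant across planets if F ~ 1/r
    print(name, round(ratio_cube, 3), round(ratio_square, 3))
```

T²/r³ comes out essentially identical for both planets, while T²/r² differs by roughly 50%, so the inverse-distance law was dead on arrival regardless of how "simple" it looks.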

Swinburne then goes on to look at personal explanations and the use of prior probability. Leaving aside his dubious claim that personal explanations cannot be analysed scientifically, he is on somewhat firmer ground here, because there are lots of people in the world, and you can make statistical analyses of the sorts of things they do, and “prior probability” can return to its traditional meaning within the realms of statistical mathematics, particularly of Bayes’ Theorem. In other words, when making theories about how humans behave, it is perfectly possible to create those theories in the form of P-inductive arguments.
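Here is what that traditional use of Bayes' Theorem looks like when the prior really does come from frequency data about people. The figures below are invented purely for illustration, not real survey results:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' Theorem, for a hypothesis H and evidence E."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Hypothetical figures: 10% of people cycle to work (a prior drawn from
# frequency data); 80% of cyclists own a helmet, versus 5% of non-cyclists.
# Evidence: this person owns a helmet.
print(round(posterior(0.10, 0.80, 0.05), 2))  # -> 0.64
```

The point is that every number in the calculation is, at least in principle, measurable by counting people. That is precisely what is missing when the same equation is pointed at a unique, unobservable putative cause.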

But this doesn't help much, since Swinburne isn’t much interested (at least not in this book) in the prior probability of events caused by humans. He is interested in the “prior probability” of events of unknown ultimate cause which he thinks might have been caused by God. At this point he is back into the realms of serious abuse of mathematics and statistics. The equations he quotes are all perfectly good equations – when used within their appropriate context. As far as I can tell, he hasn’t made any obvious mathematical howlers, though quite frankly I haven’t looked all that hard, because it really doesn’t matter whether he has the algebra right or not. The use of Bayes’ equations in this context is totally inappropriate and any conclusions based on them are completely worthless.


  1. The term 'prior probability' is not meant to imply that it describes the chronology by which scientists devise and test their theories. Rather, philosophers of science look retroactively at the way theories are replaced by new theories by the actual scientists, and then notice certain "intellectual virtues" possessed by the new theories. The signature case of simplicity being implemented in the history of science is, of course, phlogiston being replaced by oxygen.

    So the question of 'prior probability' as discussed by philosophers of science can be applied to theories retroactively. Perhaps it would help to note that it is sometimes called 'intrinsic probability', denoting the fact that it is the probability of a theory before the data is examined.

    Swinburne's account of simplicity is that a theory is simple if it contains few entities, few kinds of entities, each with few properties, and with mathematically simple relations between them.

    Whether the theory leads us to expect the data is, of course, a separate question from its simplicity.

  2. Philip
    The thing is, the term "prior probability" isn't used by philosophers of science in the context you describe.

  3. It doesn't matter whether or not philosophers of science use this or that term; the concept is used, and it makes sense.

    If two competing theories have the exact same explanatory power, yet one wins as a theory in people's minds, there must be some reason it outdistanced its opponent. Furthermore, the reason must be an intrinsic feature of the hypothesis, since all the other variables between the two are the same.

    And simplicity does seem to be a good criterion, for you can of course come up with a thousand other more complicated theories to explain some datum, but they will all be more improbable since they appeal to so many things not entailed or implied by the datum.

  4. Philip
    Swinburne's use of the term "prior probability", in a situation involving no statistical data but making an explicit reference to Bayes' Theorem, makes no sense at all.

    Merely because the sentences Swinburne writes are grammatically correct doesn't mean that all he writes is meaningful.

    Regarding the persuasiveness of competing theories, I would suggest that factors quite distinct from the truth of a theory have an impact on whether it lodges in the minds of a large number of people. Merely because many people find an idea persuasive isn't of itself evidence of its truth.

    As for "simplicity" as a distinct term separate from "prior probability", I pointed out that Swinburne has a mistaken understanding of that as well, in terms of how scientists evaluate the simplicity of their theories. Your understanding of the word seems better than Swinburne's, and closer both to the scientific understanding and to the example I gave concerning Newton's laws.

  5. Of course scientists use prior probability, as defined by Swinburne. We all do. If you claim to have developed a machine that produces energy from nothing, for example, I wouldn't bother looking at your reasoning, or collecting data to contradict it - I'll point out that it contradicts the law of energy conservation (prior, a posteriori, knowledge), and reject it on that basis (i.e. assign a negligible probability to its being true). If, however, you hypothesized that high-temperature superconductivity is due to the combination of magnetic bands and Cooper-pair condensation mediating each other in the complex structures - I'll ask you to prove it, but I'll be willing to contemplate the truth of it (it's a more likely theory).

    The thing is, that this "prior probability" is nothing but an attempt, at least, to more rigorously represent our current knowledge. It is a reflection of our knowledge, and our misconceptions, not of truth. That something has a high "prior probability", as defined by Swinburne, means nothing but that it seems like a worthwhile theory to investigate; it doesn't imply that it's actually true, only that it fits within our current knowledge (and ignorance and error).

  6. That something has a high "prior probability", as defined by Swinburne, means nothing but that it seems like a worthwhile theory to investigate; it doesn't imply that it's actually true, only that it fits within our current knowledge (and ignorance and error).
    But Swinburne doesn't use prior probabilities in that way. In fact he uses two distinct terms, "prior probability" and "intrinsic probability", interchangeably. Prior probability is a term defined within Bayes' Theorem and has a very specific meaning, quite different from the one you suggest Swinburne is using. Swinburne appears to have coined the phrase "intrinsic probability" in the terms you have described, but then treats it as if it is the same as prior probability, and assigns numbers to it, clearly suggesting that intrinsic probability is some measure of the truth of the proposition.

  7. Well, I can't be blamed for Swinburne's foolishness, nor am I trying to say he's a great philosopher (which clearly isn't the case). I'm just pointing out that something like "prior probability", as defined by Swinburne, is actually a useful and in-use concept.

    That said, I did not mean to imply that high "prior probability" in this sense is unrelated to truth; it is just not indicative of truth (it is indicative only of fitting current knowledge/thought). Said probability *is* the "prior probability" that fits into Bayes' Theorem, in the sense that it is the probability we assess before hearing further evidence, to be updated by Bayes' Theorem after hearing more evidence. I don't see fault in Swinburne's use of Bayes' Theorem to assess likelihoods that something is true; it is rather his probability assessments themselves that are problematic. Specifically, he seems to forget that the "prior" probability as he himself defined it is modified by current knowledge ("an a posteriori matter"); it isn't "prior" in the philosophical sense - it isn't "intrinsic" (its probability relies on further things, not just on itself). The addition of further data changes prior probabilities anyway, so that intrinsic probabilities are not that interesting; we judge plausibility based on our current knowledge, not in ignorance of it (based on "prior" probability, as Swinburne defines it, not on "intrinsic" probability, for which you didn't quote a definition but which I'm sure is probability sans any evidence).

    Again - it is valid to try and construct a "probability space" that will allow you to judge different hypotheses, even though that space doesn't represent real events. It is just that real scientists (and philosophers) define the probability measure on this space in accordance with our knowledge, not our ignorance, and attempt to make good assessments of probability, not ones biased to give a certain result.