The Argument from Temporal Order looks at the fact that there are regularities in time, i.e. that one event causes another in a regular way, such that we are able to predict the future.
The Argument from Spatial Order looks at the fact that atoms arrange themselves in regular spatial patterns to form a variety of substances. Although he makes a generally approving reference to Michael Behe’s “Irreducible Complexity” and describes the idea at more length in an appendix, Swinburne is careful to say that the Argument from Spatial Order as he describes it does not depend on Behe’s ideas, and is fully compatible with Darwinian evolution. Swinburne’s version of the argument concerns the existence of inorganic materials of the right sorts to form themselves into organic compounds, later into life, and eventually into humans. According to Swinburne, it is amazing that inorganic materials of the right sort should exist in the first place so that Darwinian evolution can get started, particularly carbon, with its very special ability to be organised into long chains in large molecules.
The Fine-Tuning Argument is about how amazing it is that the laws of nature are adjusted in such a way that a variety of chemical elements can come into being, so that chemistry can happen at all.
The Argument from Beauty is that it would make no sense for God to make a universe that wasn’t sufficiently beautiful for us to appreciate it.
All these arguments are made in much the same form. All these varieties of order are the sorts of things a God would create if he wanted to create beings like humans; therefore the conditional probability of each of these is high given God, and the probability that these would arise in a Godless universe is accordingly much lower. Each therefore provides what Swinburne calls a C-inductive argument in favour of God. Remember from the previous chapter that a C-inductive argument isn’t what most people would think of as an inductive argument: it is a line of argument suggesting that the conditional probabilities, when fed through Bayes’ Theorem, will result in a higher overall probability than before.
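For reference, this is just Bayes’ Theorem written in the notation Swinburne uses later in this chapter (h for the hypothesis of theism, e for the evidence, and k for background knowledge):

$$
P(h \mid e \& k) \;=\; \frac{P(e \mid h \& k)\,P(h \mid k)}{P(e \mid h \& k)\,P(h \mid k) \;+\; P(e \mid {\sim}h \& k)\,P({\sim}h \mid k)}
$$

A C-inductive argument, in these terms, is one where P(e|h & k) is greater than P(e|~h & k), so that the posterior probability P(h|e & k) comes out higher than the prior P(h|k).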
If you define a sort of God who you think would do these sorts of things, then of course you will assess as quite high the probability that God has done it. But for this to have much effect on the outcome of Bayes’ theorem (even if you accept this use of it, which I don’t), you would have to assess the prior probability as significantly greater than zero. Time for another down-to-earth example, this time taken from Ben Goldacre’s Bad Science, for no better reason than that I’m a fan of his book and blog. I’m going to use his example of data mining to identify terrorist suspects.
In this example, Goldacre looks at the use of data about individuals to assess whether they should be considered terrorist suspects.
Let’s imagine you have an amazingly accurate test, and each time you use it on a true suspect, it will correctly identify them as such 8 times out of 10 (but miss them 2 times out of 10); and each time you use it on an innocent person, it will correctly identify them as innocent 9 times out of 10, but incorrectly identify them as a suspect 1 time out of 10.
On the face of it, that sounds quite promising. But let’s go on.
These numbers tell you about the chances of a test result being accurate, given the status of the individual, which you already know (and the numbers are a stable property of the test). But you stand at the other end of the telescope: you have the result of a test, and you want to use that to work out the status of the individual. That depends entirely on how many suspects there are in the population being tested.
If you have 10 people, and you know that 1 is a suspect, and you assess them all with this test, then you will correctly get your one true positive and – on average – 1 false positive. If you have 100 people, and you know that 1 is a suspect, you will get your one true positive and, on average, 10 false positives. If you’re looking for one suspect among 1000 people, you will get your suspect, and 100 false positives. Once your false positives begin to dwarf your true positives, a positive result from the test becomes pretty unhelpful.
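For anyone who wants to check the arithmetic, here is a minimal Python sketch of that scaling, using the rates of the imaginary test above (the variable names are mine):

```python
# Expected results of screening populations that each contain exactly
# one true suspect, using the imaginary test described above.
SENSITIVITY = 0.8          # P(test positive | true suspect)
FALSE_POSITIVE_RATE = 0.1  # P(test positive | innocent person)

for population in (10, 100, 1000):
    # On average the one true suspect yields 0.8 of a true positive,
    # which the text above rounds to "your one true positive".
    true_positives = SENSITIVITY * 1
    false_positives = FALSE_POSITIVE_RATE * (population - 1)
    print(f"population {population:>4}: "
          f"{true_positives:.1f} true positives, "
          f"{false_positives:.1f} false positives (on average)")
```

At 1,000 people the false positives outnumber the true positive by over a hundred to one.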
I agree. A test that gives you the wrong result 10 times out of 11 is getting to be less than entirely useful. But it gets worse.
Remember this is a screening tool, for assessing dodgy behaviour, spotting dodgy patterns, in a general population. We are invited to accept that everybody’s data will be surveyed and processed, because MI5 have clever algorithms to identify people who were never previously suspected. There are 60 million people in the UK, with, let’s say, 10,000 true suspects. Using your unrealistically accurate imaginary screening test, you get 6 million false positives. At the same time, of your 10,000 true suspects, you miss 2,000.
So, using Swinburne’s notation, this means that even if P(e|h & k) is very much higher than P(e|~h & k), if P(h|k) is very low, it doesn’t help you very much. In the terrorist example above, P(h|k) is 10,000 in 60 million, or 0.017%. Even though P(e|h & k) is greater than P(e|~h & k) by a factor of 8, the false positives still swamp your numbers. The overall probability is still dominated by the prior probability, so processing the numbers through Bayes means that the probability that a person testing positive actually is a true suspect has only been raised from 0.017% to 0.13%. That isn’t all that useful.
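To make those figures concrete, here is the whole example run through Bayes’ Theorem, again as a minimal Python sketch with my own variable names:

```python
# The screening example in Swinburne's notation:
# h = "this person is a true suspect", e = "the test came back positive",
# k = what we know about the population and the test.
population = 60_000_000
true_suspects = 10_000

prior = true_suspects / population   # P(h|k), about 0.017%
p_e_given_h = 0.8                    # P(e|h & k): the test's hit rate
p_e_given_not_h = 0.1                # P(e|~h & k): the false positive rate

# The headline numbers quoted above.
false_positives = p_e_given_not_h * (population - true_suspects)  # ~6 million
missed_suspects = (1 - p_e_given_h) * true_suspects               # 2,000

# Bayes' Theorem: P(h|e & k) = P(e|h & k) * P(h|k) / P(e|k)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)  # P(e|k)
posterior = p_e_given_h * prior / p_e

print(f"false positives: {false_positives:,.0f}")
print(f"missed suspects: {missed_suspects:,.0f}")
print(f"prior P(h|k) = {prior:.3%}, posterior P(h|e & k) = {posterior:.3%}")
```

The likelihood ratio of 8 does raise the probability by roughly a factor of 8, but eight times next to nothing is still next to nothing.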
Swinburne, as part of his version of the Argument from Spatial Order, regards it as very unlikely that a godless universe would come into being sufficiently fine-tuned to allow “humanly free agents” to appear, even through the process of Darwinian evolution. He helpfully defines what he regards as the necessary characteristics of humanly free agents as follows.
1. Sense organs with an enormous variety of possible states varying with an enormous variety of possible inputs caused by different world states
2. An information processor that can turn the states of sense organs into brain states that give rise to beliefs of prudential or moral importance
3. A memory bank, to file states correlated with past experiences (we could not consciously reason about anything unless we could recall our past experiences and what others have told us)
4. Brain states that give rise to desires, good and evil (desires to eat and drink, to care for others or to hurt them, and to discover whether or not there is a God)
5. Brain states caused by the many different purposes we have
6. A processor to turn these states into limb and other voluntary movements (to turn, for example, my purpose of telling you that it is Friday into those twists of lip and tongue that will produce an English sentence with that meaning)
7. Brain states that are not fully determined by other physical states
Clifford Longley quoted Professor David Deutsch out of context in support of the fine-tuning argument for God when he complained to the Advertising Standards Authority about the Atheist Bus Campaign. I took the trouble to contact Professor Deutsch at the time to find out his true views on the subject. He responded as follows.
I do not believe that the 'fine-tuning' of physical constants provides any sort of argument for the existence of God or anything else supernatural. That is because if the constants had been set intentionally by supernatural entities, then the intentions of those entities must themselves have been at least as 'fine-tuned' when they set the constants, and that fine-tuning would remain unexplained. Hence that supernatural hypothesis does not even address the fine-tuning problem, let alone solve it.
Think about that for a moment. Something as complex as a human needs quite a lot of explaining, and a universe with natural laws that permit Darwinian evolution to get going also needs a lot of explaining. But God, as Swinburne describes him, has all of the characteristics of “humanly free agents”, on a far greater scale and with infinitely more complexity and capability than is possessed by mere humans. Humans have sense organs that can detect light, sound and smell passing through a very small corner of the universe. According to Swinburne, God has sense organs that can directly detect anything going on anywhere in the whole universe. Humans have a processor that can turn brain states into limb movements. But God has a processor that can, by means of “basic actions” (i.e. unmediated direct actions), affect any atom in the entire universe. If the probability of a Godless universe capable of supporting humans is regarded by Swinburne as being very small because of the complexity involved, then Swinburne’s own argument against uncaused complexity applies equally, and with even more force, to the prior probability of God’s existence, or the intrinsic probability of theism as Swinburne calls it.
Swinburne earlier airily described God as immaterial (i.e. not made of matter) and as being all of one substance (presumably an immaterial substance, whatever that might be), and therefore simple. But whatever substance God consists of, to be God he still needs all the capabilities Swinburne describes as necessary for humans, but on a vastly greater scale. Moreover, since God is omnipresent, omnipotent and eternal by Swinburne’s definition, there is not even any means analogous to Darwinian evolution by which God could have come into existence. It would all have to have been there in full from the very beginning. If it is improbable for a universe to exist uncaused with sufficient order and complexity to support a Darwinian process resulting in us, then a God with even greater capabilities coming about uncaused, directly and in full, is even more improbable, by many, many orders of magnitude.
Swinburne of course says nothing about the effect of the teleological arguments on the prior probability of God. He has already addressed in a previous chapter the intrinsic probability of theism, that is, the probability of God on no evidence at all, and he has no intention of revising his estimates of it. He has declared God to be simple, and that is that. Instead, he concentrates solely on the conditional probabilities of the universe given God’s existence or nonexistence.
But this hides the fact that the line of reasoning he follows renders the prior probability of theism so low that even if he decides that the conditional probability of the universe being as it is, given God, is quite high, it isn’t going to shift the Bayesian calculus very far in the right direction. That holds even if Swinburne were to go the whole hog and say that the probability of God creating such a universe is 1.
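To spell out the arithmetic of that last point: setting P(e|h & k) = 1 in Bayes’ Theorem gives

$$
P(h \mid e \& k) \;=\; \frac{P(h \mid k)}{P(h \mid k) \;+\; P(e \mid {\sim}h \& k)\,\bigl(1 - P(h \mid k)\bigr)}
$$

If, as argued above, P(h|k) is many orders of magnitude smaller than P(e|~h & k), the denominator is dominated by the second term and the posterior stays minuscule. With purely illustrative numbers (not Swinburne’s, and not estimates of mine): if P(h|k) were $10^{-20}$ and P(e|~h & k) were $10^{-10}$, the posterior would come out at roughly $10^{-10}$, vanishingly small despite a perfect likelihood.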
So he’s continuing to make up numbers, but now he’s not even being consistent about applying his own rules for making them up.