What is Intelligent Design? Ask the average proponent of Darwinian evolution, and you will get Creationism. She will tell you ID is nothing more than a god-of-the-gaps scheme concocted by a bunch of fundamentalist Christians. Ironically, if you ask the average believer the same question, you’ll get a complimentary version of the same answer! When it comes to a real understanding of ID, neither side has done the heavy lifting. It’s easier for opponents to excommunicate Intelligent Design from Science, and for those who believe in a Creator to accept it as a given. Yet those at the forefront of Intelligent Design are adding to our understanding of the world: certainly more than critics give them credit for, and probably less than most theists think. The high road in this debate is neither ad hominem attack nor tacit support. So what is Intelligent Design? Straight from the Discovery Institute, the leading ID think-tank:
The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.[i]
Skeptics have a lot to say about ID. They typically focus more on perceived motivations and potential societal impact than on the actual concepts of ID. I would summarize their position as follows:
ID invokes an intelligent cause, which we all know is God. Since science only deals with natural causes, ID is not real science. Furthermore, invoking an intelligent cause for life and the universe hinders scientific inquiry and discovery. An intelligent cause is beyond scientific investigation and therefore adds nothing to our understanding of the world.
The first thing to notice is how design-causation is conflated with an ultimate creator. Critics also confuse the methodology ID scientists employ with the ultimate conclusions one might reach from their findings. A simple analogy will clarify:
Imagine a forensic scientist who is asked to examine a deceased man in order to find the cause of death. The cause may be natural, or it may be the result of foul play–an intelligent cause. Let’s further imagine the man died from a rare toxin that entered his bloodstream and worked its way up to his heart, causing cardiac arrest. Finally, let’s assume the conclusion from forensics, in this case, is murder. If correct, then clearly, the direct cause of the man’s death was human intelligence and not natural processes. Does this mean the methods employed by the forensic scientist to determine the cause were unscientific? Of course not. Does this mean further studies in medicine, heart disease, or the circulatory system should grind to a halt because of his findings? Obviously not. What about our understanding of the world? We might not gain scientific knowledge in this case, but we certainly learned something very important–the cause of the man’s death.
The attempts by critics to cut ID off at the knees are hardly convincing. But perhaps the work ID proponents are doing isn’t science. So let’s take a closer look and delve into ID theory to see if we can find something substantive. Foundational to ID is William Dembski’s concept of Specified Complexity, which essentially denotes the two hallmarks of design: complexity and a specified pattern. Before I go into this in more detail, it is worth noting there is a vast amount of criticism, disinformation, and polemics on the web from those who loathe anything ID. But what you will not find in the criticism is any recognition that design-detection is something all of us do regularly. If you see an arrowhead in the woods, you immediately recognize it as something man-made and not the product of natural forces and erosion. Since in the average critic’s universe our minds are nothing more than biochemical computers, what sort of processing do you suppose goes on when we see an effect and infer a design-cause? Perhaps the process could be discovered, understood, and formulated. That is precisely what Dembski and others are trying to do.
The Explanatory Filter
Dembski’s explanatory filter is used to infer a design-cause from an effect. It is configured to prevent false positives by giving necessity and chance the benefit of the doubt. This configuration means the filter allows false negatives through, where design is present but goes undetected. A good bit of modern art might not make it past chance, for example. That is, the filter might not distinguish an intentional set of splashes of paint on a canvas from several buckets of paint falling off a ladder onto a canvas. This limitation is not a problem; it’s false positives we want to avoid.
The following, which I call the mountain archer analogy, explains how the filter works. Imagine an archer shooting an arrow off the top of a mountain down into a valley ten square miles in size. Further, imagine the archer is so far up the mountain that the arrow could reach any spot in the valley below.
- Hitting the valley is a high probability (HP) and follows necessarily from initial conditions and the law of gravity. The archer could fire over his shoulder, blindfolded, and still hit the valley.
- Hitting one of a small number of trees in the valley that the archer was not aiming for is an unspecified intermediate probability (IP) – not exactly what one might expect, but certainly within reach of chance.
- Hitting a stream running through the valley that the archer was aiming for is a specified intermediate probability (Spec + IP) – the filter would chalk this up to chance and register a false negative, even though this was a good shot and involved an intelligent cause. After all, the archer could have been blindfolded and gotten lucky.
- Hitting a particular pebble the archer was not aiming for is an unspecified small probability (SP). There are lots of pebbles in the valley, and even though hitting any particular one is a small-probability event, hitting some pebble or other is not unlikely.
- Hitting a particular pebble that you had earlier painted a bull’s-eye on is a specified small probability (Spec + SP) and would make it through the filter to design. The archer is either an incredible shot or a good magician – either way, we have a design-cause.[ii] No one in their right mind would attribute such an event to chance. (The sketch after this list runs each of these cases through the filter.)
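To make the logic concrete, here is a minimal sketch of the filter in Python. This is my own toy illustration, not Dembski’s formalism: the probability thresholds and the `specified` flag are assumptions chosen to match the archer scenarios.

```python
def explanatory_filter(probability, specified,
                       hp_threshold=0.5, sp_threshold=1e-10):
    """Toy explanatory filter: gives necessity and chance the benefit
    of the doubt, so it permits false negatives but is structured to
    avoid false positives. Thresholds are illustrative only."""
    if probability >= hp_threshold:
        return "necessity"   # HP: follows from law-like regularity
    if probability >= sp_threshold:
        return "chance"      # IP: within reach of chance, even if
                             # specified (a possible false negative)
    if not specified:
        return "chance"      # SP without specification: a lucky accident
    return "design"          # Spec + SP: infer a design-cause

print(explanatory_filter(0.999,   specified=False))  # valley        -> necessity
print(explanatory_filter(1e-3,    specified=False))  # random trees  -> chance
print(explanatory_filter(1e-3,    specified=True))   # stream        -> chance (false negative)
print(explanatory_filter(2.5e-11, specified=False))  # some pebble   -> chance
print(explanatory_filter(2.5e-11, specified=True))   # marked pebble -> design
```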
Probabilistic Resources
Dembski introduces probabilistic resources, which include replicational (RR) and specificational (SR) resources. Probabilistic resources comprise the relevant ways an event can occur.[iii] RR relates to the number of samples taken; in the above analogy, it could be the number of shots fired. SR refers to the number of opportunities or ways to specify an event; using the same analogy, it could be the number of pebbles with bull’s-eyes. The greater the number of pebbles with targets, or the greater the number of shots fired, the greater the probability of hitting a target.
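As a rough illustration of how the two resources scale the odds (my own sketch; the numbers, the independence of shots, and the non-overlap of targets are all simplifying assumptions):

```python
def p_specified_hit(p, sr, rr):
    """Chance of at least one specified hit, where p is the chance a
    single arrow hits a single marked pebble, sr is the number of
    marked pebbles (specificational resources), and rr is the number
    of shots fired (replicational resources)."""
    p_per_shot = min(1.0, sr * p)        # any marked pebble, one shot
    return 1 - (1 - p_per_shot) ** rr    # at least one hit in rr shots

print(p_specified_hit(p=2.5e-11, sr=1, rr=1))         # ~2.5e-11
print(p_specified_hit(p=2.5e-11, sr=1000, rr=10000))  # ~2.5e-4
```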
Universal Probability Bound
If the marked pebble has a surface area of one square inch, then the odds of hitting it at random are roughly 1 in 4e10, or one in 40 billion[iv] – over a hundred times less likely than winning the Powerball jackpot with a single ticket. Even with such improbable odds, critics would argue that this is still within reach of chance. This remaining doubt is where Dembski introduces his universal probability bound (UPB) – a degree of improbability below which a specified event of that probability cannot reasonably be attributed to chance, regardless of whatever probabilistic resources from the known universe are factored in.[v] The UPB corresponds to odds of 1 in 1e150 (a one followed by a hundred and fifty zeros). Odds of 1 in 1e150 are so small that it would be about as likely to win the Powerball roughly eighteen times in a row with one ticket each – something even the contrarian realizes would be the result of intelligence and not luck.
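The arithmetic is easy to check (a sketch; the one-square-inch pebble and ten-square-mile valley come from the analogy, and the Powerball figure assumes the published single-ticket jackpot odds of about 1 in 292 million):

```python
SQ_INCHES_PER_SQ_MILE = 63_360 ** 2       # 63,360 inches in a mile
valley_sq_inches = 10 * SQ_INCHES_PER_SQ_MILE
print(f"1 in {valley_sq_inches:,}")       # 1 in 40,144,896,000 (~4e10)

POWERBALL_ODDS = 292_201_338              # single-ticket jackpot odds
print(valley_sq_inches / POWERBALL_ODDS)  # ~137x less likely than the jackpot
```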
But what does Dembski mean by: regardless of whatever probabilistic resources from the known universe are factored in? Here he is basing the limit of his probabilistic resources on the maximum number of processes or functions that could possibly be executed. This maximum is the product of the number of elementary particles in the known universe (1e80), repeating every instant (1e45 per second, based on Planck time), over a generous upper bound on the number of seconds since the beginning of time (1e25) = 1e150. This seems like overkill, but apparently you need this to overcome skepticism.[vi]
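The three factors multiply out exactly (the figures are Dembski’s published ones):

```python
particles  = 10 ** 80  # elementary particles in the known universe
per_second = 10 ** 45  # state changes per second, roughly inverse Planck time
seconds    = 10 ** 25  # generous upper bound on seconds since time began
print(particles * per_second * seconds == 10 ** 150)  # True: the UPB
```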
But does the skeptic need this much overhead? Take, for example, the estimated number of grains of sand on all of the beaches on earth. Say I traveled to a random spot on a random beach and dug down and marked a single grain of sand. Now, if you go to a random beach anywhere on earth, to a random spot, dig to a random depth (up to 5 feet), and grab a random grain, the odds of it being the same grain as the one I marked are estimated at one in 1e18. A rational person would never believe this would happen by chance. Even so, those odds are 132 orders of magnitude better than one in 1e150. The rational position is to realize there comes a point where theoretical possibility must give way to practical possibility. Odds of one in 1e150 are not zero, so a specified, small-probability event at this scale is not theoretically impossible, but it is rational to conclude its practical impossibility.
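A one-liner shows how far apart the two scales are (the 1e18 sand-grain estimate above is the assumed input):

```python
import math

sand_odds = 1e18   # one marked grain among all beach sand (estimate above)
upb_odds  = 1e150  # Dembski's universal probability bound
print(math.log10(upb_odds / sand_odds))  # 132.0 orders of magnitude apart
```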
Dembski’s filter appears to be sound. But there is another criticism from detractors: affinities and constraints in the probability landscape can create the appearance of design completely by chance. Say, for example, the archer shot multiple arrows at random, each tethered by a string of equal length. The resulting semicircle pattern on the valley below might be taken for a design-cause, since it is unlikely such a pattern would emerge at random. An inference through Dembski’s filter might fail to see that the constraint (the string) has greatly reduced the probability landscape, so that each shot necessarily falls along a semicircular swath in the valley below. And perhaps there are unknown laws governing the universe where affinities and constraints shape chaos into order. That’s where ID still has its work cut out for it. However, we are rightly incredulous that such life-principles shaping the probability landscape would emerge as mere furniture of a material universe that has no intention of producing life. Rocks not only dream of nothing, but they also intend nothing – certainly not conscious observers.
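The tethered archer is easy to simulate (a toy Monte Carlo of my own; the string length, shot count, and the downhill half-plane are arbitrary assumptions). Every shot lands on the same arc, so an apparent pattern emerges with no aiming at all:

```python
import math
import random

def tethered_shot(string_length=100.0):
    """One arrow fired in a random downhill direction but tethered by a
    string: it always lands exactly string_length from the anchor."""
    angle = random.uniform(0, math.pi)  # downhill half-plane only
    return (string_length * math.cos(angle),
            string_length * math.sin(angle))

points = [tethered_shot() for _ in range(1000)]
# Every point sits on a radius-100 semicircle: the pattern is forced
# by the constraint, not chosen by an intelligence.
assert all(abs(math.hypot(x, y) - 100.0) < 1e-9 for x, y in points)
```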
[i] http://www.discovery.org/csc/topQuestions.php#questionsAboutIntelligentDesign
[ii] This analogy does not take into account Dembski’s universal probability bound of 1e-150, which is over 139 orders of magnitude more stringent than the odds in this analogy.
[iii] William Dembski, The Design Inference, p. 181.
[iv] This assumes an equal probability of hitting any location across the valley below, which in real life would not be the case – for example, if you could hit the corners, you could likely land outside the valley as well.
[v] ISCID Encyclopedia of Science and Philosophy (1999)
[vi] This seems straightforward in terms of replicational resources, but I question the validity of also including specificational resources here. Samples repeated as quickly as physically possible, in every conceivable location in the universe, since the big bang do seem to set an upper limit for replicational resources, but I do not see how that bounds the ways specifications can be varied. Imagine every elementary particle in the universe has a piggyback random number generator cranking out 200-digit numbers, one every Planck time, since the big bang. One would reasonably expect that the significand of the square root of two had not been generated out to 200 places. But what about the irrational square roots of any positive integer and their significands to 200 places? I’m sure SETI would consider a binary transmission of 200 digits of the square root of two to have an intelligent cause – but what about the square root of 3, or 7, etc.?
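Those competing specifications are trivial to enumerate (a quick sketch using Python’s decimal module; the 200-digit cutoff comes from the thought experiment above):

```python
from decimal import Decimal, getcontext

getcontext().prec = 210  # working precision comfortably beyond 200 digits

def significand(n, digits=200):
    """First `digits` significant digits of sqrt(n), as a string."""
    return str(Decimal(n).sqrt()).replace(".", "")[:digits]

for n in (2, 3, 7):      # each root is an equally crisp specification
    print(n, significand(n)[:20] + "...")
```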