The importance of irreducibly complex structures is that they cannot, Behe assured us, be built by Darwinism. Darwinism demands that each step in the long walk to the present structure be functional. But that can't be: since all parts are required for function, natural selection couldn't possibly have added them one at a time. Irreducible complexity is therefore a reliable marker of intelligent design. This argument sold a lot of books and got tremendous media airplay. Unfortunately, it was all wrong. Behe's claim was refuted—and in at least two ways. Both showed how irreducibly complex systems could be reached via gradual, Darwinian paths.

Dembski calls the first path “scaffolding.” At each step, a part gets added that improves a structure's function. At some point, however, a substructure might appear that no longer needs the remaining parts. These useless parts could then fall away. The key point is that the substructure we're left with might be irreducibly complex. Remove any part now and all hell breaks loose.

The second path was one that I championed. Dembski calls it “incremental indispensability.” Here's the argument: An irreducibly complex system can be built gradually by adding parts that, while initially just advantageous, become—because of later changes—essential. The logic is very simple. Some part (A) initially does some job (and not very well, perhaps). Another part (B) later gets added because it helps A. This new part isn't essential; it merely improves things. But later on, A (or something else) may change in such a way that B now becomes indispensable. This process continues as further parts get folded into the system. And at the end of the day, many parts may all be required.

…The scaffolding and incremental indispensability arguments are not, Dembski says, causally specific. This means they have not, in any particular biological example, been fleshed out in sufficiently gory detail that Dembski can judge their validity.
You might think scaffolding, say, can account for the bacterial flagellum, but no one has told Dembski just which protein came first and which second...

Orr's two refutations of Behe's irreducible complexity are really just one argument wearing different clothes. Note, as he explains the “scaffolding” argument, the critical leap that he, and virtually every evolutionist, makes: “At each step, a part gets added that improves a structure's function.” Did you catch that? A part gets added that improves a structure's function. He wants to start with a functional system from the get-go. Remember our long-distance, unachievable target? In Orr's scenario we're already there! So, at most, all he has done is show how a pre-existing structure might be modified to work more efficiently.

The other version of the argument follows a similar vein, but instead of the remaining parts drifting away and thereby leaving the substructure irreducibly complex, in this version existing parts alter to such a degree that newly added optional parts become essential. As Orr states: “Some part (A) initially does some job (and not very well, perhaps). Another part (B) later gets added because it helps A. This new part isn't essential, it merely improves things. But later on, A (or something else) may change in such a way that B now becomes indispensable.”

Again, the hurdle of initial function is completely ignored. Some part (A) initially does some job. Orr still doesn't understand, or just doesn't want to admit, that some parts don't do some jobs unless they are first put together to do so. Granting Orr the benefit of the doubt, we are still left with only a pre-existing structure that has been modified to do its initial job better.

The essence of the real problem with Orr's argument, though, is that it relies on fanciful ambiguity: “At some point,” “might appear,” “might be,” “some part… does some job,” “may change.” But hey, what does it matter if it's ambiguous?
After all, parts are just… parts. And every evolutionist knows that these parts are constantly being modified in a world of imaginary intermediate functional systems that gradually evolve into the irreducibly complex systems we see today. Or do they?

Truth be told, all Orr has done in his so-called refutation of irreducible complexity is restate the Darwinian process of survival of the fittest. He has yet to describe how a truly irreducibly complex system (i.e., his some part (A)) begins. Orr is aware that his imaginative so-called refutations lack any causal specificity, but he responds by accusing the ID movement of hypocrisy because its model also provides no examples of causal specificity. Hence, both the neo-Darwinian and ID explanations remain viable, for at best all Orr has done is show that both explanations are logically coherent, thereby inspiring further study into both realms.

Note, however, that mere logical coherence does not indicate degree of probability. Consider the calculation done by Hubert Yockey on the probability of forming one protein, 110 amino acids in length, by chance. His calculation yielded a probability of 2.3 x 10^-75. What does that mean in terms of time? If you had 10^44 amino acids, all floating around in a primordial soup, and they had one chance per second to bond, it would still take 10^23 years to have a 95% chance of forming one functional protein (110 amino acids in length). It is entirely possible to have a sequence of events that is logically coherent yet probabilistically impossible.

Dembski acknowledges this in his book Intelligent Design, in which he posits a probability bound of 10^-150 to use in determining design. In other words, is there some point at which the probability of an event occurring by chance becomes so small as to essentially negate the chance occurrence of the event?
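The waiting-time figure quoted above can be checked with a few lines of arithmetic. Here is a minimal sketch; the per-trial probability (2.3 x 10^-75), the 10^44 simultaneous trials per second, and the 95% threshold are all taken from the passage above, and the rest is standard probability:

```python
import math

p = 2.3e-75            # Yockey's per-trial probability of a functional 110-residue protein
trials_per_sec = 1e44  # 10^44 amino-acid pools, one bonding chance per second

# For a 95% chance of at least one success in n independent trials:
#   1 - (1 - p)^n >= 0.95   =>   n >= ln(0.05) / ln(1 - p)
# math.log1p(-p) evaluates ln(1 - p) accurately even for tiny p.
n = math.log(0.05) / math.log1p(-p)       # ~1.3 x 10^75 trials needed

seconds = n / trials_per_sec              # ~1.3 x 10^31 seconds
years = seconds / (365.25 * 24 * 3600)

print(f"{years:.1e} years")               # on the order of 10^23 years, as quoted
```

The result lands at roughly 4 x 10^23 years, consistent with the ballpark figure in the text.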
Philosophers may debate whether we can make this claim, and mathematicians may debate exactly where this probability bound should be set, but the concept itself is commonly understood. Consider the movie Contact, in which a radio signal from space was attributed to an intelligent agent because it contained a listing of the prime numbers from 2 to 101. What was the impetus for concluding that such a signal came from an intelligent source? The answer is simple: a radio signal containing the prime numbers from 2 to 101 by chance, while logically coherent, was considered so improbable as to negate mere chance as the driver.

It seems that Orr would like to have his cake and eat it too. Zero chance and logical coherence are tricky things. If Orr has merely shown that irreducible complexity is logically accessible by Darwinian methodology, then he must allow for further inquiry into the concept of intelligent design, since it too is logically accessible (just examine the methodology of any archaeologist or SETI researcher). If, on the other hand, he has only restated that the Darwinian methodology can, at most, modify an organism's pre-existing function, then he is still facing Behe's challenge from his book Darwin's Black Box.
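For concreteness, the pattern that Contact's fictional signal encoded is easy to reproduce. A quick sketch (the 2-to-101 range comes from the text above; trial division is simply the most transparent way to generate the list):

```python
def primes_up_to(n):
    """Return all primes from 2 to n by trial division (fine for small n)."""
    return [k for k in range(2, n + 1)
            if all(k % d for d in range(2, int(k ** 0.5) + 1))]

signal = primes_up_to(101)
print(len(signal))            # 26 primes in the sequence
print(signal[0], signal[-1])  # starts at 2, ends at 101
```

Twenty-six specific numbers in exact ascending order: the kind of specified pattern that made chance an unpalatable explanation in the film.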
Saturday, March 20, 2004
Evo (part 2)...
This post is a continuation of the previous post and addresses H. Allen Orr's review of Bill Dembski's book, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Orr also touches on Behe’s claim that irreducible complexity cannot be reached via Darwinian evolution.