Luskin, Miller, Dembski… Huh?

Over at the Discovery Institute weblog, Casey Luskin has managed to channel the spirit of Emily Litella in incomplete fashion. That is, Luskin has the parts about misconstruing a situation and blathering on in outraged fashion down pat, but he never figures out that he’s on about something that exists only in the confusion in his head, and so never gets to the “Oh, never mind” moment.

Luskin prattles on about how ID critic and Brown University cell biologist Ken Miller is horribly mangling concepts from William Dembski. The hurt and outrage come through clearly; Luskin is nothing if not emotive in his prose. Then, just to make sure that everyone can see the original offense, Luskin transcribes exactly what Ken Miller said.

Guess what? There’s no reference to Dembski whatsoever. There is mention made of hands in a card game. Dembski has used card game hands as an example before, though, so maybe Luskin thinks no one else in the history of human culture can refer to hands of cards without it being an allusion to Dembski. Or whatever. Who knows? But the whole aggrieved spiel simply has no foundation.

Besides which, Dembski is actually guilty of precisely the mathematical trick that Ken Miller discusses. In section 5.10 of No Free Lunch, Dembski waves away any consideration of evolutionary pathways in originating the E. coli flagellum, then spends a number of pages developing probabilities of such a flagellum spontaneously self-assembling… now consider Luskin’s transcription of Miller’s statement in that light:

One of the mathematical tricks employed by intelligent design involves taking the present-day situation and calculating probabilities that at the present would have appeared randomly from events in the past. And the best example I can give is to sit down with 4 friends, shuffle a deck of 52 cards, and deal them out, and keep an exact record of the order in which the cards were dealt. We could then look back and say ‘my goodness, how improbable this is, we could play cards for the rest of our lives and we would never ever deal the cards out in this exact same fashion.’ And you know that’s absolutely correct. Nonetheless, you dealt them out and nonetheless you got the hand that you did.
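
As a quick illustration of the point Miller is making (my own sketch, not anything from the documentary or the testimony), a few lines of Python show just how improbable any particular deal is:

```python
import math
import random

# Build and shuffle a standard 52-card deck.
ranks = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]
deck = [rank + suit for suit in "SHDC" for rank in ranks]
random.shuffle(deck)

# The probability of dealing the cards in this exact order is 1/52!.
p_exact = 1 / math.factorial(52)
print(f"Possible orderings: {math.factorial(52):.3e}")
print(f"P(this exact deal): {p_exact:.3e}")

# The "trick" Miller describes: computing p_exact after the fact and
# declaring the outcome too improbable to be chance -- even though every
# possible deal is exactly this improbable, and some deal had to occur.
```

Every run produces a different deal, but the computed probability is identical each time, which is the whole point.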

As I see it, either Luskin can have his long-delayed Emily Litella moment and urge all of us to “never mind”, or he can ‘fess up to the fact that Dembski says one thing but does another[*]. Or, as seems likely, Luskin can ignore it all, count on few in the ID-cheerleading audience he reaches to fire up more than a couple of synapses over the matter, and leave it there.

[*] Casey and I have already argued over whether Dembski provided a specification for an E. coli flagellum. I say that ‘methinks it is like an outboard motor’ doesn’t meet the criteria Dembski sets out that, supposedly, a specification will have (look around p.72 of NFL). Casey thinks otherwise.

Update: Ken Miller’s remarks were apparently introduced in the documentary by the narrator in a way that implied Miller was addressing Dembski, which is where Luskin took his cue to rant. Miller has responded, pointing out that his statements in the documentary were, like his statements in testimony during the KvD case, not directed at Dembski’s “specified complexity”.

Wesley R. Elsberry

Falconer. Interdisciplinary researcher: biology and computer science. Data scientist in real estate and econometrics. Blogger. Speaker. Photographer. Husband. Christian. Activist.

14 thoughts on “Luskin, Miller, Dembski… Huh?”

  • 2007/03/25 at 2:56 am

    I regret to say that you and Miller are wrong here, Wesley. First, the criticism implied by Miller’s example is not that ID advocates are using uniform probability distributions when those are inappropriate. The implied criticism is that they are looking only at the probability of the specific observed outcome, and not at a larger class of possible outcomes. But Dembski’s concept of “specification” is intended precisely to address this issue. I’m not saying that Dembski addresses the issue successfully, but he certainly does not make the simplistic error that Miller attributes to ID advocates.

    Second, you point out that Miller does not attribute this error to Dembski. But I cannot recall hearing any of the other leading ID advocates making this error either. If they have done so, it has been at most a sporadic error and not a staple of the ID movement (though it is perhaps a staple error of Young Earth Creationists). And since Dembski is the leading probability theorist of the ID movement (and arguably the only one), it’s not entirely unreasonable for Luskin to take Dembski as the implied target of Miller’s criticism.

    Of course, ID advocates themselves are constantly making errors as bad as this one and worse. So their outrage over Miller’s mistake should not be taken very seriously. But we critics of ID, as people committed to rational consideration of the evidence, should not shrink from admitting that one of our number has made a mistake when that is the case.

    (I should add that I haven’t listened to the recording myself. I’m simply going by the quotation you’ve given above.)

  • 2007/03/25 at 3:48 am

    I’ve just watched the TV programme on Google Video. (I remember now that I watched it when it was first shown on the BBC.) The passage quoted above is immediately preceded by the following from the programme’s narrator:

    “In two days of testimony, Miller attempted to knock down the arguments for Intelligent Design one by one. Also on his hit list, Dembski’s criticism of evolution, that it was simply too improbable.”

    True, that’s the narrator speaking and not Miller. But the viewer would certainly be led to believe that Miller is responding directly to Dembski.

  • 2007/03/25 at 4:33 pm

    The “junkyard 747” argument is employed, repeatedly, by advocates of intelligent design, and is a mathematical trick of just the sort that Miller describes. If Luskin and Dembski are upset with this being attributed to Dembski, are they now acknowledging that it is a fallacy, and that anyone employing that argument is making a mistake? If not, then what the hell are they wailing about? And if so, then let’s see them explain just what is wrong with the fallacy, and point out that anyone who employs it is confused and ignorant. To help them out, consider the items one might find under a sewer grating. While we don’t know exactly what we’ll find, such items aren’t random; we know, for instance, that they will be small enough to fit through a sewer grating. The odds that, if we look under some specific sewer grating, we will find the keys to Clive Smith’s car are astoundingly small if we have no prior knowledge of the items under the grating or of Mr. Smith’s travels. But if we look under some sewer grating and happen to find some car keys, and they happen to belong to, say, Graham Peck, that is not astounding, as it is not astounding to find car keys belonging to somebody under a sewer grating. Advocates of intelligent design, including Luskin and Dembski, make both of the mistakes involved in this fallacy — confusing a priori and a posteriori probability, and mistaking the result of filtering (aka “selecting”) random items for a random collection.

  • 2007/03/25 at 5:10 pm

    I’ll leave it to you to have your *own* Emily Litella moment, Elsberry. That quote comes from the BBC documentary “A War on Science”, where the documentarian introduces the piece with the following words:

    “In 2 days of testimony [on Dover], Miller attempted to knock down the arguments of Intelligent Design one by one. Also on his hit list, Dembski’s criticism of evolution, that it was simply too improbable. Miller: ‘One of the mathematical tricks…'”

    Barely a beat between the two.

    It’s a legit response based on observing that documentary. Otherwise, what is he to conclude? That an ID-as-war-on-science documentarian lacks the depth to understand what *mathematical* concept Miller was replying to? That somebody reasonably well-educated, with a healthy enough dislike for faith, who would otherwise serve the “pro-Science” worldview, lacks the ability to understand not just the theory but the whole context in which Miller offered his explanation against the *mathematical* (“chief”) arguments of ID?

    If you’re suggesting that the documentarian cannot even get the context of this statement right, do you hold out much hope that he understands the debate? His seeming assent to the “War against Science” take is then perhaps little more than choosing whose word he takes on it.

    And also, how far did you *research* this yourself before you decided that, since it wasn’t in the specific quote, it was an Emily-Litella-like moment?

    What I like is that you go on assumption, and that the guy who links you thinks that because some anti-ID-er wrote a blog piece on this, it’s good enough to post a bare link. And the documentarian thinks that he understands what Miller is talking about. Of course, you all have deniability too. The linker could say he didn’t know that you apparently didn’t watch the documentary. And you can say that you didn’t see the specific attribution in Luskin’s piece, so you didn’t know how he linked Dembski. And of course, the documentarian could be unreliably framing Miller’s words.

    But how are you guys to find comfort in a chain of intellectual errors? I just want to laugh.

  • 2007/03/25 at 11:37 pm


    No, Miller and I are not wrong, at least concerning the “mathematical trick” mentioned. Dembski does make the simplistic error noted, even though elsewhere he does argue that specification should be able to make a difference in the way we look at a specific argument. Simply because Dembski does somewhere, sometime discuss specification does not mean that in every instance he has actually deployed it. And I’ve given an instance where he did not (section 5.10 of NFL), one that I know you are aware of.

    Nonetheless, the way in which the documentary is reported to have handled this is poor. If Dembski’s ideas were to be a topic, then they should have interviewed Wolpert, Shallit, Perakh, Rosenhouse, Wilkins, Sober, Fitelson, Stephens, Chiprout, or you or I, because any of us could have presented Dembski’s arguments including specification and then shredded them.

    If you haven’t heard other major ID advocates pushing the “evolution is too improbable” argument, I’d suggest that you haven’t been listening. Johnson’s “Darwin on Trial”, IIRC, is where he discussed how improbability arguments made at the 1966 Wistar conference put evolutionary biology in doubt. Stephen C. Meyer, director of the DI CSC and another high-profile ID advocate, has been pushing some form of improbability argument since at least 1997, when I met him; cf. “Consider the probabilistic hurdles that must be overcome to construct even one short protein molecule of about one hundred amino acids in length. (A typical protein consists of about 300 amino acids, and some are very much longer).”[The Origin of Life and the Death of Materialism] Michael Behe’s entire edifice of “irreducible complexity” is premised upon improbability argumentation, without the additional prop of Dembskian specification. Witness the Behe and Snoke paper from 2004 in Protein Science. In 1996, David Berlinski, another high-profile ID advocate, opined,

    The very problem that Darwin’s theory was designed to evade now reappears. Like vibrations passing through a spider’s web, changes to any part of the eye, if they are to improve vision, must bring about changes throughout the optical system. Without a correlative increase in the size and complexity of the optic nerve, an increase in the number of photoreceptive membranes can have no effect. A change in the optic nerve must in turn induce corresponding neurological changes in the brain. If these changes come about simultaneously, it makes no sense to talk of a gradual ascent of Mount Improbable. If they do not come about simultaneously, it is not clear why they should come about at all.

    I think that I’ve got the ID crew from the 1997 “Firing Line” debate covered. Is that high-profile enough, or not? I’ll add another…

    I desire no greater certainty in reasoning, than that by which chance is excluded from the present disposition of the natural world. Universal experience is against it. What does chance ever do for us? In the human body, for instance, chance, i. e. the operation of causes without design, may produce a wen, a wart, a mole, a pimple, but never an eye. Amongst inanimate substances, a clod, a pebble, a liquid drop might be; but never was a watch, a telescope, an organized body of any kind, answering a valuable purpose by a complicated mechanism, the effect of chance. In no assignable instance has such a thing existed without intention somewhere.

    I’ve edited one word in the above quote that would otherwise likely give away the source. Take a guess as to where that came from.

    I can certainly take my lumps as needed. I even have comments turned on so that folks like yourself can deliver them as needed, and you might note that your comment is actually seen here. Try being as snarky toward various and sundry ID advocates on their blogs and fora; I think that you’ll find a different reception.

    Have a laugh while you can, though; whether you count Miller’s off-the-cuff criticism as valid or not, there are a variety of more formal critiques of various “mathematical tricks” employed in “intelligent design” that are likely not to be as amusing for you. Something I find endlessly amusing is that I’m an author on a peer-reviewed paper on the topic of “intelligent design”, which means that I have one more such paper on the subject than does William Dembski.

    And when we look at it, what I was wrong about was whether Casey Luskin had some reason to go after Ken Miller. It turns out that he was given a reason by the poor editing of a documentary crew, but that the reason was spurious. They made Miller appear to be addressing something that he was not addressing. As documented above, major ID advocates, several of them not named “Dembski”, have made improbability arguments against evolution that are unsophisticated, having been taken from the creation science playbook. The unsophisticated “mathematical trick” really is a part of ID argumentation.

    So, my moment is a simple “mea culpa”; Casey Luskin did have a reason to spout off. The error that allowed him to do so lay with the documentary maker. I do not have a “Never mind” Litella moment here, because ID utilizes both unsophisticated “mathematical tricks” as well as more erudite ones. We are right to be mindful of that.

  • 2007/03/26 at 3:32 am

    Wesley, you’re missing the point. Let me repeat what I wrote above: “First, the criticism implied by Miller’s example is not that ID advocates are using uniform probability distributions when those are inappropriate. The implied criticism is that they are looking only at the probability of the specific observed outcome, and not at a larger class of possible outcomes.”

    Of course I’m not saying that ID advocates don’t use probability arguments. I’m saying that they don’t generally use the specific probability argument implied by Miller’s statement. The first sentence of Miller’s statement is vague, and could perhaps refer to a “tornado in a junkyard” style argument, which makes the mistake of considering only purely random combination instead of taking non-random natural processes into account. When you say above that “Dembski waves away any consideration of evolutionary pathways in originating the E. coli flagellum,” you are clearly thinking of this type of argument. But Miller’s example involving playing cards cannot possibly be criticising such an argument. A deal of playing cards _is_ a purely random process, so it would be perfectly correct to model it as such. Yet Miller implies that the “goodness, how improbable this is” argument is invalid in this example. So he must be thinking of some other error. His objection is clearly to the fact that the argument considers only the probability of getting the exact same outcome that was observed (“we would never ever deal the cards out in this exact same fashion” and “nonetheless you got the hand that you did”). And this is precisely the problem that Dembski’s notion of specification attempts to address.

    In section 5.10 of NFL, he attempts to address this issue through his use of “perturbation probabilities”. True, he is typically vague there, and does not explicitly link perturbation probabilities with specification. But he does hint at what he is doing when he writes: “Indeed, how can we ever figure out all the possible arrangements of building blocks that fulfill some function?” His objective is to calculate the probability of getting any arrangement of components that could have fulfilled a given function, not just the specific arrangement observed in practice.

    I’m glad to see Miller’s statement that he was _not_ referring specifically to Dembski (see Jeremy’s link above), and that this implication arose from bad editing by the BBC team. The editing also gave Miller’s comment undue prominence, suggesting it was his primary response to the probabilistic arguments of ID, when it was actually just a response to one particular argument (or “trick”), and not necessarily a major one. Nevertheless, I would still say that it is misleading to attribute this argument to “intelligent design” generally. (You have not yet given an example of a leading ID advocate making this argument, though I dare say you may be able to find occasional instances.)

  • 2007/03/26 at 7:55 am

    Thinking some more about this, I started to have the feeling that I may have seen an argument of this sort in the Behe & Snoke (2004) paper. I’ve just checked and it seems this is in fact the case. I haven’t re-read the whole paper, but towards the end they write:

    “On the other hand, because the simulation looks for the production of a particular MR feature in a particular gene, the values will be overestimates of the time necessary to produce some MR feature in some duplicated gene. In other words, the simulation takes a prospective stance, asking for a certain feature to be produced, but we look at modern proteins retrospectively. Although we see a particular disulfide bond or binding site in a particular protein, there may have been several sites in the protein that could have evolved into disulfide bonds or binding sites, or other proteins may have fulfilled the same role. For example, Matthews’ group engineered several nonnative disulfide bonds into lysozyme that permit function (Matsumura et al. 1989). We see the modern product but not the historical possibilities.”

    This appears to be just the sort of argument that Miller was criticising. It is to their credit that Behe and Snoke point out the fallacy themselves, but since they do nothing to overcome this problem and press their claims regardless, it is perfectly reasonable to criticise them for it.

    Since this argument is made by as prominent an ID advocate as Behe, and not in some off-the-cuff remark but in a peer-reviewed paper which has been widely touted by the ID movement, I now think Miller’s remark in the TV programme was quite justified, and I retract my criticism of Miller (but not of the BBC or of Wesley re Dembski; sorry, Wesley).
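
    The prospective-versus-retrospective point that Behe and Snoke concede can be put in numbers. A minimal sketch, with made-up values (`p_site` and `n_equivalent` are assumptions for illustration, not figures from the paper):

```python
# Sketch of the retrospective fallacy (illustrative values, assumed):
# asking for one particular site to acquire a feature is far more
# demanding than asking for any of several equivalent sites to do so.
p_site = 1e-4        # assumed chance a given site acquires the feature
n_equivalent = 20    # assumed number of sites that would serve equally well

p_particular = p_site
p_any = 1 - (1 - p_site) ** n_equivalent
print(f"P(that particular site): {p_particular:.2e}")
print(f"P(some suitable site):   {p_any:.2e}")

# Computing p_particular after the fact, from the modern protein, thus
# overstates the improbability by roughly a factor of n_equivalent.
```

    For small `p_site` the second probability is roughly `n_equivalent` times the first, which is the size of the overestimate the quoted passage acknowledges.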

  • 2007/03/26 at 1:39 pm


    You’re missing one of my central points: the documentarian frames the “War on Science” when he can’t understand what criticism is directed at what, or whether Miller *ever* addressed Dembski. In normal circumstances, evolutionists tend to rail about amateurs butting their noses in; you don’t tend to do that when it publicizes your side.

    I already considered the possibility that he took Miller wrong. That’s conceivable from the number of mistakes that the documentary makes, such as the way it does not actually want to address the deed by which the Dover school stepped over the line. And those are stated by the narrator. I’m guessing it was meant to be a layman’s brief on the ID vs. Science debate.

    As for your mea culpa: that’s fine, and it’s noted. But the Emily Litella crap was quite a bit over the top in the first place. For example, we pretty much knew that nobody was campaigning for “Eagle Rights”, which is what makes her rant about its “popularity” obvious buffoonery. However, all we have to do is listen to the 5 seconds before Miller speaks to see that something was claimed.

    Again, I don’t trust the documentarian that much, so it’s no big deal to me, but Miller’s somewhat non-committal response gives me not a whole lot of confidence that he remembers the exchange better than the interviewer.

  • 2007/03/26 at 4:51 pm

    I think Wesley is correct. If I recall correctly, Dembski’s only hint as to the flagellum’s specification was his statement that “biological specification always refers to function.” Given that the function of the flagellum is locomotion, he should have been calculating the probability of the emergence of any locomotive mechanism, but instead he calculated the probability of the E. coli flagellum specifically. True, he did note that the flagellum is specified, but every event is trivially specified by the description “event”. So the concept of specification made no contribution to Dembski’s analysis.

    But even without this specific example, I think that the problem pointed out by Miller applies to Dembski’s method in general. The problem is that a hypothesis cannot be eliminated on the basis of a single probability calculation. In Miller’s example, the probability of the deck being dealt that way should have been compared to all other possible outcomes. Since all outcomes were equally improbable, there was no reason to reject the chance hypothesis solely on the basis of a given outcome.

    But what if all of the cards were dealt in perfect order, starting with aces and ending with kings? In that case, it makes sense to compare the probability of that occurring by chance with the probability of it occurring under a different hypothesis, e.g. someone forgot to shuffle. Since the latter hypothesis confers a much higher probability on the event, it should be preferred over the former (assuming that its prior probability isn’t too low).

    Dembski commits the same fallacy of trying to eliminate a hypothesis based on a single probability calculation. Suppose Dembski observes an event with a specification T and he comes up with a chance hypothesis H that includes all possibilities except for design. Suppose he takes into account all specificational and replicational resources, and calculates that the occurrence of any event as specified as T is improbable under H. How does he infer design from that fact?

    The valid way to do it would be to note that the probability is higher under a design hypothesis than under H. Using likelihood reasoning, design is the better hypothesis.

    But Dembski doesn’t do it that way. He insists that design hypotheses do not confer probabilities, so he commits the fallacy that Miller talked about: He rejects H solely on the basis of a single low probability under H.

    Sober pointed out this fallacy years ago. Said he, “Dembski’s talk of a ‘probabilistic inconsistency’ suggests that he thinks that improbable events can’t really occur.” Richard has also made this point. When X occurs, you can’t eliminate H just because P(X|H) is low. You have to look at competing hypotheses or other possible outcomes.
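
    The likelihood comparison sketched above can be made concrete. A minimal illustration (my own sketch; the probability of 1 under the “nobody shuffled” hypothesis is an assumption for the example):

```python
import math

# Observe a deck dealt in perfect order and compare two hypotheses.
p_chance = 1 / math.factorial(52)  # P(perfect order | fair shuffle)
p_unshuffled = 1.0                 # P(perfect order | nobody shuffled), assumed

likelihood_ratio = p_unshuffled / p_chance
print(f"Likelihood ratio (unshuffled vs. chance): {likelihood_ratio:.3e}")

# The ratio is astronomically large, so "someone forgot to shuffle" wins
# (unless its prior probability is correspondingly tiny). The fallacy under
# discussion is skipping this comparison: a low P(outcome | chance) alone,
# with no rival hypothesis on the table, licenses no conclusion.
```

    The design inference as Dembski frames it supplies only the denominator of this ratio, which is why the elimination step fails.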

  • 2007/03/27 at 6:46 am

    The card analogy is a bad counterargument for a bad argument. It implies that the argument is wrong because most of the possible outcomes are suitable for life (or an eye or whatever); this is simply not true. While more than one might be suitable, most still aren’t, and that’s enough for the probability argument to work. (The real error in the argument, as Richard already said, is that it presupposes a uniform distribution of possible outcomes.)

  • 2007/03/27 at 8:29 am

    “Take a guess as to where that came from.”


  • 2007/03/28 at 4:43 pm

    “Thinking some more about this, I started to have the feeling that I may have seen an argument of this sort in the Behe & Snoke (2004) paper.”

    Given that I wrote above,

    Michael Behe’s entire edifice of “irreducible complexity” is premised upon improbability argumentation, without the additional prop of Dembskian specification. Witness the Behe and Snoke paper from 2004 in Protein Science.

    there is at least one recent source reminding you of this.

  • 2007/03/29 at 2:28 am

    Mea culpa, Wesley. Perhaps that’s what jogged my memory and made me go back to that paper.

    I’ve admitted I was wrong re Miller and Behe. Can you admit you were wrong re Dembski?

Comments are closed.