Monthly Archives: May 2011

Magical Mathematical Metrics: Intelligent Design Hijinks

Over at the “Uncommon Descent” blog, poster “niwrad” decided to dispute claims of high sequence similarity between human and chimpanzee genomes. “niwrad” posted a statistical test of human/chimp genome comparisons in September 2010, and a follow-up post this week comparing two human genomes using the metric from the earlier post. These were brought to my attention by “CeilingCat” on the AtBC forum. What the pair of posts demonstrates is another instance of “intelligent design” creationism advocates engaging in mathematical hijinks. The “niwrad” performance has more to do with the style of illusionists than it does with actual mathematical and statistical practice. What one has to do with these kinds of things is look for the sleight of hand. Because “niwrad” now has a pair of articles based on the same “trick”, it becomes easier to point out exactly where the prestidigitation happened and why it is reasonable to infer that “niwrad” knows full well that it is a trick.

Let’s review some highlights from “niwrad”’s initial post:

Supporters of the neo-Darwinian theory of evolution have a strong ideological motivation for minimizing the differences between humans and chimps, as they claim that these two species evolved from a common ancestor, as a result of random mutations filtered by natural selection. Now, I don’t personally believe that humans and chimps share a common ancestry, for a host of reasons that would take me too long to explain in this post. Nor do I attach much significance to the magnitude of the genetic differences between these two species, per se, because in my opinion, the fundamental differences between these creatures lie elsewhere. [...]

[...] The comparison I performed was completely different from those usually performed by geneticists, because was purely statistical in nature. In a sense, it could be described as an application of the well-known Monte Carlo method. [...]

[...] While there is only one possible method of comparing identity between strings of characters (the above pairwise comparison), there are many methods of comparing similarity. In other words, there are many measures of similarity, depending on the rules of pattern matching that we choose. [...]

Any final result for a complete statistical similarity test (especially if it is a unique number) is meaningful only if: 1) the distance function is mathematically defined; 2) the rules for pattern matching and the formulas for calculating the result are explained in detail; 3) it is clearly stated which parts of the input strings are being examined; 4) in the event that computer programs were used to perform the comparison, the source codes and algorithms are provided. My explanations below have the goal to meet the three first constraints. To satisfy the fourth condition, the source file of the Perl script used for the test is freely downloadable here.

[...]

For each pair of homologous chromosomes A and B, a PRNG (pseudo-random number generator) generates 10,000 uniformly distributed pseudo-random numbers which specify the offset, or starting point, of 10,000 30-base patterns that are contained in source chromosome A. The 30BPM test involves searching for all 10,000 of these DNA sub-strings of chromosome A in our target chromosome B. Now let F be the number of patterns located (at least once) in chromosome B. The 30BPM similarity is simply defined as F/100 (minimum value = 0%, maximum value = 100%). The absolute difference between 10,000 and F (minimum 0, maximum 10,000) is the 30BPM distance. [...] It can easily be seen that the 30BPM distance will be zero (30BPM similarity = 100%) if the two strings are identical. In an additional test which I performed on two random 100 million-base DNA strings, the 30-BPM distance was 10,000 (i.e. no patterns on A were located in B). [...]

The results obtained are statistically valid. The same test was previously run on a sampling of 1,000 random 30-base patterns and the percentages obtained were almost identical with those obtained in the final test, with 10,000 random 30-base patterns. When human and chimp genomes are compared, the X chromosome is the one showing the highest degree of 30BPM similarity (72.37%), while the Y chromosome shows the lowest degree of 30BPM similarity (30.29%). On average the overall 30BPM similarity, when all chromosomes are taken into consideration, is approximately 62%. Here we have the classic case of the glass which some people perceive as being half-full, while others perceive it as being half-empty. When compared to two random strings which are 0% similar, 62% is a very large value, so nobody would deny that human and chimp genomes are quite similar! On the other end, 62% is a very low value when compared to the more than 95% similarity percentages which are published by bioinformatics evolutionary researchers. Now, I realize that it may seem somewhat arbitrary to choose 30-base-long patterns, as I did in my test, and indeed it is arbitrary to some degree. However, if the two genomes were really 95% similar or more, as is commonly claimed, also a 30BPM statistical test should produce 95% results, and it does not.

Emphasis added to “niwrad”’s central claim.

The claim is, of course, poppycock. Anyone with the slightest pretension to an understanding of probability or statistics would recognize that the proposed “30BPM” metric is non-linear and not directly comparable to straight-up sequence similarity numbers. What’s truly ironic is that if “niwrad” were slightly more astute, he might have realized that his “30BPM” metric actually confirms the high sequence similarity results that he claims to have rebutted.
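For concreteness, here is a rough Python sketch of the sampling procedure “niwrad” describes. His actual test was a Perl script; this is only an illustration of the method as described, not his code, and the function name and arguments are mine.

import random

def bpm30_similarity(chrom_a, chrom_b, n_patterns=10000, width=30):
    # Sample 30-base windows at pseudo-random offsets in chromosome A and
    # count how many occur anywhere in chromosome B, per the description above.
    found = 0
    for _ in range(n_patterns):
        start = random.randrange(len(chrom_a) - width)
        pattern = chrom_a[start:start + width]
        if pattern in chrom_b:  # naive substring search; slow on real chromosomes
            found += 1
    return 100.0 * found / n_patterns  # the "30BPM similarity" as a percentage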

And that brings us to “niwrad”’s second post, the one that aims to apply his “30BPM” metric to intra-specific genome comparisons, this time as a human-to-human comparison.

One reader suggested applying an identical test in order to compare two human genomes. That sounded like a very good idea to me, so I downloaded another human genome dataset from NCBI and performed a test.

[...]

Finally, the average number of pattern matches per chromosome, shown at the bottom of the table, was very different in the two cases: 9616 for human vs. human comparisons, but only 6173 for chimp vs. human comparisons. The average number of patterns without a match for human vs. human comparisons was (10000 – 9616) = 384, or in percentage terms, 384/10000 = 3.84%. The average number of patterns without a match in human vs. chimp comparisons was (10000 – 6173) = 3827, or in percentage terms, 3827/10000 = 38.27%, which is almost ten times greater.

So the bottom-line question is: if, as many evolutionists say, chimpanzee and human genomes are 99% identical, how “identical” are two human genomes?

“niwrad”’s final question is interesting for the very salient reason that he did not provide an answer for it, even though his whole trick depends on the conceit that he has developed a better metric for quantifying sequence similarity than the one used by actual geneticists. There is a reason why “niwrad” failed to answer, though: claiming that there is only 96.16% sequence similarity between two human genomes would be manifestly risible. We know that the “trick” involved here is to confuse genetic sequence similarity with the “30BPM” metric, and that when faced with an obviously nonsensical outcome, “niwrad” punted rather than make explicit the full ridiculousness of his claim.

Above, I mentioned that “niwrad”’s metric actually confirms high sequence similarity values. Here’s how that happens. First, one needs to realize that one doesn’t need “Monte Carlo” techniques to evaluate “niwrad”’s “30BPM” metric: we can develop its properties with the usual probabilistic equations. The parameters of interest to us are the rate of change (C), the length of the analysis window (K), and the probability of a match (p). If we assume a uniform distribution of changes, then our model is simply the probability p that we do not observe a change within an analysis window of K bases at a particular rate of change C. And that is simply expressed as

$latex p = (1 - C)^{K}$

Besides being simple, it is obviously also nonlinear. Notice that “niwrad” made quite a fuss about how his metric did what everyone expects at the endpoints of the distribution, where complete sequence identity obtains and where complete randomness obtains. Notice that “niwrad” did not go anywhere near calibrating his metric against the expectation for a sequence with a known, intermediate amount of similarity. There’s a reason for that: one can’t blather about greater-than-expected dissimilarity if one actually calibrates the technique for known amounts of sequence similarity.

For example, what is the expected “30BPM” result when sequence similarity is actually 99%? We just solve the equation above to yield:

$latex p = (1 - 0.01)^{30} = 0.7397 $

Similarly, when sequence similarity is 99.9%, the “30BPM” expected result is:

$latex p = (1 - 0.001)^{30} = 0.970431 $
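As a convenience, here is that forward calculation as a small Python sketch. This is my own illustration, not anything from “niwrad”’s test; it just reproduces the two worked values above.

def expected_30bpm(similarity, window=30):
    # Expected fraction of windows containing no change, given per-base similarity.
    change_rate = 1.0 - similarity
    return (1.0 - change_rate) ** window

print(expected_30bpm(0.99))   # ~0.7397
print(expected_30bpm(0.999))  # ~0.9704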

So, what about “niwrad”’s “30BPM” numbers that he obtained empirically? We can convert those back into sequence similarity numbers, which are not the same thing as “30BPM” numbers at all. The equation is simply a rearrangement of the one above:

$latex C = 1 - \exp\left(\frac{\ln p}{K}\right) $

“niwrad”’s average “30BPM” value for the human-chimp comparison was 0.6173, giving a sequence similarity estimate of 0.984.

“niwrad”’s average “30BPM” value for the human-human comparison was 0.9616, giving a sequence similarity estimate of 0.9987.
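The back-conversion, again as my own small sketch rather than anything from the original test, applied to those two empirical averages:

import math

def similarity_from_30bpm(p, window=30):
    # Per-base sequence similarity implied by a "30BPM" value p.
    change_rate = 1.0 - math.exp(math.log(p) / window)
    return 1.0 - change_rate

print(similarity_from_30bpm(0.6173))  # ~0.984, human vs. chimp
print(similarity_from_30bpm(0.9616))  # ~0.9987, human vs. human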

I should note that “niwrad”’s “30BPM” metric becomes bloody useless at a point far short of completely random sequences. What point is that? I’m glad that you asked. Given a sample of 10,000 analysis windows, the threshold of usability is reached when the expected number of matches out of those 10,000 samples falls to one-half. That sets p at 0.00005 and gives C as 0.28116. That is, any sequence similarity of less than about 0.719 will look essentially the same in “30BPM” terms and be scored as roughly 0% similarity.
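Spelling out that threshold arithmetic with the rearranged equation from above:

$latex C = 1 - \exp\left(\frac{\ln(0.00005)}{30}\right) \approx 0.28116 $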

The “30BPM” metric deployment by “niwrad” does exactly what it was designed to do: exaggerate dissimilarity. It’s a magic trick intended to make an inconvenient fact disappear. It is a fundamentally dishonest exercise.

Update: Fixed the discrepancy between the symbols I defined and what I used in the equations. References to R should have been C, and now are.


End-of-the-World Playlist

Over at NPR, there was a post asking people for the song to be played during the Rapture as predicted by Harold Camping. I gave several choices in my response:

For years, I’ve kept a directory of songs, just called “apoc”.

Top choices that are explicitly about the end of the world out of that directory would include:

“The Old Gods Return”, Blue Oyster Cult
“The Horsemen Arrive”, Blue Oyster Cult
“Black Blade”, Blue Oyster Cult

Ones that are evocative of end-of-the-world hopelessness or creepiness if not outright apocalypse would include:

“Silent Running”, Mike and the Mechanics
“No Way Out of Here”, David Gilmour
“Voices”, Russ Ballard
“Wings Wetted Down”, Blue Oyster Cult

And no list of end-of-the-world songs is complete without an homage to the people who keep saying it’s this time, for sure, really truly:

“Lunatic Fringe”, Red Rider


Plotting a Dolphin Biosonar Click Train

I’ve been busy recently doing up figures for a paper on dolphin biosonar. One of the figures we ended up turning in earlier this week wasn’t exactly as I wanted it, but deadlines don’t wait. I put a lot of hours into trying to find an alternative way to plot it, but just hadn’t found the right approach.

Now that we’re done with that paper’s submission, I think I’ve found the approach to use in the future.

Here’s the problem: show the power spectral density (PSD) curves for all the clicks in a biosonar click train. What I was using years ago was my own code plotting a waterfall of PSDs on a bitmap. But I tied things too closely to the specifics of how I generated the PSDs, so with the 256-point FFT window I ended up with each PSD exactly 256 pixels wide. That’s less than an inch at standard 300 dpi print resolution.

There are examples of “fence” plots for gnuplot and Python’s matplotlib, but I wasn’t able to get anything that looked much better than up-res’d versions of my originals. Did I mention that I want to assign a particular color to each PSD in the click train?

Yesterday, I was thinking a bit more about the problem, and decided to look into Python’s matplotlib again, this time starting from the demo code for using a PolyCollection, that is, a collection of arbitrary polygons. That is looking quite promising. Here is an example of what I’ve got so far along this approach:

The shapes are nicely done, I like being able to set a transparency value, I can output to a scale and file type I specify, and I can assign a specific color to each PSD in the series. (The colors are randomly set in this demo.) About the only quibble I have with the whole thing is that I’d like to run the “Y” axis in the other direction, so that the earliest clicks are plotted at the back of the plot and the most recent are in the foreground. It’s easy enough to flip around the list, but I haven’t yet figured out how to get the numbering to run the other way.
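For anyone who wants to try the same approach, here is a minimal sketch along the lines of the matplotlib PolyCollection demo. The PSD data, variable names, and colormap are placeholders of my own, not the figure code that went into the paper.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection

# Stand-in data: 24 fake PSD curves over a 0-250 kHz span; real PSDs would
# come from FFTs of the recorded clicks.
n_clicks, n_bins = 24, 128
freqs_khz = np.linspace(0.0, 250.0, n_bins)
psds = np.random.rand(n_clicks, n_bins)

# Build one closed polygon per PSD: drop the curve to the baseline at both ends.
verts = []
for psd in psds:
    verts.append([(freqs_khz[0], 0.0)] + list(zip(freqs_khz, psd))
                 + [(freqs_khz[-1], 0.0)])

# One facecolor per click; alpha gives the see-through waterfall effect.
colors = plt.cm.viridis(np.linspace(0.0, 1.0, n_clicks))
poly = PolyCollection(verts, facecolors=colors, edgecolors='none', alpha=0.6)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.add_collection3d(poly, zs=np.arange(n_clicks), zdir='y')  # one polygon per click
ax.set_xlim(freqs_khz[0], freqs_khz[-1])
ax.set_ylim(0, n_clicks)
ax.set_zlim(0.0, 1.0)
ax.set_xlabel('Frequency (kHz)')
ax.set_ylabel('Click number')
ax.set_zlabel('Relative PSD')
plt.savefig('click_train_waterfall.png', dpi=300)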

About the particulars of this click train… the X axis is in kilohertz (kHz). There are 24 clicks in the click train. It is apparent that the click train shows variation in the spectral content and amplitude of clicks, with a ramp-up to high amplitude and high peak frequency, followed by diminishing amplitude toward the end of the click train. For the highest-amplitude clicks, one may notice that there is some energy in the very highest frequency bins. There was anti-aliasing applied in the recording setup, but it evidently was not entirely adequate to the task. The B&K amplifier used has built-in attenuation of -3 dB at 200 kHz, IIRC. The B&K hydrophone, an 8103, has roll-off at frequencies that high. So, if anything, the magnitude of energy in the highest frequency bins shown here is underestimated. That the high-frequency energy is correlated with the high-peak-frequency, high-amplitude clicks is an indication that this isn’t a general issue with background noise; this is part and parcel of the dolphin biosonar click output. There’s some research that Diane did with the UT ARL group on such high frequency components in dolphin biosonar that I’d like to revisit sometime soon.

Update: A handy page over at StackOverflow put me on course to flip my Y-axis numbers. I’ve also fixed up assigning colors the way that I want them, so now the result is looking much better to me.
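For reference, the part I had been missing boils down to handing the click-number axis its limits in descending order. In terms of the hypothetical sketch above (my actual fix differs in the particulars):

# Put the earliest clicks at the back of the plot by reversing the y-limits.
ax.set_ylim(n_clicks, 0)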

The colors correspond to a classification based on spectral features (features derived from the FFT) first proposed by Houser, Helweg, and Moore in the late 1990s. I don’t process my transform in exactly the same way that they processed theirs, so the resulting classification is not necessarily identical to what they would have found had they processed the same click train. An extended discussion of that will have to wait for another post.

Update 2: That was all too optimistic. There is a bug in “matplotlib”. Actually, if you look closely at the figure just above, the red polygon toward the back is plotted over a blue polygon, and it should not be. Depending on the view angle chosen, “matplotlib” gets the render order of polygons wrong. I was able to reproduce this error directly in the example code provided on the “matplotlib” website. Here’s the problem demonstrated:

I’m posting it here especially so that the “matplotlib” people can have a look. For my data and just 24 polygons, I can find angles where about a third of the polygons are rendered out of order. For other angles, everything renders properly. If you happen to like one of the correct-rendering angles, you can use the output. If the angle you want happens to fall in the incorrect-rendering range, context does not seem to matter; no matter which direction you come to that view from, it still renders incorrectly.


The Synthese Editors-in-Chief Respond to a Petition

The main petition regarding the Synthese disclaimer published in the January, 2011 issue was signed by 470 academics. It asked for a retraction of the disclaimer and additional information about the circumstances that led the Editors-in-Chief (EiC) to include it.

The EiC have now provided a response to the main petition. I received no direct notice of this response; I ran across a post about it on the “New Apps” blog. Prof. Matthen, author of that blog post, noted:

As far as I can tell, this is a website with one item only. This is clearly a tactic to make the response as obscure and invisible as it can be.

To give some more detail on the apparent desire for obscurity, let me note that the web page as provided has only one piece of content, an image that shows the text of a response letter. Posting an image means that the text of the response is not made easily accessible and it is not indexed by search engines as text. (Interestingly, the page was generated out of Microsoft Word and includes metadata identifying Prof. Hendricks as the author of the piece signed by all three EiC.)

As for the text not being out there recorded for search engines and posterity, that is easy enough to fix. Here it is. I’ve transcribed it from the image at the link above. Any misspellings are likely mine.

In response to the petition sent to Synthese:

We have considered the demands contained in this petition very seriously. We have implemented a moratorium on new special issues and we have begun planning appropriate changes to the editorial procedures of Synthese.

The petition asks for full disclosure of all legal threats. There have not been any communications received from Christian philosophers that constituted legal threats. There was a single email from a member of the public expressing the view that the entire special issue was ‘scurrilous and libelous’. We did not consider this email to be a legal threat. It is important to note that this email was received after our initial contacts with Professor Beckwith.

As far as meaningful legal action is concerned, we have received messages that we take seriously as legal threats but these have not come from Christian philosophers. Our ability to provide detailed responses in the blogs is constrained by these challenges.

Professor Beckwith requested an opportunity to respond to Professor Forrest’s paper. We agreed that this was a fair course of action. As regards the inclusion of our editorial statement and the email correspondence with Professor Forrest, it is true that there was considerable discussion between the editors of all aspects of the special issue. We took these matters very seriously and as is often the case with serious deliberation there were some oscillations prior to our reaching a conclusion. Eventually the editors arrived at a shared position, in consultation with the publisher, based on what we judged to be the offending language in two papers.

With respect to the claim that the guest editors were given assurances that no editorial statement would appear, it is true that the guest editors were privy to internal discussions between the editors-in-chief at earlier stages. We were unable to properly communicate later stages of our decision-making process to the guest editors.

We are ultimately responsible for what appears in the journal and we decided to publish the special issue without amendment to any of its papers. We wish to emphasize that our editorial statement should in no way be interpreted as an endorsement of ‘intelligent design’.

At this point, we have a duty to help create procedures to prevent situations of the sort we saw here from recurring. Thus, in consultation with the publisher, we have begun planning a transition to improved editorial procedures and improved oversight which will be in place in 2012. We will work closely with our board or area editors and our advisory board to make this happen.

Johan van Benthem

Vincent Hendricks

John Symons


Revisiting Code

Back in graduate school, I wrote tens of thousands of lines of Delphi code in support of research projects I worked on. Well, it is several years later, and my colleagues and I are getting back to the job of writing things up from those projects. And with manuscripts, one also has figures. A fairly urgent task for my spare time currently is working up requested revisions of figures that were originally produced almost a decade ago. I’ve pulled a couple of things into Python and used Matplotlib for figures, but many things I did with heavy tweaking of the Delphi TChart component, and the simplest path to revised figures for those still lies within Delphi.

While it isn’t exactly simple, I can still figure out where I was getting various things done. There is something to be said for Delphi’s Object Pascal language, where even with some years intervening and a distinct dearth of comments (yeah, mea culpa), I’m getting the gist of things in fairly short order. For one scatterplot, the original had a color progression that went with the time of each click being plotted, so each click was represented by a dot of a hue indicating its position in time in the click train. Well, that wasn’t wanted for print, so the request is for the same plot, but using a grayscale. The color progression doesn’t simply translate, so it was back to the code to re-do the thing in grayscale. I just finished that one up this evening. The Delphi 5 IDE holds up as a usable development tool, but I’ve gotten used to later-generation tools like Apple’s Xcode and Microsoft’s Visual Studio 2010, and it does look dated compared to those.

I do want to eventually have a library of Python classes that will work with the dataset, and I’ve made some progress on that score. I’ve used the ‘struct’ module to parse various files composed of binary Delphi records and used SQLite to stuff the contents into a database. I have a partially completed Python signal processing script to tackle going through all the original signal data I have and applying various techniques that I simply didn’t have the compute power to try before. Again, the sticking point is more that the time I can apply to any of these things is limited, given that so much remains to fix up in our fixer-upper of a domicile.
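As a sketch of that struct-plus-SQLite workflow: the record layout below is entirely hypothetical, standing in for whatever fields the real Delphi records carry.

import sqlite3
import struct

# Hypothetical packed Delphi record: a double timestamp, a 4-byte click index,
# and a single-precision peak frequency.  '<' forces little-endian, no padding.
RECORD = struct.Struct('<d i f')

def read_records(path):
    # Yield (timestamp, click_index, peak_khz) tuples from a file of
    # back-to-back packed records.
    with open(path, 'rb') as fh:
        data = fh.read()
    for offset in range(0, len(data) - RECORD.size + 1, RECORD.size):
        yield RECORD.unpack_from(data, offset)

def store_records(records, db_path='clicks.sqlite'):
    # Stuff the parsed records into a SQLite table for later querying.
    con = sqlite3.connect(db_path)
    con.execute('CREATE TABLE IF NOT EXISTS clicks '
                '(tstamp REAL, click_index INTEGER, peak_khz REAL)')
    con.executemany('INSERT INTO clicks VALUES (?, ?, ?)', records)
    con.commit()
    con.close()

# Usage: store_records(read_records('some_session.dat'))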


Questions, Francis Beckwith, and a Tangible Absence of Answers

Yes, this is yet another bit related to the Synthese flap. One of the issues still outstanding is whether the Editors-in-Chief’s misconduct extends to notifying third-party complainers that the disclaimer was going into the print edition long before the print edition was available. They certainly failed to inform either the guest editors or the authors that any such thing was happening; those people (I’m one of them) had to wait for print copies to appear on their doorsteps to find out.

One piece of hard data is that Francis Beckwith, one of the third-party complainers, submitted his “response” to Barbara Forrest on February 7th, 2011, and the response includes in it explicit reference to the disclaimer in the print edition of Synthese 178:2. This sets the latest date at which Francis Beckwith could have been apprised of the disclaimer’s print status. I didn’t hear about it until Glenn Branch emailed me on March 9th, 2011, to say that a disclaimer had been printed. But I’d like to know exactly how much lead time Beckwith had. The Synthese Editors-in-Chief haven’t been very forthcoming when asked questions about this affair, so that leaves Beckwith to be asked about the situation.

So I asked. This is my email to Beckwith’s published Baylor University email address, sent on April 25th, 2011:

I first received notice of the disclaimer in the Synthese special issue printed edition on 2011/03/09. Would you please tell me the date when the Editors-in-Chief informed you that the disclaimer would be printed in the special issue? I know that this had to be prior to 2011/02/07 given the date of submission of your response that refers to the disclaimer, but I would like to be more precise about this matter.

Thanks,
Wesley

That seems pretty straightforward. It isn’t as though being forthcoming about answering it would even go against Beckwith’s interests. Now, why would I expect an answer, given the context that I’m a known critic of “intelligent design” creationism and its current — and past — advocates? We got on OK at the 2006 Greer-Heard Forum event, for one. Well, and maybe because Beckwith himself has implied as much. Consider his posts over a previous interaction with Barbara Forrest:

[...] Here’s the problem folks: Barbara Forrest is not concerned about truth or justice. For if she were, she would have, at some point in her “unmasking of me,” contacted me to verify or check certain facts. She also would have given a complete account of certain events that when presented in that way do not “prove” anything odd. [...]

[...] Forrest correctly notes that I am no longer a DI fellow. Does she tell you why? No. How come? She never asked me. Why didn’t she ask me? You’ll have to ask her that. But I suspect that if she can’t find by using Google, she doesn’t bother checking.

[...] But did she ask me for the letter? [...]

[...] But she would have known that if….and here’s the clincher…she had asked me. [...]

[...] But Barb would have known this, if…and here’s the clincher… she had just asked.

It sure makes it sound like Francis Beckwith is an open and forthright kind of guy, even when corresponding with trenchant critics.

Which makes it rather puzzling why I don’t have an answer in hand yet, not even one of the “mind your own business” sort.

Maybe Beckwith is snowed under in emails and the first one simply got lost in the shuffle. So I sent a second one on May 4th, 2011:

On 4/25/2011 4:02 AM, Wesley R. Elsberry wrote:
> I first received notice of the disclaimer in the Synthese special issue printed edition on 2011/03/09. Would you please tell me the date when the Editors-in-Chief informed you that the disclaimer would be printed in the special issue? I know that this had to be prior to 2011/02/07 given the date of submission of your response that refers to the disclaimer, but I would like to be more precise about this matter.
>
> Thanks,
> Wesley

In the comments at

http://www.whatswrongwiththeworld.net/2009/05/stove_award_competition_heats.html

you note multiple times that Barbara Forrest could have asked you to clarify particular points, with the implication being that she would have received an answer to her question, had she but posed it.

Let me remind you that the question I asked above is still pending an answer. I would appreciate a response.

Thanks,
Wesley

I thought about other possible excuses, like being on vacation. If so, Beckwith has kept up with his blogging while not checking his email, which doesn’t seem exceedingly likely.

Given the continued lack of response, I am having to re-assess the likelihood that Francis Beckwith doesn’t get asked questions by critics because such questions simply go unanswered.
