Monthly Archives: October 2009

Fuller: Never Say Nevermore

Steve Fuller, persevering exponent of “affirmative action” for “intelligent design” creationism, really let loose in his anti-eulogy for the recently deceased Norman Levitt. You have to read the comments over there, though they tend to be considerably blunter than I would be comfortable making.

I did end up leaving a comment there, though. I got to thinking about other famous examples of anti-eulogy, and the first association I had was Rufus Griswold, whose published notice of Edgar Allan Poe’s death started with, “Edgar Allan Poe is dead. He died in Baltimore the day before yesterday. This announcement will startle many, but few will be grieved by it.” The analogy seemed quite apropos.

Here’s my comment from over at Fuller’s weblog:

Prof. Fuller appears to have selected for himself the role of a Rufus Griswold, as Fuller has himself labored to achieve a legacy of “voluminous worthlessness” while railing at the dead.

Most of humanity labors to attain simple competence, and few can hope to be long remembered for their intellectual contributions. In Griswold’s own case, it was a reviewer who appears to have had the word with staying power:

“What will be his fate? Forgotten, save only by those whom he has injured and insulted, he will sink into oblivion, without leaving a landmark to tell that he once existed; or if he is spoken of hereafter, he will be quoted as the unfaithful servant who abused his trust.”

Trouble at Butler U.

“Academic politics is much more vicious than real politics. We think it’s because the stakes are so small.” — various

The administration at Butler University has been having trouble with the Zimmermans. Prof. Michael Zimmerman of the Clergy Letter Project and Evolution Weekend has a new contract… one that does not have him serving as dean of the College of Liberal Arts and Sciences. In addition, Butler U. Provost Jamie Comstock apparently said harsh words about Prof. Zimmerman that he is treating as defamatory. Also, Prof. Zimmerman’s wife, Prof. Andrea Gullickson, was Chair of the Department of Music, but was first stripped of the chair and then threatened with dismissal from her faculty position.

That’s the usual run of academic politics, and something that would likely not have hit anyone’s radar in the normal course of events. But then we get to Butler U. and the third Zimmerman. This one is Jess Zimmerman, currently a junior enrolled in courses at Butler U. Jess has been pretty understandably upset about the treatment his father and stepmother have received at Butler U. Jess, though, did more than be upset: he blogged about the situation, quoted emails about Gullickson’s treatment, and opined that the administrators at issue were bad news for the Butler U. community. The cherry on top so far as Butler U. was concerned was that Jess did this blogging anonymously.

While I don’t often partake of anonymous commentary, I think there are good reasons to use it. One of the best of those reasons is making it harder for petty tyrants to seek retribution. For a student at the institution being criticized, there are a great many ways that the wrath of an administration pricked by words can be unleashed. What turned a local rumor-and-gossip-circuit story into national news is the way the administration chose to wield its power: they filed a libel and defamation lawsuit against the anonymous blogger.

Then came the revelation of just who it was they were suing: a student and family member of faculty who were quite reasonably seen as victims of administrative power struggles. The suit has been withdrawn, but the outrage lives on. Besides the obvious issue that the lawsuit was frivolous (look at the supposedly defamatory comments), Butler U.’s lawyers were also not thinking about what they would open the school up to in terms of the discovery process. Think that stuff like what happened to Profs. Zimmerman and Gullickson occurs in an absence of high-level communication? Think again. The odds are long that Butler U. would have avoided further embarrassment had the lawsuit gone forward.

Now, though, Butler U. administrators still want to “punish” Jess Zimmerman. Having denied Jess his day in court, Butler U. is offering to provide him his day in kangaroo court, via an unspecified set of punishments chosen for miscreant students. Further, in discussions over Prof. Zimmerman’s own legal claims against Provost Comstock, the university lawyers sought to make it a condition of settlement with Prof. Zimmerman that Jess give up any right of appeal and submit to any (thus far undisclosed) administrative sanctions against him. Prof. Zimmerman quite rightly refused to make any such deal. The cases are separate, and the attempt to join them is nothing better than extortion.

I’ve seen various comments that try to defend the Butler U. administration. I’m afraid that the more I read, the lower my opinion of the Butler U. administration drops. One expects that in complex cases, there will be points that go to favor one and the other side. This situation, though, seems thoroughly lopsided.

I will divulge here that I work regularly with Michael Zimmerman and consider him a friend. I’ve never met Jess, but I wish him the best of luck getting through this trying time. Butler U.? If they wanted my advice, it would be this: give up trying to take out your frustration on a student over having your dirty laundry aired. Nobody who looks at the record is buying the various rationalizations for the vindictiveness, guys.

“The Daily Show” Chides CNN

The Huffington Post has an article up concerning fact-checking.

Saturday Night Live had a sketch where someone playing the role of President Obama claimed that he couldn’t have been bad for the country as rightwingers claimed, since he had actually done nothing while in office. CNN then did a story about fact-checking the SNL skit.

Jon Stewart at “The Daily Show” then took up the futility of CNN getting all bent out of shape over a bit of artistic license in an SNL skit script when CNN had a number of interviewees who simply delivered false information, without CNN bothering to find out whether those people had their facts straight.

It’s pretty sad to find, over and over again, that a major bastion of journalistic integrity nowadays is itself a comedy show.

“If You Get Too Churchy, She’ll Tell You”

That’s the headline on a small story in the St. Petersburg Times about Tampa City Council member Linda Saul-Sena. Saul-Sena has told various and sundry people giving invocations before the council when they stray into inappropriate sectarian territory. Apparently this doesn’t sit well with Jim Crew of the City Clerk’s office, who complained about those leading prayers being “publicly chided and humiliated”.

Saul-Sena says she will continue to remind speakers of the separation of church and state.

Please do, Ms. Saul-Sena.

Mr. Crew, if people giving invocations conduct them in a way that is not exclusionary, I think Ms. Saul-Sena would give them the benefit of the doubt. But sectarian exclusivity is antithetical to the practice of a representative democracy, and those who promote sectarian exclusivity at a secular government function need to be reminded that what they are doing is not in the best interests of the public. If that “humiliates” them, well, that’s only what they let themselves in for.

A slightly different text of the article appears online.

Expert Witness and Manuscript

Over at the Chronicle of Higher Education, they have an article about a lawsuit between Robert N. Proctor and the tobacco industry. The details are scanty in the part of the article that is not behind the subscription barrier, but Proctor serves as an expert witness in cases dealing with the tobacco industry. His opponents want to subpoena a manuscript of Proctor’s, and Proctor wants them to wait for publication like everybody else.

Since I don’t have the full article, I’ll have to state this conditionally. If Proctor has refrained from mentioning the forthcoming book in his expert reports, depositions, and testimony, he should prevail and be able to keep the manuscript to himself until publication. If not, though, his opponents have a right to have a look at the material that has been cited as part of what makes Proctor an expert.

This was seen back in the Kitzmiller v. Dover Area School District case, when William Dembski famously bragged about his qualifications, including those of editing the then-forthcoming textbook from the Foundation for Thought and Ethics, The Design of Life. The plaintiffs issued a subpoena for the unfinished manuscript and got it. Experts have to tread carefully if they wish to keep particular parts of their work private, and it all depends on how well they have done so as to whether they get to hold tight, or are forced to cough it up.

Gun Possession and Assault With a Firearm: Risky Stuff — Or Not?

Over at Greg Laden’s blog, he has notice of a study done via epidemiological procedures to look at the relationship between injury and gun possession. The paper is titled “Investigating the Link Between Gun Possession and Gun Assault”.

Source: Branas, C., Richmond, T., Culhane, D., Ten Have, T., & Wiebe, D. (2009). Investigating the Link Between Gun Possession and Gun Assault. American Journal of Public Health. DOI: 10.2105/AJPH.2008.143099

The researchers concluded,

On average, guns did not protect those who possessed them from being shot in an assault. Although successful defensive gun uses occur each year, the probability of success may be low for civilian gun users in urban areas. Such users should reconsider their possession of guns or, at least, understand that regular possession necessitates careful safety countermeasures.

I started writing a comment at Greg’s blog, but it just kept going and going, so I’m making a post here instead.

I will start by saying that after looking it over pretty carefully, I am unimpressed with this study. It appears to turn a non-significant result about gun possession into headlines by a series of steps that cannot be replicated from the description in the methods.

I’ve read the full paper, and the logic and math by which they arrived at even their numerical results, much less the further conclusions that they draw, are incompletely described. Of course, I’ve had only a brief time when I was engaged in epidemiology research, and that was over twenty years ago, so I’ve done a bit of review, too. They used a case-control experimental design. Because most of their data is nominal, not numerical, they employed “conditional logistic regression”. They mention models, but provide no descriptions of the particulars of the models, nor any parameters. (Cf. the “Statistical Procedures” portion of the Methods section, where they describe a series of models and regression analyses, but without specificity.) As far as I can tell, even if someone else had a similar dataset to work with, they would not be able to fully replicate the procedure taken in this paper simply from its description of its methods. The journal site does not have a link for supplemental materials, so there does not appear to be any more extensive description of data, methods, or results than is present in the paper itself.

Their dataset comprises only shooting incidents. This limits what question they may reasonably be said to have addressed, as I will go into in more detail later.

With this in mind, we conducted a population-based case–control study in Philadelphia, Pennsylvania, to investigate the relationship between being injured with a gun in an assault and an individual’s possession of a gun at the time. We included both fatal and nonfatal outcomes and accounted for a variety of individual and situational confounders also measured at the time of assault.


Even after these exclusions, the study only needed a subset of the remaining shootings to test its hypotheses. A random number was thus assigned to these remaining shootings, as they presented, to enroll a representative one third of them.

If this struck you as a strange dataset from which to draw conclusions about whether having a gun in your possession at the time of an attack modulates your odds of being shot, you are not alone. I’ll run through what the case-control method does. A case-control analysis seeks to determine how strong an association there is between a risk factor and an outcome. In epidemiology, this is put in terms of exposure to a risk factor and whether the person has a disease (the outcome). Cases are the instances of people with the outcome/disease. Controls are people without the outcome/disease. The standard way to split things up is in a 2×2 table.

Risk Factor    Cases    Controls
Exposed          A         C
Unexposed        B         D
Total           A+B       C+D

Given numbers to put in for A, B, C, and D, one can then compute an “odds of exposure” ratio for cases (A/B) and controls (C/D), and an “odds ratio” for the risk factor as (A*D)/(B*C). The overall odds ratio is an estimate of the relative risk faced by people exposed to the risk factor. If a higher proportion of people exposed to the risk factor have the disease/outcome than is seen among people not exposed to the risk factor, then the odds ratio is greater than 1 and indicates an association of the risk factor with the disease or outcome. If it goes the other way, the ratio is less than 1; this likewise indicates an association, but one in which the risk factor is correlated with reduced incidence of the disease or outcome. (Description above based on the fine page here.)
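The arithmetic described above is simple enough to sketch in a few lines of code. A minimal example (the counts here are made up purely for illustration, not drawn from any study):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio (A*D)/(B*C) for a 2x2 case-control table.

    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    return (a * d) / (b * c)

# Toy (hypothetical) numbers: 30 of 100 cases were exposed,
# but only 10 of 100 controls were.
a, b = 30, 70   # cases: exposed, unexposed
c, d = 10, 90   # controls: exposed, unexposed
print(odds_ratio(a, b, c, d))  # (30*90)/(70*10) ≈ 3.86, exposure associated with the outcome
```

An odds ratio of 1 would mean the exposure is equally common among cases and controls, i.e., no association.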

The authors of the present paper were pleased to note that some early work tying smoking as a risk factor to lung cancer was done with case-control analysis. I wasn’t able to grab the early paper cited, but I did find a CDC worksheet that claimed to pass along its numbers. I’ll put that here as an example. It is laid out somewhat differently, since the work showed multiple categories of exposure, so I will put the “unexposed” numbers in the top row.

Cigarettes/Day    Cases (had lung cancer)    Controls (did not have lung cancer)    Odds Ratio
None                        7                              61                           NA
1-14                      565                             706                          6.97
15-24                     445                             408                          9.50
25+                       340                             182                          16.3
All smokers              1350                            1296                          9.08

OK, that lays out how a historic use of the case-control method was done, and in a fairly simple way. Part of the reason the smoking/lung cancer study was so influential, though, lay in the categorization with amount of tobacco use, which goes some way toward showing a dosage-response relation on top of the lumped odds ratio of a 9x relative risk of lung cancer with smoking. One reason the case-control method worked for this was that in the smoking/lung cancer association, there was a strong signal in the data. That comes through in the magnitude of the relative risk values. Keep that in mind as we come back to consideration of the present paper.
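The odds ratios in that table can be checked directly from the quoted worksheet counts, using the unexposed row (7 cases, 61 controls) as the reference for each exposure stratum. A quick sketch:

```python
# Reference (unexposed) row from the quoted CDC worksheet: 7 cases, 61 controls.
ref_cases, ref_controls = 7, 61

# Each stratum: (cases with lung cancer, controls without).
strata = {
    "1-14":        (565, 706),
    "15-24":       (445, 408),
    "25+":         (340, 182),
    "All smokers": (1350, 1296),
}

for label, (cases, controls) in strata.items():
    # Odds ratio of each exposure stratum against the unexposed reference row.
    or_ = (cases * ref_controls) / (controls * ref_cases)
    print(f"{label:12s} OR = {or_:.2f}")
```

Running this reproduces the table’s column of odds ratios, including the rising dose-response pattern across the strata.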

The present study is in no way simple. Nor are its methods, as given, clear enough to replicate. There is, though, a table (Table 1) summarizing the basic numbers they started with in the study. It is sufficient to get the raw odds ratios for many of the conditions that they took note of or adjusted for in the study. The remaining ones are given in units that don’t permit easy calculation in terms of ratios. So here are the odds ratios based on the unprocessed, unadjusted numbers from Table 1. (Note: I’ve had to do a lot of transcription by hand, so there could be typos or, worse, errors. I’ll fix them as they are pointed out to me.)

Odds Ratios

Risk Factor                 All Shootings    Fatal Shootings    Chance to Resist
Gun possession                  0.816             1.13               1.13
Alcohol involvement             2.23              1.97               2.59
Illicit drug involvement        1.56              6.12               1.02
Being outdoors                 49.5              23.8               43.3
Race: Black                     1.00              1.04               1.02
Race: Hispanic                  2.12              1.87               2.22
Gender: Male                    1.03              0.980              1.03
Occupation: Professional        1.15              0.903              1.21
Occupation: Working class       0.521             0.629              0.507
Occupation: Unemployed          1.82              1.77               1.77
Occupation: High-risk           2.50              1.37               3.03
Prior arrest                    1.92              2.14               1.89

One thing that stands out as a huge difference between the case group and the control group is location. The case group (the folks who got shot) were outdoors in 83% of non-fatal shootings and in 71% of fatal shootings. The control group, by contrast, were outdoors at the same time only 9% of the time. That translates into very high relative risk, about 50 times that of the control group, simply by being outdoors. If the authors wanted an attention-grabbing lead, the “being outdoors” risk factor is what they should have played up. “Don’t Go Outside” could have been the headline, validating Philadelphia agoraphobics.
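Since Table 1 reports percentages rather than raw counts, each raw odds ratio above comes from first converting a percentage to odds. A small sketch of that conversion, applied to two of the figures just discussed (the outdoors computation lands near the table’s 49.5; any difference is plausibly rounding in the reported percentages):

```python
def or_from_percent(case_pct, control_pct):
    """Odds ratio computed from the percentage exposed in each group."""
    p, q = case_pct / 100.0, control_pct / 100.0
    return (p / (1 - p)) / (q / (1 - q))

# Outdoors at the time of the shooting: 83% of non-fatal cases vs 9% of controls.
print(round(or_from_percent(83, 9), 1))       # ≈ 49.4, the ~50x figure discussed above

# Gun possession, all shootings: 5.92% of cases vs 7.16% of controls.
print(round(or_from_percent(5.92, 7.16), 3))  # ≈ 0.816, matching the table
```

Nothing fancy: percentage to odds, then the ratio of the two odds.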

For myself, I’d have expected that a dataset to do what was reported would have included assaults with a firearm that either resulted in a shooting of the victim or did not result in the shooting of the victim, and do whatever “adjusting” magic was needed to control for different conditions between those two cases to find the effect of the victim being in possession of a firearm at the time. But this is definitely not the way they’ve described their data and methods.

The fact that precisely the class of assaults in which a victim possessed a firearm and was not shot was excluded from this study seems to me a large methodological hole in the research. That makes the following statement from the discussion all the more bizarre:

Although successful defensive gun uses can and do occur,33,57 the findings of this study do not support the perception that such successes are likely.

But it seems to me that this was precisely the question that was at issue, and the choice of methodology reduced the authors to this flabby and, so far as I can tell, poorly substantiated claim. Again, maybe I’m missing something here, so if someone could clarify this, I would appreciate it.

Another mysterious item from the discussion:

Alternatively, an individual may bring a gun to an otherwise gun-free conflict only to have that gun wrested away and turned on them.

Why is this stated only as a possibility, and not actually quantified? Surely this would have come out in the investigation of the non-fatal shootings, and in a part of the fatal ones as well, and could have been presented at least as a raw number, if not also analyzed to reduce the confounding factors. Given the full pool of over 3,000 shooting incidents from which they drew their dataset, this should easily have been characterized as a proportion of the dataset at the very least.

There is no simple relation between the relative numbers and the resultant risk factors that are reported. Since case-control studies are supposed to work by highlighting how a case group differs from a control group, it is problematic to see just how the small, non-significant differences between these groups in the particular characteristic of interest get expanded into the fairly large risk factors given in the paper. The raw, unadjusted numbers don’t paint gun possession as being generically risky; in fact, across all cases, they show a slight association with lower relative risk. So it is only through the opaque adjustment of confounding factors that the startling relative risk estimates for gun possession come about. That process has a lot to do with how the models mentioned are constructed, and that information is not available via the published paper.

If we assume that all “adjustments” are made to the control data, we can estimate just how much “adjustment” had to occur in order to arrive at the published relative risk numbers for the “gun possession” condition in the three different contexts. I’ll lay this out in a set of tables, too.

All Shootings
Case (%)    Control (raw %)    Raw Odds Ratio    Adjusted Odds Ratio    Est. Adjusted %    Adjustment Factor
  5.92           7.16              0.816                4.46                 1.39               0.195

Fatal Shootings
Case (%)    Control (raw %)    Raw Odds Ratio    Adjusted Odds Ratio    Est. Adjusted %    Adjustment Factor
  8.8            7.85              1.13                 4.23                 2.14               0.272

Chance to Resist Shootings
Case (%)    Control (raw %)    Raw Odds Ratio    Adjusted Odds Ratio    Est. Adjusted %    Adjustment Factor
  8.28           7.37              1.13                 5.45                 1.63               0.221

These tables show that the models used for “adjustment” of the raw data, if applied to the “control” raw data, would have reduced each raw control value for “gun possession” to about one-fifth its original value. This seems pretty aggressive for a model only described in general terms. This information reinforces the point that the startling and headline-grabbing statements about “gun possession” being risky are not founded upon the raw data, but instead come from the model. In order to put trust in the findings, the model needs far more transparency than this paper delivers.
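The estimated adjusted percentages can be reproduced by holding the case odds fixed and solving for the control proportion that would yield the published adjusted odds ratio. A sketch of that back-calculation (my own bookkeeping, not anything from the paper; the fatal-shootings row comes out slightly different from my tabulated value, which I attribute to rounding in the inputs):

```python
def implied_control_pct(case_pct, adjusted_or):
    """Control percentage implied by a fixed case percentage and a target odds ratio."""
    case_odds = (case_pct / 100.0) / (1 - case_pct / 100.0)
    control_odds = case_odds / adjusted_or   # invert OR = case_odds / control_odds
    return 100.0 * control_odds / (1 + control_odds)

# (label, case %, raw control %, published adjusted OR)
rows = [
    ("All shootings",    5.92, 7.16, 4.46),
    ("Fatal shootings",  8.80, 7.85, 4.23),
    ("Chance to resist", 8.28, 7.37, 5.45),
]
for label, case_pct, control_raw, adj_or in rows:
    est = implied_control_pct(case_pct, adj_or)
    print(f"{label:16s} est. adjusted % = {est:.2f}, adjustment factor = {est / control_raw:.3f}")
```

The point of the exercise: whatever the model is doing internally, its net effect is equivalent to shrinking the control group’s gun-possession rate to roughly a fifth of its raw value.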

Going back to the conclusions given:

On average, guns did not protect those who possessed them from being shot in an assault.

It seems beyond the scope of the study to say anything about not being shot in an assault with a firearm. The question that the research addresses is limited by the data that was considered, and what the chosen dataset can address is the question, “Is the general population much different in the characteristic of ‘gun possession’ from a sample of people who were shot in an assault with a firearm?” The raw data clearly says “No, there is no significant difference between the groups,” while the model apparently gives the opposite answer with statistical authority. This should cause introspection on the part of the researchers, not broad statements claiming to have established a basis for a sweeping change in perception of the issue.

Methodologically, comparing gun assaults without shootings to those resulting in shootings could have addressed the issue of whether gun possession changed the risk of being shot in an assault, but no such comparison was attempted. This seems distinctly odd, since the research logistical burden would have been lower with that approach, and reducing the research burden was specifically mentioned as a reason for adopting the case-control approach. If the risks estimated with the adjusted data were accurate, comparing gun assaults without shootings to those with would have provided a simple way to independently corroborate the finding. On the other hand, there would be relatively little cause for “adjustment” of the data between those two cases, and it may not corroborate the modeling effort undertaken here. If the Philadelphia police could be prevailed upon to provide a set of numbers from gun assaults without shootings concerning the total number of such cases and the number in which the victim possessed a gun, it would be sufficient to use as the “control” group and provide an odds ratio for comparison to that in the published paper.
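To make the proposed check concrete, here is the arithmetic it would involve, with entirely hypothetical counts (none of these numbers come from the paper or from any police data):

```python
def odds_ratio_2x2(shot_with_gun, shot_without_gun, unshot_with_gun, unshot_without_gun):
    """Odds ratio comparing gun possession between shot and unshot gun-assault victims."""
    return (shot_with_gun * unshot_without_gun) / (shot_without_gun * unshot_with_gun)

# Purely hypothetical: 700 assault victims who were shot (50 possessed a gun),
# against a police-sourced pool of 2000 gun-assault victims who were not shot
# (160 of whom possessed a gun).
or_check = odds_ratio_2x2(50, 650, 160, 1840)
print(round(or_check, 2))
```

If the paper’s adjusted risk estimates are right, an odds ratio computed this way from real police numbers should come out well above 1; an outcome near or below 1 would tell against the model.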

Although successful defensive gun uses occur each year, the probability of success may be low for civilian gun users in urban areas.

This is entirely speculative; nothing in this study quantifies successful defenses against assaults with firearms, much less anything that would rise even to a guess about probability.

Such users should reconsider their possession of guns or, at least, understand that regular possession necessitates careful safety countermeasures.

I think better safety countermeasures are a goal we can all get behind, but I didn’t need a study with opaque and not obviously functional methodology to tell me that. As for reconsidering possession of firearms, I think I’ll take that with a grain of salt until someone can explain how that conclusion actually follows from the data and methods described. Right now, it looks like a big non sequitur to me.

As noted above, it seems that there is a fairly simple way to check the model against reality. If that is done and it validates the model, I’d be somewhat surprised, but I’d be satisfied on the methodological issues that a real result had been obtained. But in the absence of either a transparent model permitting replication of results or the independent check I outlined above, my impression of the study is that it is more a means for assumptions to be converted into conclusions than a solid piece of empirical work.