Monthly Archives: March 2012

Python and the STFT

I’ve been going through biosonar data, and while the SciPy specgram method is serviceable, I was interested in a short-time Fourier transform (STFT) implementation. There are a couple of ad hoc routines on Stack Overflow and the like, but I’ve started off with the Google Code PyTFD module. There are others out there as well; at least two projects that include an STFT implementation are aimed at extracting time and frequency data from musical recordings. I may have a look at one or both of those at some point.

In any case, installing PyTFD involves downloading the code via Subversion and then running the setup.py script.

Since I spent more time than I think was absolutely necessary getting a couple of examples done with the STFT, let me run through an example in the hope that it helps somebody.

  1. # Imports
  2. from __future__ import division
  3. from pytfd.stft import *
  4. from pytfd import windows
  5.  
  6. import numpy as np
  7. import numpy.fft as nf
  8. import matplotlib
  9. matplotlib.use('Agg')
  10. import scipy
  11. import scipy.signal as spsig
  12. import pylab
  13. from pylab import *
  14.  
  15. # [...]
  16.  
  17.     w = windows.rectangular(8)
  18.     Y_stft = stft(clkdata,w)
  19.     extt = [0,Y_stft.shape[0]*1e-6,0,5e5]
  20. pylab.imshow(abs(Y_stft)[len(Y_stft)//2:],
  21.                  extent=extt,
  22.                  aspect="auto",
  23.                  origin="upper")

OK, so there are a fair number of things to be imported along the way. The first three items (lines 2 to 4) are specifically for setting up access to PyTFD’s STFT method. Line 17 sets up the window function to use in the STFT. Line 18 actually does the work, producing a multidimensional NumPy array with the STFT result given a NumPy array input and the window.
Line 19 sets up the extent array to express the size of the X range and the Y range covered by the STFT. Lines 20 to 23 put the result in a plot. There are some issues there. The STFT results are essentially a whole series of Fourier transforms; those have both negative and positive frequencies, and are complex values to boot. So the “abs” function provides a magnitude for each point. The slice yields just the positive frequency range. Then the extent gets set to the range represented by the STFT. The “aspect” parameter is set to “auto” so that the X and Y scales can be calculated separately by Matplotlib. The “origin” is set to “upper” to put the frequencies in the expected orientation.
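For anyone who doesn’t want to install PyTFD, the same basic computation can be sketched with plain NumPy. This is my own minimal version, not PyTFD’s implementation: the function name stft_naive is my invention, and I’ve assumed a hop of one sample and a rectangular window to mimic the call above.

```python
import numpy as np

def stft_naive(x, wlen=8):
    """Return an array of windowed FFTs: rows are frequency bins, columns
    are time positions, hopping one sample at a time."""
    w = np.ones(wlen)                       # rectangular window
    frames = []
    for i in range(len(x) - wlen + 1):
        frames.append(np.fft.fft(x[i:i + wlen] * w))
    return np.array(frames).T

# Example: a 1 kHz tone sampled at 8 kHz
fs = 8000
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 1000 * t)
Y = stft_naive(x, wlen=8)
print(Y.shape)   # (8, 249): 8 frequency bins, 249 time positions
```

With an 8-point window, a 1 kHz tone at 8 kHz sampling lands exactly in bin 1, so the magnitude of row 1 dominates the result, which is a quick sanity check on any STFT routine.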

Here are a couple of the outputs:


The Cattleman’s Sage Grouse Rant

An op-ed piece by Mike Deering, the National Cattlemen’s Beef Association Director of Communications, lays out an argument to let ranchers handle conservation of the sage grouse without involving the protection of the Endangered Species Act:

The wackos – as I still prefer to call them – have successfully weaseled their way to the front steps of BLM and the U.S. Forest Service. Late last year, the agencies released a plan to implement sage grouse protections on 45 million acres of federal lands with the goal of preventing the listing of sage grouse. While that’s a worthy goal, the plan fails to recognize that grazing is responsible for retaining expansive tracts of sagebrush-dominated rangeland, stimulating growth of grasses, eliminating invasive weeds and reducing the risk of wildfire. These services can only be provided by ranches that are stable and viable. Without grazing, sustaining and increasing the sage grouse population would be nearly impossible.

Grazing prevents fires. Fires cause death. Death equals barbecued chicken. It is that simple.

OK, let’s posit that Deering is giving it to us straight for a moment. What does he say next?

Ranchers stand ready to work with the government to prevent the listing of the sage grouse, which has the potential to put public lands grazing to a complete halt (according to Dave White, Chief of the Natural Resources Conservation Service, March 7, 2012).

Hmmmm. This doesn’t exactly inspire confidence that the NCBA is altruistically looking out for the best interests of sage grouse as a species. It sounds like a group that recognizes that a major resource may no longer be available to them and is taking steps to prevent losing that resource for their own use.

That line about “barbecued chicken” is one instance of the rhetorical framing applied to sage grouse throughout the article:

“massive chicken barbecue”

“that barbecued chicken I mentioned earlier”

“the chicken debacle – officially called the greater sage grouse”

“ignore the chicken and set their sights on ranchers”

“not protect the chicken”

This isn’t just what passes for folksy charm in the NCBA. Likening sage grouse to chicken blurs distinctions between a native species in undisputed decline and a ubiquitous introduced domestic species. How could something that is called chicken deserve protection under law, after all?

Now let’s drop the notion that Deering’s argument stands on its own. No, Mike, it is not “that simple” that ranching practices will produce a thriving population of sage grouse. The particular threat that Deering concentrates on, fire in sage habitat, is not always and everywhere a bad thing. Sage grouse need a particular mix of sage and other plants, and fire at a particular rate helps clear too-dense sage and restores a balance between cover and plants supporting forage for sage grouse. So a simple “no fires” policy is not a win for sage grouse.

Let’s have a look at another part of Deering’s rant:

I admit, those are some pretty inflammatory words. But these extremists deserve every ounce of it and I will back it up with one of many examples. Let’s hone in on that barbecued chicken I mentioned earlier. Extremists, for the most part, have refused any meaningful reform to the Endangered Species Act, which has resulted in a less than two percent species recovery rate over the past 40 years. Instead of looking at ranching as part of the solution, they spout rhetoric over facts. Look no further than the chicken debacle – officially called the greater sage grouse. Instead of working aggressively to prevent the listing of the sage grouse on the Endangered Species List, they are working aggressively to ignore the chicken and set their sights on ranchers. Say what? Yeah, their end goal is to end ranching; not protect the chicken.

Deering doesn’t mention here what, exactly, constitutes “reform” of the ESA. One might take it to mean specific things that would improve its record on the metric of “species recovery rate”, i.e., how often listed species become delisted. (A comment I’ve seen elsewhere notes that this is the wrong metric to use to evaluate the ESA; instead, one should look at the rate of extinction of listed species.) One would be wrong, though; the NCBA is on record with its list of proposed “reforms” to the ESA, and these have nothing at all to do with making the ESA more effective. They would, instead, guarantee less effectiveness of the ESA, putting in place automatic delisting criteria, providing exemptions that let certain classes of people off the hook for not following ESA regulations, placing even more burdens on those seeking to have a species listed, providing money to private property owners to implement policies, and adding logistical and paperwork burdens in the process of listing any species under the ESA.

I don’t know why activists would want to ‘aggressively prevent the listing of the sage grouse on the endangered species list’. Deering certainly doesn’t inform us as to why an activist should consider that a bad thing. Nor is the claim that protecting sage grouse is not the aim of people urging conservation supported in Deering’s rant by anything other than his assertion.

I’m not anti-rancher. But I am pro-sage grouse, and I think that preserving sage grouse is going to require more than stopping fires on grazing lands, which is the only thing I hear as a concrete policy coming out of the NCBA. The record of action on sage grouse conservation is a continual off-putting of listing as an endangered species, which is due to intense political action, not biological reality.


Time Article on Coppedge v. JPL

Time’s web page has an article up by Jeffrey Kluger. Kluger is a lawyer and relates his reaction to the briefs filed in the case of David Coppedge v. Jet Propulsion Laboratory and Caltech.

Groups like the intelligent design community are not always free to pick their poster children, and it’s unfortunate for them that Coppedge is one of theirs. It’s true enough that employers and colleagues in a science-based workplace might be uncomfortable with the idea of a coworker who believes in intelligent design. But neither the Constitution nor employee-protection laws can regulate feelings — no more than they can or should regulate belief systems. They can, however, circumscribe behavior on both sides of that faith-divide. From the filings at least, JPL appears to have stayed well within those boundaries. Coppedge appears to have jumped the rails entirely.

Yes, even disinterested third parties get it now.

JPL’s brief discusses a lack of self-awareness on Coppedge’s part. The tone-deafness isn’t just Coppedge, though. It permeates the DI and the IDC community. They are so intent on instantiating their myths that they cannot seem to wrap their heads around the idea that one of their own could be in the wrong. You’d think with all those lawyers in their camp that they would be better at this than they are.


Raspberry Pi: The Shopping List

I ordered a Raspberry Pi Model B computer from Newark, so now I’m waiting for stock to catch up with the truly phenomenal initial demand.

If you are wondering what the Raspberry Pi is, it is a small computer board based on a Broadcom System on Chip (SoC). The SoC is ARM-based, so the operating systems offered so far are Linux distributions. The board has a CPU, GPU, 256MB of RAM, an SD card interface, a USB host interface, audio output, an Ethernet network port, and video output via composite or HDMI interfaces. And it costs $35.

The Raspberry Pi is the brainchild of a United Kingdom non-profit organization that aims to make a low-cost programming platform to re-invigorate interest in computer science among students. Since computers have turned into consumer devices rather than primarily being programming platforms, students don’t have a low-cost way to spark an interest in programming itself. Until, the Raspberry Pi folks hope, now.

But the Raspberry Pi is just now starting to be distributed in quantity. As the device comes, it is just a computer board a little bigger than a credit card. It doesn’t even have a case. So there is some shopping to be done to trick out your Raspberry Pi once you order it.

The first order of business is power. The way I’ve seen this discussed is to get a powered USB hub and micro-USB power cable. The Raspberry Pi’s power plug is a micro-USB interface. That port only hooks up power, so there’s no problem hooking that into a powered USB hub that you’ll also use for peripherals.

You’ll need an SD card and a downloaded image of an operating system to run. I see people talking about 4GB or larger SD cards. I found a SanDisk Class 4 8GB for less than $3.

Getting something on-screen requires either an RCA composite video cable and monitor, or something that you can hook up via the HDMI output. I’ve got two DVI-equipped monitors here, so I’m looking to link my Raspberry Pi with a cable that goes from HDMI to DVI.

The rest of the items are peripherals that hook up via the powered USB hub discussed above. If you don’t have a USB keyboard and mouse, or if you prefer a trackball, or if you want to add audio recording capability, that all happens by adding USB devices. Here are some items I located:

I’ve put the Amazon links in the sidebar.


Updating the Modular CV

Some time ago, I wrote about making a modular curriculum vitae in $latex \LaTeX$. Since that time, I’ve had to update the contents. Things change. Colleagues request current CVs to include in grant proposals, and given the current state of public sector employment it is no bad thing to have the CV ready to go.

But I’m now fighting a problem of separating content and presentation. There are different rules for formatting CVs and resumes, and I’ve done the wrong thing previously: I’ve copied and modified sections like employment history in order to change how the presentation happens. This is bad, because now any time I change something in my employment history, I need to make sure that every relevant copy gets changed. I needed some way to make it so there would be one and only one place where each piece of information would be kept, and apply that to different pieces of presentation code in the $latex \LaTeX$ source.

The solution I found today is the datatool package for $latex \LaTeX$. This is a package that allows one to generate, read, and manipulate data stored in CSV (comma-separated values) files. There is a lot of functionality in the package that I’m not using yet, but the ability to get data out of CSV and format it as needed is a big step forward for me.

I’ve created two CSV files so far, one to hold my education data and another to hold my employment data. The CSV files have more columns than will often go into an output format. For example, my education CSV has columns for my advisor name and my thesis title, even though those don’t appear anywhere in an output yet. This will allow me to keep all associated data together, whether or not it is currently used. Previously, I simply used comments to add this kind of information close to what it relates to in my $latex \LaTeX$ source.

I’m using various sources of good resume formatting to get ideas. Here’s the code to show my three degrees:

  1. \usepackage{datatool}
  2.  
  3. [...]
  4.  
  5. \def\dtledu{
  6. \vskip 0.125in  
  7. \dtlverbosetrue%
  8. \DTLloaddb{edu}{wreeducation.csv}%
  9. \DTLforeach{edu}{%
  10. \graddate=GradDate,\degree=Degree,\major=Major,\university=Institution,\place=Location}{%
  11. \noindent\textbf{\degree, \major:} \graddate, \university, \place\\
  12. }
  13. }

The “usepackage” line happens in the header. The “datatool” commands are only valid within the bounds of a document environment. I’m defining a macro “dtledu” to use in conditional statements. Within the macro, I skip an eighth of an inch down the page. I set the “datatool” package to emit a lot of debug information. The “DTLloaddb” command actually pulls in the contents of a CSV file. I first tried to use tab-delimiting, which is covered in the “datatool” documentation, but I couldn’t get it to work. I eventually went with all the default formatting: commas for delimiters, and double quotes for quoting fields. That means that any field containing a comma must go in double quotes.
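For concreteness, here is the sort of thing wreeducation.csv might contain. The column names match the assignments in the “DTLforeach” block, but the entries below are made-up placeholders, not my actual degrees:

```
GradDate,Degree,Major,Institution,Location
"June 1992",B.S.,Zoology,"Example State University","Anytown, FL"
"May 1995",M.S.,Zoology,"Example State University","Anytown, FL"
"May 2001",Ph.D.,"Computer Engineering","Example Tech","Othertown, TX"
```

Note how the fields containing commas, like the locations, are wrapped in double quotes so the default delimiter handling parses them as single fields.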

The actual work happens in the “DTLforeach” command. It uses the data that was read in. One line holds the assignments from data from columns to macros. Then a block appears where I can use those macros in conjunction with $latex \LaTeX$ markup. Each line from my CSV is iterated over and formatted as I’ve defined it.

So this gives me a way to keep one place where my education information exists, and just one place for my employment information to exist. That information can be read in and formatted in different ways as needed for getting just the right output I’m looking for.


ID and Science in the Dover Decision

I ran across a link to a blog post from 2007 by Jeff Shallit. One of the commenters there took exception to Jeff’s statement that the KvD case was primarily about religion, noting that a lot of the decision in the case discusses science. I was five years late to the party, but I felt that I needed to put my two cents in:

Sorry to have come across this so late. “analyysi” objects to the idea that the issue in Kitzmiller v. DASD was establishment of religion, saying that the decision discusses the topic of science a lot.

“analyysi” may be unfamiliar with the law here in the USA. The basis for the complaint in KvD was indeed the establishment clause of the First Amendment. The legal history will clarify why science is discussed at length in KvD. The Epperson v. Arkansas SCOTUS decision declared that one cannot prevent the teaching of science to privilege particular religious accounts, and that science instruction has a valid secular purpose. Since Epperson, the religious antievolution movement has proceeded with a variety of dishonest efforts to characterize the same old arguments they usually make as science, and to aid in this they offer new definitions of science. If they could convince a court that what they offer up for inclusion in a classroom is science, they would then have demonstrated a valid secular purpose in having it taught. And so in the KvD case you had the defense present lots of testimony from expert witnesses claiming that “intelligent design” was, indeed, scientific in character, at least as long as you allow them to also tweak the definition of science.

There are people who like to claim that Judge Jones could have completely ignored all the arguments made by the defense that ID was science and by the plaintiffs that, no, it wasn’t. I think the decision would have been weaker if it failed to address an issue that both parties considered central to the suit. The reason that a discussion of the nature of science and whether ID meets criteria to be recognized as science appears in the decision is that both parties made it an issue and prior precedent made whether something is science an issue for determining whether something has a valid secular purpose in being taught. The point in law being addressed is still establishment of religion while the particular instance of argument concerned ID’s lack of status as science.

Hope that clears that up for “analyysi”.


A Brief Monty Hall Problem Digression

At the Spoonbill Bowl on Saturday, I was privileged to volunteer with a group of students, faculty, and researchers. It was a long day. Lunch was provided, and I got to sit down with a colleague and a couple of faculty members from USF St. Petersburg. One of them posed a brain-teaser question. I followed up by broaching the Monty Hall problem.

Just to make sure everyone is on the same page, I’ll briefly describe the Monty Hall problem. In the television game show, “Let’s Make a Deal”, host Monty Hall would offer a contestant an opportunity to win a major prize, let’s say a new automobile. The stage would show three doors (“Door #1”, “Door #2”, and “Door #3”). The major prize is behind one of the doors. Behind the other two are booby prizes, let’s say that they are goats. The contestant is allowed to pick a door. Rather than simply opening the contestant’s pick, Monty would have a door the contestant did not pick opened to reveal a goat. Then, Monty would offer the contestant a choice: she could stay with her original pick, or she could switch to the other door that had not been opened.

The Monty Hall problem poses the question of strategy: Is it better to always stay with the original choice, to always switch to the other remaining door, or does it not matter one way or the other? This question was posed many years ago in a column hosted by Marilyn vos Savant, and it gave her months of correspondence as people argued with her advice to always switch. Marilyn was right, of course. The problem, and just how counterintuitive its result is, has proven a popular topic since then; Jason Rosenhouse has even authored a book about it.

Back to my luncheon discussion. My bringing up the Monty Hall problem led to about twenty minutes of trying to explain to one of my lunch companions why always switching was the right choice. It was a microcosm of the entire public history of the problem, and I found it frustrating that I wasn’t able to put it more clearly and simply so that my companion could be convinced of the correctness of the answer. What finally made sense to my companion was that if one enumerated all the permutations, staying won in one-third of them, and switching won in two-thirds of them.
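That enumeration argument is easy to check in a few lines of Python. Here is a quick sketch of my own (not anything produced at the lunch table) that tallies all nine equally likely combinations of car location and first pick:

```python
from itertools import product

# Tally the nine equally likely (car location, first pick) combinations.
# Staying wins only when the first pick already hit the car; otherwise
# Monty's reveal leaves the car behind the one remaining door, so
# switching wins.
stay_wins = switch_wins = 0
for car, pick in product(range(3), repeat=2):
    if pick == car:
        stay_wins += 1
    else:
        switch_wins += 1

print(stay_wins, switch_wins)  # 3 of 9 for staying, 6 of 9 for switching
```

The tally comes out 3 to 6, which is exactly the one-third versus two-thirds split that finally convinced my companion.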

So I decided that I would make up a set of business cards to make future discussions of the Monty Hall problem go faster. Here is my graphic:

While I can’t include all the text that I would like on something the size of a business card, I can use this to quickly demonstrate why switching is actually the better strategy. The card shows all nine possible ways that the game can be played. It also shows that in only three of those does staying with the initial pick work out to a win for the contestant. In the other six ways the game can work out, the contestant wins only by switching.

I think I’ll put a version on a T-shirt.

Update: During lunch today, I tested out my card as a tool on a Monty Hall Problem-naive colleague. Her initial hunch was that staying with the initial choice was the strategy to pursue. I said that I would try to convince her that switching was the correct strategy and produced a card. I pointed out that every possible way the game could go was represented, and in only the top row did staying work out to a win. Within two minutes, I had convinced her of the correctness of the switching strategy. So that’s one data point.

Also, I’ve updated the graphic here. I’ve changed the color scheme. Diane pointed out that it would be hard for color-blind people to distinguish differences in the original. I’ve also added door numbers to make it clearer that each block of three rectangles represents one set of doors. And I added drop shadows to the doors just because I think it looks better that way.
