Monthly Archive: May 2010
Photography Wesley R. Elsberry on 20 May 2010
Word was that the “House” finale that aired this past Monday was shot on a DSLR with 1080p HD video.
When the shots had deep focus, all looked well. However, whenever there were large regions of darker bokeh, it was obvious that some pretty serious quantization was going on. I’m not sure what needs adjustment in the video capture, but it looks like digicams still have some catching up to do with regular video gear.
Now all I need is someone to tell me that, no, those scenes weren’t done on the DSLR…
Computation Wesley R. Elsberry on 20 May 2010
Following up on a comment from Dick Hoppe, I expanded upon the data compilation I wrote about earlier concerning the Manatee County 2010 Tax Certificate Auction. Now I’m pulling in data from three additional pages and have it all tidily summarized in the resulting comma-delimited CSV file. I made a short demo CSV file with three of the entries so people could pull it into a spreadsheet and see how it works. I made a page to explain what I had and why an investor ought to want to have it here, and that includes PayPal links for people to pick either the MS-DOS/Windows or the Unix/Mac OS X version.
My biggest problem is that there is a small market for this, and I don’t really have a good way to make prospective buyers aware that there is an alternative to doing all their information look-ups manually. I tried posting to Craigslist, but all the responses I’ve gotten so far are spam.
Anybody else have experience with time-limited, targeted market information compilation marketing?
Manatee County offers tax certificates to bidders. When property owners fail to pay their taxes, and that is happening a lot right now, the county gets other people to pay the taxes and gives them a tax certificate, which is a lien against the property. Each year, an auction happens where people can bid to get these. The bid amounts are in percent interest, and range from 18% at the high end down to 0%. The person bidding the lowest percent interest gets the tax certificate, after, of course, they pay the county the outstanding taxes.
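The bidding rule above amounts to a reverse auction on the interest rate. A minimal sketch of it in Python, with made-up bidder names and rates (real auctions also have tie-breaking rules not shown here):

```python
def winning_bid(bids):
    """bids maps bidder -> offered interest rate in percent (18 down to 0).

    The lowest offered rate wins the certificate; the winner then pays
    the county the outstanding taxes and holds a lien accruing interest
    at that rate.
    """
    valid = {bidder: rate for bidder, rate in bids.items() if 0 <= rate <= 18}
    if not valid:
        raise ValueError("no valid bids")
    return min(valid.items(), key=lambda item: item[1])

# Hypothetical example: bidder "C" wins with the lowest rate.
bidder, rate = winning_bid({"A": 18.0, "B": 5.25, "C": 0.25})
```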
Today, there was a practice auction. This is all handled online now. The page included the option to download data on the 9,000+ properties in CSV, XLS, or XML formats.
Diane is interested in the process and specifically in the land just to the south of our property. It currently has unpaid taxes, and if the executors of the former owner’s estate don’t pay up by June 1st, it will be included in the tax certificate auction. But she is also interested in what else is available out there.
That brings up an interesting problem. The downloaded data is minimal, giving just a parcel ID, outstanding tax balance, and some auction-related attributes. On the other hand, Diane would like information that is available online from another county office, that of the Property Appraiser.
I worked on a Python script to handle the job of getting additional information on acreage, zoning, the address, and bits like that. I hadn’t done anything with Python regular expressions to date, and started looking at that and getting less enthused by the minute. The issue is getting data out of an HTML page downloaded from the Property Appraiser. I could have it done in Perl right offhand, but wanted to develop my Python skills a bit.
On the other hand, getting the job done is the top priority, so while looking stuff up, I ran across the BeautifulSoup module for Python. The web site sounded promising, and a number of other people seemed to have found it useful. Very useful.
BeautifulSoup is an HTML/XML parser. It aims to not only handle clean XHTML, but also to do reasonable things with the sort of HTML people were writing when the Web was young, in other words, bad HTML.
I downloaded the module distribution and got it uncompressed. Setup is simply

python setup.py install
My usage so far is to pluck values out of adjacent cells in a table. I can load a BeautifulSoup object with the HTML in question, then ask it to find the label I’m looking for in text. Then I just ask it to retrieve the next text in the document, and that is the stuff I’m looking for.
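That pattern, sketched with the current bs4 package (the post used the 2010-era BeautifulSoup 3 module, whose calls were spelled slightly differently) and a made-up HTML fragment standing in for a profile page:

```python
from bs4 import BeautifulSoup

# Made-up fragment standing in for a Property Appraiser profile page.
html = """
<table>
  <tr><td>Acreage</td><td>2.50</td></tr>
  <tr><td>Zoning</td><td>A-1</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
# Find the label's text node, then grab the next text node in the document:
# that next text is the value sitting in the adjacent cell.
label = soup.find(string="Acreage")
acreage = label.find_next(string=True).strip()
```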
Anytime one starts with a new library, it can take a while to get going. BeautifulSoup let me get the job done without spending much effort on the initial learning curve. Right now, my script is about halfway through getting the additional data wanted for those 9,000+ properties. We’ll be able to look it over in the morning. The whole script is less than a hundred lines of code: it reads in a CSV file, traverses it, gets the associated profile page from the Property Appraiser for each property, parses that with BeautifulSoup, adds the additional fields of info to the original, and writes out a new CSV file with the more complete data set.
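The pipeline just described might look something like this sketch. The URL template, the "ParcelID" column name, and the field labels are hypothetical stand-ins, not the actual Property Appraiser site:

```python
import csv
from urllib.request import urlopen

from bs4 import BeautifulSoup

# Hypothetical URL template; the real Property Appraiser URL differs.
PROFILE_URL = "https://example.invalid/parcel/{pid}"
EXTRA_FIELDS = ("Acreage", "Zoning", "Address")

def extract_fields(html):
    """Pull the text following each label cell out of a profile page."""
    soup = BeautifulSoup(html, "html.parser")
    fields = {}
    for name in EXTRA_FIELDS:
        node = soup.find(string=name)
        if node is not None:
            fields[name] = node.find_next(string=True).strip()
    return fields

def enrich_csv(in_path, out_path, fetch=None):
    """Read the auction CSV, add appraiser fields, write a new CSV."""
    if fetch is None:
        fetch = lambda pid: urlopen(PROFILE_URL.format(pid=pid)).read()
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=list(reader.fieldnames) + list(EXTRA_FIELDS)
        )
        writer.writeheader()
        for row in reader:
            # "ParcelID" is an assumed column name in the downloaded CSV.
            row.update(extract_fields(fetch(row["ParcelID"])))
            writer.writerow(row)
```

Passing `fetch` as a parameter keeps the scraping step swappable, so the parsing can be exercised without hitting the live site.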
I left a comment on the opinion letter of one Greg Swank, M.D. Dr. Swank gave a Gish Gallop and finished up with an argument from authority.
My background is in the medical field and I find it interesting that from a science background I am defending Intelligent Design as a scientific probability, while Rev. Ward defends evolution.
So here’s my bit, just fitting into the character limit on comments there:
Dr. Greg Swank has overlooked some information. The objections that he notes in his letter, plus the hundreds more he didn’t have space for, are listed — and rebutted — in Mark Isaak’s “Index to Creationist Claims”. This resource is available online at http://talkorigins.org/indexcc.
Evolutionary science is rather more productive than Swank admits, being an indispensable part of even medical research. It shapes our best knowledge on why indiscriminate use of antibiotics is bad, leading to today’s “superbugs” like MRSA, and why developing a vaccine for HIV is hard work, since HIV evolves so quickly. It also contributes to agriculture. The Soviet Union rejected evolutionary science and adopted Lysenkoism, leading to decades of crop failures and famine. When China adopted Lysenkoist antievolution in 1958, they went from grain surpluses to a famine that killed tens of millions of people.
Evolutionary science is worth learning about and teaching.
Wesley R. Elsberry, Ph.D.
Update: I added another comment attached to Swank’s letter:
Dr. Swank advances “intelligent design” (ID) as a “scientific probability”. But the transcript of the 2005 “Kitzmiller v. DASD” trial in Pennsylvania plainly showed even the ID advocates admitting in sworn testimony that for ID to be considered as science, one must use a definition of science broad enough that astrology would fit the bill, too.
The transcripts are at http://www.talkorigins.org/faqs/dover/kitzmiller_v_dover.html
When it comes to ID-speak about probability, I can go Dr. Swank one better: I have published articles in the technical literature on exactly this topic. Versions of those are available online at http://www.talkdesign.org/faqs/theftovertoil/theftovertoil.html and http://www.antievolution.org/people/wre/papers/eandsdembski.pdf
ID is simply a form of religious antievolution, not science. Its content is entirely a subset of the arguments advanced before under the “creation science” label. ID is a sham to evade contrary legal decisions.
Wesley R. Elsberry, Ph.D.
Update: Other commenters say that Swank is not an M.D. I posted another comment apologizing for having given Swank more credit than someone merely “in the medical field” deserves.