I saw the NCIS episode “One Shot, One Kill” and noticed a blooper. Maybe that’s not that cool, but this particular blooper requires knowing something about acoustic localization. This is the technology that lets marine mammal researchers pinpoint the positions of vocalizing whales, and that lets police departments estimate where gunshots have been fired. The basic idea is that one places a bunch of sound transducers in known positions, and one can, with the aid of a bunch of math and computer power, estimate where a sound originated.
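To make the basic idea concrete, here is a minimal sketch of time-difference-of-arrival (TDOA) localization. The square four-mic array, the sound speed, and the brute-force grid search are all my own illustrative choices, not how any deployed system actually works:

```python
import numpy as np

C = 343.0  # rough speed of sound in air, m/s

# Hypothetical square array of four microphones, 100 m on a side
mics = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)

def tdoas(p):
    """Arrival-time differences (seconds) relative to microphone 0."""
    d = np.linalg.norm(mics - p, axis=1)
    return (d - d[0]) / C

def localize(measured, extent=200.0, step=1.0):
    """Grid search for the point whose predicted TDOAs best match
    the measured ones, in the least-squares sense."""
    xs = np.arange(-extent, extent + step, step)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)       # candidate positions
    d = np.linalg.norm(pts[:, None, :] - mics[None, :, :], axis=2)
    pred = (d - d[:, :1]) / C                            # predicted TDOAs
    err = np.sum((pred - measured) ** 2, axis=1)
    return pts[np.argmin(err)]

true_source = np.array([37.0, 62.0])
print(localize(tdoas(true_source)))                      # recovers (37, 62)
```

Real systems use smarter solvers than a grid search, but the structure is the same: a forward model predicting arrival-time differences, and a fit of source position to the measured differences.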
In the case of marine mammal researchers, Whitlow Au and the research group at the University of Hawaii have had a four-hydrophone array, where the hydrophones are arranged in a tetrahedral shape, and the whole thing is a bit over a meter across, IIRC. With a sufficiently fast simultaneous-sampling data recorder, they can get a reasonably good bearing and range estimate on a whale or dolphin.
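A small tetrahedral array like that can give a bearing from three time differences. Below is a sketch of the far-field version of that calculation; the array dimensions and source position are made up, and this is my illustration, not the Hawaii group’s actual processing:

```python
import numpy as np

C = 1500.0  # rough speed of sound in seawater, m/s

# Hypothetical regular tetrahedron of hydrophones, ~1.4 m on an edge
mics = 0.5 * np.array([[ 1,  1,  1],
                       [ 1, -1, -1],
                       [-1,  1, -1],
                       [-1, -1,  1]], dtype=float)

def tdoas(source):
    """Three arrival-time differences relative to hydrophone 0."""
    d = np.linalg.norm(mics - source, axis=1)
    return (d[1:] - d[0]) / C

def bearing(t):
    """Far-field bearing estimate: for a distant source, the delay across
    a baseline b is approximately -(u . b)/C, where u is the unit vector
    toward the source.  Three baselines give a 3x3 linear system."""
    M = mics[1:] - mics[0]          # baselines from hydrophone 0
    u = np.linalg.solve(M, -C * t)
    return u / np.linalg.norm(u)

src = np.array([300.0, 200.0, -150.0])  # an animal a few hundred meters out
print(bearing(tdoas(src)))              # close to the unit vector toward src
```

Range is the harder part: with a one-meter array, range comes from the slight curvature of the wavefront, which is why the fast, precisely simultaneous sampling matters.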
For the police, there have been installations of microphones in several cities. First, software monitoring the microphones detects a gunshot. Second, another program delivers a localization estimate. Some of these are claimed to be accurate to about 80 feet. Given the reverberant qualities of sound propagation in the urban environment, that’s either a testament to amazing skill on the part of the engineers, or amazing BS on the part of the marketers.
So if gunshots can be acoustically localized, what was the problem with NCIS showing use of the technology? It came in the form of having the goth forensics guru character, Abby Sciuto (played by Pauley Perrette), showing a graphic on a computer monitor supposedly giving the result of the localization for a gunshot. The graphic showed a linear array of three microphones and three straight lines running through the estimated shooter’s position and each of the microphones. Nice, simple to grasp, and wrong.

First, acoustic localization gives you one sheet of a hyperboloid as the solution for a time-of-arrival difference between any pair of sound transducers. The estimated location is going to be at the places where multiple hyperboloids intersect. Even if one simplifies things to a more-or-less 2D solution, there isn’t much call for showing a straight line when graphing an acoustic localization.

Second, anyone worth a flip doing acoustic localization for a known sniper situation isn’t going to deploy just three microphones relatively close to each other, and certainly not with them in a linear array. The best situation for acoustic localization is to have the sound source within one’s array of microphones. If you have to deploy a small number of microphones and can’t get a long baseline, staggering them so there is no straight line through the positions is going to help. With a symmetrical situation like the line of three microphones, localization gets poorer the closer a source is to being on that line. (On the line, there is no localization of a source outside the microphones; time of arrival will tell you on which side of the array the source is, but whether it is eight feet or 800 yards from the outside microphone isn’t going to be determined by time-of-arrival differences.) Using a triangle in a 2D situation or a tetrahedron in 3D is going to work out better.
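That collinear degeneracy is easy to demonstrate numerically. In this sketch (mic spacing and source positions are made up for illustration), two on-axis sources at wildly different ranges produce the same arrival-time differences at a linear array:

```python
import numpy as np

C = 343.0  # rough speed of sound in air, m/s

# Hypothetical linear array: three microphones on the x-axis, 10 m apart
mics = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])

def tdoas(source):
    """Arrival-time differences (seconds) relative to the first microphone."""
    d = np.linalg.norm(mics - source, axis=1)
    return (d - d[0]) / C

near = tdoas(np.array([ 22.0, 0.0]))  # 2 m past the last mic, on the line
far  = tdoas(np.array([750.0, 0.0]))  # 730 m past the last mic, on the line
print(near, far)   # same TDOAs: no range information along the line
```

The sign of the differences does tell you which side of the array the shot came from, but nothing about distance along that axis. Move any one microphone off the line and the two sources produce distinguishable TDOAs, which is the point about triangles and tetrahedra.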
It makes for an interesting question: what would the best placement be if one were planning to do acoustic localization of a sniper with only three mics, but knew where the target was, and knew that the only distant approach came from one side of the target’s building? Offhand, I’d put one mic on the line normal to the target’s building, either on the building or 50 to 100 yards out in front of it. The other two would go out about 300 yards to either side and a total of about 1,200 yards from the building. That would help make it more likely that a sniper spot is within the triangle formed by the three mics.
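Whether a given hide falls inside that triangle is a quick coordinate-geometry check. The coordinates below are one reading of the layout above (target building at the origin, the distant approach along +y); the exact numbers are my assumption for illustration:

```python
import numpy as np

# One reading of the proposed layout, in yards (assumed, not measured)
mic_front = np.array([   0.0,   75.0])  # on the normal, ~50-100 yd out
mic_left  = np.array([-300.0, 1200.0])  # ~300 yd to one side, ~1,200 yd out
mic_right = np.array([ 300.0, 1200.0])

def inside_triangle(p, a, b, c):
    """True if point p lies strictly inside triangle abc:
    the cross products of p against all three edges share a sign."""
    def cross(u, v, q):
        return (v[0] - u[0]) * (q[1] - u[1]) - (v[1] - u[1]) * (q[0] - u[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 > 0) == (s2 > 0) == (s3 > 0)

# A hide ~600 yd out and slightly off-axis falls inside the triangle
print(inside_triangle(np.array([80.0, 600.0]), mic_front, mic_left, mic_right))
```

A position well beyond the two outer mics, or between the building and the front mic, falls outside the triangle, and localization quality degrades accordingly.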