Sunday, March 15, 2009

Stereo Microphone Technique and Human Hearing

Far too many audio engineers simply throw up a pair of microphones in a common stereo position when they feel the situation calls for it, without considering why we use stereo techniques or how those techniques were derived in the first place. From drum kits to orchestras, we can't get enough of the stereo illusion we can create.

In order to create unforgettable stereo recordings, it is probably best we understand how our ears perceive and localize sounds. With just two ears, our brilliantly designed hearing system can accurately pinpoint the location of any sound, whether it is behind, in front of, above or below us. The ears themselves actually do little of the work; most of the credit belongs to the brain.

Three factors are taken into consideration by our brain when we hear a sound and need to decide where it is coming from. Firstly, our brain detects what is known as "inter-aural time difference". This is the delay between when a sound arrives in one ear and when it reaches the other: a sound that emanates from the left of a person will arrive in their left ear before the right, simply because sound travels at a finite speed (roughly 343 metres per second in air).
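To put a number on that delay, here is a rough Python sketch using Woodworth's classic spherical-head approximation. The head radius is a textbook average, and the whole model is illustrative rather than a measurement of any real head.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate inter-aural time difference for a distant source at the
    given azimuth (0 = straight ahead, 90 = fully to one side), using
    Woodworth's spherical-head model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} degrees off centre -> {itd_seconds(az) * 1e6:3.0f} microseconds")
# A source hard to one side arrives roughly 0.65 ms earlier at the near ear.
```

Even the biggest possible difference is well under a millisecond, yet the brain resolves it effortlessly.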

Secondly, the brain will notice that the sound is louder in one ear, obviously the ear closest to the sound source. To test the theory, feel free to fire a shot close to your ear; Kurt Cobain was a budding audiologist, did you know? Finally, our brain will notice that the sound is slightly duller in the ear furthest from the source. This is because, as we know, high frequencies are easily absorbed by practically anything, and in this case they are being absorbed by our head before they reach the far ear.
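Before the recap, here is a toy Python sketch that applies all three cues to a mono signal at once. The cue sizes (up to 0.65 ms of delay, up to 6 dB of level difference, and a crude one-pole low-pass standing in for the head shadow) are my own illustrative numbers, not measured head-related data.

```python
import numpy as np

def pan_binaural(mono, azimuth_deg, sr=44100):
    """Toy binaural pan: apply time difference, level difference and a
    high-frequency shadow to a mono signal. Positive azimuth = source to
    the listener's right. Returns (left, right) channel arrays."""
    frac = min(abs(azimuth_deg), 90) / 90.0      # 0 = centre, 1 = hard side
    delay = int(round(0.00065 * frac * sr))      # up to ~0.65 ms later at far ear
    far_gain = 10 ** (-6.0 * frac / 20)          # up to ~6 dB quieter at far ear

    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * far_gain

    # Head shadow: a crude one-pole low-pass dulls the far ear's top end.
    a = 0.4 * frac
    for i in range(1, len(far)):
        far[i] = (1 - a) * far[i] + a * far[i - 1]

    return (far, near) if azimuth_deg >= 0 else (near, far)

# e.g. a burst of noise panned 45 degrees to the right:
left, right = pan_binaural(np.random.randn(4410), 45)
```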

So, to recap that little section: a listener who hears a sound on their right hears a louder, brighter sound in their right ear, shortly (and I mean fractions of a millisecond) before hearing a slightly quieter, duller version in their left. Almost in real time, the brain puts this jigsaw together and the listener knows where the sound is coming from, which is incredible considering that at the same time it is dredging up memory to realize what the sound is, forming an opinion to judge it, and firing nervous reactions to respond to it. OK, the part we've been waiting for: I'll explain what all this has to do with stereo microphone technique.

For the most part, when using two microphones in stereo we're attempting to replicate a natural stereo spread, the way we would hear the instrument or ensemble if we were to stand in front of it. By understanding how our ears perceive and localize sound, we're a few steps closer to being better at this replication. Common stereo techniques employed by just about every audio engineer out there will replicate one or two of the three factors mentioned earlier, but very few can mimic all three. Crossed cardioids (XY), for example, will translate amplitude difference but not time difference, as the two capsules are located in pretty much the same position in the air. Spaced pairs (AB), on the other hand, can emulate arrival-time differences and amplitude differences, but don't replicate the spectral change caused by high-frequency absorption.
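A quick sketch makes the contrast concrete. The 90-degree included capsule angle and half-metre spacing below are common starting points I've assumed for illustration, not fixed rules.

```python
import math

def xy_gains(source_deg, capsule_angle_deg=45):
    """Crossed cardioids: each capsule's level follows its polar pattern,
    gain = 0.5 * (1 + cos(angle between source and capsule axis)).
    The capsules are coincident, so there is no time difference at all."""
    left = 0.5 * (1 + math.cos(math.radians(source_deg + capsule_angle_deg)))
    right = 0.5 * (1 + math.cos(math.radians(source_deg - capsule_angle_deg)))
    return left, right

def ab_delay(source_deg, spacing_m=0.5, speed_of_sound=343.0):
    """Spaced pair: for a distant source the path-length difference is
    spacing * sin(azimuth), giving an arrival-time difference in seconds."""
    return spacing_m * math.sin(math.radians(source_deg)) / speed_of_sound

src = 30  # source 30 degrees right of centre
print("XY gains (L, R):", tuple(round(g, 2) for g in xy_gains(src)))
print("AB delay:", round(ab_delay(src) * 1000, 2), "ms later at the left mic")
```

In short, XY encodes direction purely as a level difference between channels, while AB encodes it mostly as a timing difference, much like our two ears do.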

The only way to achieve that third factor is to place something with a similar density and texture to a human head between your microphones. I prefer to use a real severed head, but short of that you can use some high-density foam gaffed up into a head-like ball (sticky eyes optional), or get your hands on that Dummy Head microphone by Neumann. That mic was actually used on the latest Radiohead record on piano and voice on some songs, and whilst it doesn't sound overly stunning on speakers, do yourself a favour and have a listen to those tracks on headphones…brilliant! The reason it doesn't sound amazing on speakers throws up a bit of a conundrum, and you'll need to consider it on your next stereo recording.

Whilst capturing a recording that takes into account all three of the abovementioned factors produces exceptional results for headphone listening, it works less well on speakers. This is because once the sound leaves the speakers for the listener, we experience all three factors again. Sounds that were captured louder in one microphone will again be louder leaving one speaker, sounds that took longer to reach one microphone will again take longer to reach one of the listener's ears, and sounds that were duller in one microphone will be dulled again by the listener's head. As you are probably gathering, this secondary processing doesn't matter too much when it comes to time difference and amplitude difference, but two rounds of spectral difference only results in an underachieving stereo image.
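The arithmetic behind that "two rounds" problem is straightforward: attenuation in decibels adds when the same filtering happens in series. Here's a tiny sketch modelling the head shadow as a one-pole low-pass; the 1.5 kHz cutoff is purely illustrative.

```python
import math

def one_pole_lp_mag(f_hz, fc_hz):
    """Magnitude response of a one-pole low-pass at frequency f, cutoff fc."""
    return 1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2)

fc, f = 1500.0, 8000.0            # assumed shadow cutoff; test frequency 8 kHz
once = one_pole_lp_mag(f, fc)     # shadow applied at capture (dummy head)
twice = once ** 2                 # same shadow applied again at playback
print(f"8 kHz after one pass:   {20 * math.log10(once):6.1f} dB")
print(f"8 kHz after two passes: {20 * math.log10(twice):6.1f} dB")
# The loss in dB doubles: a plausible dulling becomes an overdone one.
```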

So what can we take away from this? Primarily a better understanding of how common stereo microphone techniques relate to the workings of our ear and brain, and secondly, a better idea of when to choose a particular technique. Radiohead primarily distributed their latest music as a download, so most people who obtained it would probably listen on headphones. In this case, using the Dummy Head microphone was a wise choice, because most listeners will enjoy the incredible realism of the technique. On a record destined to be listened to primarily on speakers, maybe XY or AB would be a better choice. Think about it next time you whack a stereo pair above that drum kit.
