Wednesday, April 28, 2010
Mix Technician or Mix Artist?
Tuesday, April 27, 2010
NF Audio + Earthling Designs: Looking out for number one

So I've reached one of life's many crossroads: sleepless nights, small numbers in the bank account, and looking around the living room for things I could put on eBay to buy lunch. I could get a job making coffee in a flash. I spent the majority of my years at university doing it for a crust, and there is work everywhere for those who can do it. Yet I am too proud. The day I was able to work in professional audio alone and leave my part-time "shit-kicker" job behind was one of the best of my life, and walking back into a job like that would feel like a major step backwards in my career.
What I am essentially saying is that everyone who does anything in this music game wants to make a living from it, and who can blame them? It's great! So one night I said to myself, "fuck it, I'm a sound engineer. I'm not going back to a crap job for crap pay just to survive until my work picks up again," and began to plan how I could pull this off.
Which brings us to the point of this blog... where NF Audio is now. As you can see from the new banner at the top of the blog, I've got new digs (seems like I'm moving studios more often than I'm using them, I know). The new place is shared with Earthling Designs, which is sweet by me, as I can get him to build me cool stuff and I can record his cool stuff. It's called symbiosis or something, I think.
In a drastic career change I've actually put my skills with the (soldering) iron to use, and even stranger, it's actually working! Enter the new branch of NF Audio: NF Audio Pro Audio Products. I've been building reamping and direct inject units (see pictures) and they're selling like hotcakes. Brilliant!
Of course I am still doing lots of audio work. I'm mixing an EP for the Little Lovers in the coming weeks and next month recording the fantastic Bill Gibson singing on Ryan Elsemore's (Scruffs, The Wake Ups, The Stiffies) solo album.
So the moral of the story is that you have just read a self-indulgent rant about me. No, seriously, it's this: if you want to be something, a musician, an artist, an engineer, just decide that that's what you are and go do it. You'll find a way.
While you're hanging about, go check out the new website too: http://www.nfaudio.com/
Cool,
Nick.
Thursday, March 18, 2010
RIP Alex Chilton
Monday, September 14, 2009
Solo Record
Monday, August 10, 2009
Sugar, sit a while.
- Digidesign Pro Tools Rig 16 in, 16 out.
- Allen & Heath 24-channel all-analogue mixing console
- Tascam MS-16 Analogue tape machine
- GSSL Stereo Mix Buss Compressor
- Earthling Designs Green Pre
- Joe Meek TwinQ
- Neumann W193 Mono Parametric EQ
- Focusrite Dual Channel Strip
- Evans Analogue Delay
- Alto CL2 Compressor Limiter
- Waves Mercury Bundle Plugs (who cares?)
Thursday, March 19, 2009
Recording Eush - Pt 1: Live Tracking
I recently went with Sydney band “Eush” into the studio to record two songs, which I would also eventually mix. Having recorded them before and seen them play live, I knew what to expect, but that’s not to say nothing unexpected arose during the session! These sessions were amongst the most fun I’ve had recording a band, but at times, equipment failure and time constraints became frustrating. I’ve documented the recording session in this blog so that if you like any of the sounds on the recording, you can have a go at recreating them. Assisting me on the tracking sessions was Pete Kossen.
The first session was dedicated to tracking the two songs live with the whole group playing together. Eush is a three piece, consisting of James Waples on drums, Sean Van Doornum on guitar and vocals and Nick Hoorweg. Each member is a talented player; in fact James and Nick are amongst the busiest players in Sydney when it comes to Jazz. Eush however are certainly not a Jazz band, hence I didn’t approach these sessions like a Jazz recording at all.
Setting up the band to play live in the small space was a challenge, but by no means impossible. We started by miking the drum kit at one end of the live space. James uses a very minimal setup, which always makes the engineer's life a dream! Using just one mic on the kick, one on the snare and a pair of overheads, we achieved a solid drum sound very quickly. Normally on a small, jazzy kick drum like James's I'd have used a soft, round-sounding microphone, but the first track had a very sharp, percussive rhythm to it, so I instead emphasized the beater slap using an Audix D6, which has a naturally scooped frequency response. It was funny, actually: one of the assistants from the studio walked by while we were getting a drum sound and said “D6”, to which we replied “Correct!” This goes to show the instant recognition of this microphone's sound!
[Image: SCX25 frequency response]

[Image: D6 frequency response]

The snare grabbed some attention from a Shure SM57 (I know, boring), and the overheads were covered by a pair of Audix SCX25s. These mics are relatively uncommon, but I find their naturally rolled-off top end flattering on a jazzier-sounding kit. The first comical moment of the session came here, when James found a cymbal that had the word “ping” scrawled onto it in permanent marker. Upon setting it up for giggles, we realized that's exactly the sound it made. It ended up making it onto the tracks and you can probably work out which one it is.
With the drums sorted, we ran the bass through an Electro-Harmonix valve DI/mic pre known as the MP-1. The box belonged to the studio, and I've not really been able to find much info on the unit. I wanted to get my hands on one or build a clone because it sounded really nice, but no schematics or info seem to be floating around. Might have to open it up one day, take some pictures and get Lachlan Colquoun to do a bit of reverse engineering. I'd have liked to put a mic on a bass amp, but because of the limited space it wasn't really an option. We could always reamp the bass later if need be.
Sean’s guitar amp was placed in the adjacent booth, with a tie line running from the main space so he could stand with the rest of the band, which, by the way, was very important. Never encourage band members to play in separate rooms or overdub their parts just for the sake of making your job in the mix easier. We miked the amp (a Fender Deluxe) with an SM57, about 4 inches from the grille. After a quick listen I had Pete rotate the microphone around the capsule until it took the bite out of the high end.
With this set up the band rolled about 5 or 6 takes and we chose the best one. Not the one with the fewest technical errors, but the one with the best feel and groove. Technical errors can often be corrected, or add character if you just leave them, but there is no way of injecting fake groove into a tune.
The second song featured upright bass instead of the electric. We took both a DI and a microphone signal from the bass, which sounded great with the right blend between the two. Microphone of choice was a Neumann U89, which captured the growl and tone of the bass fantastically. We placed a gobo between the bass and drums to minimize the spill, but provided the band pulled off the take together, spill wouldn’t be much of a problem.
Again we rolled about 5 takes of the song and selected the better one. We wrapped up the first session here, but not before backing up our session to a few sources!
Go make a record.
Sunday, March 15, 2009
Stereo Microphone Technique and Human Hearing
Far too many audio engineers simply throw up a pair of microphones in a common stereo position when they feel the situation calls for it, without consideration or an understanding of why we use stereo techniques, and how the techniques were derived in the first place. From drum kits to orchestras, we can’t get enough of the illusion we can create that is stereo.
In order to create unforgettable stereo recordings, it is probably best we understand how our ears perceive and localize sounds. Using just two ears, our brilliantly designed hearing system can accurately pinpoint the location of any sound, whether it is behind, in front, above or below us. The ears themselves actually have little to do with it; most of the hard work is done by our brain.
Three factors are taken into consideration by our brain when we hear a sound and need to decide where it is coming from. Firstly, our brain detects what is known as “interaural time difference”. This is the delay between when a sound arrives at one ear and when it reaches the other. For example, a sound that emanates from the left of a person will arrive in their left ear before the right, because sound takes time to travel.
Secondly, the brain will notice that the sound is louder in one ear, obviously the ear closest to the sound source. To test the theory, feel free to fire a shot close to your ear; Kurt Cobain was a budding audiologist, did you know? Finally, our brain will notice that the sound is slightly duller in one ear. This is because, as we know, high frequencies are easily absorbed by practically anything, and in this case they are being absorbed by our head before they reach the ear furthest from the sound source.
So, to recap that little section: a listener who hears a sound on their right hears a louder, brighter sound in their right ear, shortly (and I mean milliseconds) before hearing a slightly quieter, duller sound in their left. Almost in real time, the brain puts this jigsaw together and the listener knows where the sound is coming from, which is incredible considering that at the same time it is dredging up memory to work out what the sound is, forming an opinion to judge it, and triggering a nervous reaction to respond to it. OK, the part we’ve been waiting for: I’ll explain what all this has to do with stereo microphone technique.
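For a sense of scale, that time-difference cue can be put into rough numbers. This is just a sketch using Woodworth's classic far-field approximation; the head radius and speed of sound below are assumed round figures, not anything from this post:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875    # metres; an assumed average head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation for the ITD of a distant source.

    azimuth_deg: 0 is straight ahead, 90 is fully to one side.
    Returns the extra travel time (seconds) to the far ear.
    """
    theta = math.radians(azimuth_deg)
    # Extra path = around-the-head arc plus the straight-line component
    path_difference = HEAD_RADIUS * (theta + math.sin(theta))
    return path_difference / SPEED_OF_SOUND

# A source hard to one side arrives roughly 0.65 ms earlier at the near ear
print(f"{interaural_time_difference(90) * 1000:.2f} ms")
```

Fractions of a millisecond, in other words, and yet the brain resolves them effortlessly.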
For the most part, when using two microphones in stereo we’re attempting to replicate a natural stereo spread, the way we would hear the instrument or ensemble if we were to stand in front of it. By understanding how our ears perceive and localize sound, we’re a few steps closer to being better at this replication. Common stereo techniques employed by just about every audio engineer out there will replicate one or two of the three factors mentioned earlier, but very few can mimic all three. Crossed cardioids (XY), for example, will translate amplitude difference but not time difference, as the two capsules are located in pretty much the same position in the air. Spaced pairs (AB), on the other hand, can emulate arrival time differences and amplitude differences, but don’t replicate the spectral change caused by high frequency absorption.
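The XY/AB distinction can be made concrete with a toy model. This sketch is my own illustration (the sample rate, pan law and function names are assumptions): XY encodes position purely as a level difference between channels, while AB encodes it purely as a time-of-arrival difference.

```python
import numpy as np

SAMPLE_RATE = 48000  # Hz, assumed

def xy_pair(mono, pan):
    """Crossed cardioids: position becomes a level difference only.

    pan: -1.0 (hard left) to +1.0 (hard right).
    """
    angle = (pan + 1) * np.pi / 4  # constant-power pan law
    return np.cos(angle) * mono, np.sin(angle) * mono

def ab_pair(mono, delay_seconds):
    """Spaced pair: position becomes an arrival-time difference only.

    A positive delay means the source is nearer the left mic.
    """
    shift = int(round(delay_seconds * SAMPLE_RATE))
    left = mono
    right = np.concatenate([np.zeros(shift), mono])[: len(mono)]
    return left, right

# A single click from hard left: XY changes levels, AB shifts time
click = np.zeros(100)
click[0] = 1.0
xy_left, xy_right = xy_pair(click, -1.0)   # full level left, silence right
ab_left, ab_right = ab_pair(click, 0.001)  # right channel lags by 1 ms
```

Neither model touches the spectrum, which is exactly the third cue both techniques fail to capture.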
The only way to achieve that third factor is to place something with a similar density and texture to a human head between your microphones. I prefer to use a real severed head, but short of that you can use some high-density foam gaffed up into a head-like ball (sticky eyes optional), or get your hands on that dummy head microphone by Neumann. That mic was actually used on the latest Radiohead record, on piano and voice on some songs, and whilst it doesn’t sound especially stunning on speakers, do yourself a favour and have a listen to those tracks on headphones… brilliant! The reason it doesn’t sound amazing on speakers throws up a bit of a conundrum, and you’ll need to consider this on your next stereo recording.
Whilst capturing a recording that takes into account all three of the abovementioned factors produces exceptional results for headphone listening, it is less effective on speakers. This is because once the sound leaves the speakers, the listener experiences all three factors again. Sounds that were captured louder in one microphone will again be perceived as louder as they leave one speaker, sounds that took longer to reach one microphone will again take longer to reach one of the listener’s ears, and sounds that were duller in one microphone will be dulled again by the listener’s head. As you are probably gathering, this second round of processing doesn’t matter too much when it comes to time difference and amplitude difference, but two rounds of spectral difference only results in an underwhelming stereo image.
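That double dose of spectral difference is easy to demonstrate numerically. In this sketch a simple one-pole lowpass stands in for head-shadow absorption (the filter coefficient and 5 kHz cutoff are arbitrary choices of mine, not measured values): filtering twice strips noticeably more top end than filtering once.

```python
import numpy as np

def head_shadow(x, alpha=0.3):
    """One-pole lowpass as a crude stand-in for head-shadow HF loss."""
    y = np.empty_like(x)
    y[0] = alpha * x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
    return y

def hf_fraction(x, sample_rate=48000, cutoff=5000):
    """Fraction of the signal's energy above the cutoff frequency."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sample_rate)
    return spectrum[freqs > cutoff].sum() / spectrum.sum()

noise = np.random.default_rng(0).standard_normal(48000)
once = head_shadow(noise)    # dummy-head recording heard on headphones
twice = head_shadow(once)    # the same recording played back on speakers
# Each pass removes more high end: the speaker listener hears a dulled image
```

On headphones the recording is only filtered once (by the dummy head); on speakers the listener's own head filters it a second time.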
So what can we take away from this? Primarily a better understanding of how common stereo microphone techniques relate to the workings of our ear and brain, and secondly, a better idea of when to choose a particular technique. Radiohead primarily distributed their latest music as a download, hence most people who obtained it would probably listen on headphones. In this case, using the Dummy Head microphone was a wise choice because most listeners will enjoy the incredible realism of the technique. On a record destined to be primarily listened to on speakers, maybe XY or AB would be a better choice. Think about it next time you whack a stereo pair above that drum kit.
