This month's Mix Rescue features Netherlands-based band Awash, whose line-up comprises (from left to right) Maarten Ligtenberg (who sent the initial mix to SOS), lead vocalist Charlotte Broeder, guitarist Geert van der Burg and, on bass, Peter Süoss.
The song that forms the focus of this month's Mix Rescue, 'Seven Years', is performed by a Dutch band called Awash, and was originally written for voice and acoustic guitar. Those two elements still form the basis of the 'band' recording by singer/guitarist Maarten Ligtenberg, but Maarten embellished the recording with further parts. There are both male and female vocals in the song (from Maarten and Charlotte Broeder respectively), with one short section where Charly — as she is better known — overdubbed a further harmony line. Maarten added drums, courtesy of Toontrack's EZ Drummer Nashville Kit, clean electric guitar parts for the verse and chorus, a DI'd bass guitar, some sampled piano lines (including a solo), and a short instrumental string section in the middle of the song. There are also two short sections of drawbar organ, and other tracks include shaker, tambourine and a reverse-cymbal effect.
In all honesty, Maarten's mix wasn't at all bad, but he wasn't happy with the sound of the acoustic guitars, and he felt that the overall mix lacked a bit of weight. He was also unsure as to how best to improve the sound of the DI'd bass guitar. The problems I identified were largely in accord with Maarten's own observations, but on the tracks I was sent originally, I could also hear some very obvious Auto-Tune artifacts, where Maarten had tried to get things sounding just a little too perfect. Because of this extensive vocal pitch processing, I had him send me the original untreated vocal files. These showed up some pitching problems, and there were also parts where the male and female parts were noticeably out of time with each other, as they'd used very slightly different phrasing, but at least these parts sounded much more natural.
The acoustic guitar comprised a strummed rhythm part that Maarten had layered by recording the same part twice on separate tracks, with guitarist Geert van der Burg adding a slightly different part on a third track. Both DI'd and miked versions were sent, but the miked versions didn't sound particularly good, due to mic placement, room acoustics or both. The DI sound was nice and clear but had the brittle edge that under-bridge pickup systems often deliver.
The whole song came to me as 35 separate audio tracks, some mono and some stereo. Maarten had sensibly split the drum track into its component parts, including separate room ambience and overhead mic tracks — which meant that I'd have plenty of control over the drum sound in the mix — so one of my first tasks was to create a bus 'subgroup' for these, to make them more manageable and to facilitate overall processing where appropriate. I set up similar subgroups for the vocals, the strings that made up the instrumental section, the three piano tracks plus synths, and the electric guitars. (Adopting this approach means that once the individual sections have been balanced, you're able to manage your mix using a relatively small number of faders.) I then parked the bass guitar fader alongside the bus faders in the Logic Environment for ease of mixing.
Fixing the acoustic guitar sound was a bit of a challenge, and after trying various EQ and compression strategies, I decided to ditch the miked tracks and work on the DI sound. If you've tried EQ'ing DI'd acoustic guitar before, you'll know that it is almost impossible to keep the high-end zing without also retaining the harshness that comes from the piezo pickup, so I thought I'd try something a little more sophisticated, using Logic's Match EQ. This is a fairly conventional type of 'fingerprint' equaliser that attempts to match the spectrum of a target sound to that of a reference sound.
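Just to illustrate the principle, here's a minimal Python/NumPy sketch of what a 'fingerprint' matching EQ does (the function name and every detail below are my own invention, not Apple's actual Match EQ algorithm): average the spectra of the reference and the source, then derive a per-band correction curve from their ratio.

```python
import numpy as np

def match_eq_curve(source, reference, fft_size=1024, max_db=12.0):
    """Derive a per-band gain curve (in dB) that pushes the source's
    long-term average spectrum towards the reference's -- the core
    idea behind a 'fingerprint' matching EQ."""
    def avg_spectrum(x):
        win = np.hanning(fft_size)
        hop = fft_size // 2
        frames = [x[i:i + fft_size] * win
                  for i in range(0, len(x) - fft_size, hop)]
        # Averaging magnitude across frames smooths note-to-note detail
        return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

    src, ref = avg_spectrum(source), avg_spectrum(reference)
    eps = 1e-9
    gain_db = 20 * np.log10((ref + eps) / (src + eps))
    # Clamp the correction so narrow dips in one recording
    # don't become savage boosts in the other
    return np.clip(gain_db, -max_db, max_db)
```

In practice the clamping matters as much as the matching: an unconstrained fingerprint EQ will happily apply 30dB boosts where the reference happens to have energy and the source doesn't.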
For a reference sound I initially chose an instrumental recording of a solo guitar that had a clean, bright tone, fed it into Match EQ and then applied it to the DI'd guitar track. To my ears, the sound was much improved — but still not quite what I wanted for a strummed sound! If I'd had the right type of reference recording this strategy would no doubt have worked out OK, and as a last resort I could easily have set up a mic and made my own reference recording... but then I remembered that my Fishman Aura processor had just come back from being serviced. This device is specifically designed to make DI'd guitars sound more natural and, as far as I can tell, it is itself a kind of fingerprint EQ, whose presets have been created using recordings taken from a range of real instruments. After routing one of the guitar tracks through the Aura I set about checking through the presets until I found one that seemed to work well in the context of the song. I could then have gone on to re-record each of the three guitar parts via the Aura, but instead decided to try a couple of experiments.
The first was to use Logic's Impulse Response (IR) utility to create an IR of the Aura that could be used within the Space Designer convolution reverb plug-in. This utility program fires a sine-sweep signal through whatever device you're trying to capture, records the output and analyses it in order to create an impulse response. (If you don't use Logic, there are a number of similar utilities available from the likes of Waves, Voxengo and Audio Ease). The experiment was reasonably successful, as IRs are a good way to capture linear processes such as EQ, as well as reverb (they don't work for dynamics processors such as compressors). I reasoned that as the Aura was essentially an equaliser with a huge number of bands, using Match EQ should clone it reasonably well, and would probably incur less CPU overhead in the process.
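The sweep-and-analyse process such utilities perform can be sketched as follows. This is a rough Python/NumPy illustration under my own assumptions, not Apple's code: generate an exponential sine sweep, capture the device's output, then divide the two spectra to recover the impulse response.

```python
import numpy as np

def capture_ir(system, sr=48000, dur=1.0, f0=20.0, f1=20000.0):
    """Measure a linear system's impulse response the way an IR-capture
    utility does: play a sine sweep through it, then deconvolve."""
    t = np.arange(int(sr * dur)) / sr
    k = np.log(f1 / f0)
    # Exponential (log) sweep from f0 to f1
    sweep = np.sin(2 * np.pi * dur * f0 / k * (np.exp(t * k / dur) - 1))
    recorded = system(sweep)            # what comes back from the device
    n = len(recorded) + len(sweep)      # FFT length avoids wrap-around
    S = np.fft.rfft(sweep, n)
    R = np.fft.rfft(recorded, n)
    # Wiener-style division: stays stable where the sweep has no energy
    eps = 1e-6 * np.max(np.abs(S)) ** 2
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)

# Pretend the 'hardware' is a simple three-tap EQ-like filter
taps = np.array([0.5, 0.3, 0.2])
ir = capture_ir(lambda x: np.convolve(x, taps))
```

The regularised division is the important part: outside the sweep's frequency range there is nothing to divide by, so a naive spectral division would blow up into noise.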
I recorded just a short section of the DI'd guitar track routed through the Aura, then used this as my reference. Having set up Match EQ to impose the modified spectrum onto the original sound by 'learning' the same few bars of both the source and target versions of the sound, I saved this EQ setting as a preset and then applied it to all three acoustic guitar parts. To my ears, the result was pretty much the same as running the tracks directly through the Aura unit, and now I had a pretty usable tone that would respond to conventional EQ and compression. In fact, as the original Aura used only 16-bit conversion, the Match EQ version should, if anything, actually prove to be quieter. The main thing I now had to do was roll off some low end from the acoustic guitar group to get it to sit in the track without muddying the lower mid-range.
For the bass guitar, I fed the track through Logic's Guitar Amp Pro plug-in and then experimented with the amp models, speaker models and amp EQ until I got more of a miked-up bass sound, with just a hint of distortion creeping in to warm it up. A 'UK 30 Watt Combo' model teamed with a 'British EQ', a '4 x 12' cab model and an off-axis condenser mic model gave the required result. Not only did this make the bass more audible, by emphasising the mid-range, it also helped disguise any high-frequency finger noise, due to the natural roll-off of the speaker emulation. This shows that you don't always have to pick a bass-amp model to get a good bass sound, as long as you use a suitable speaker model. A little compression, courtesy of the LA-3A compressor plug-in for the UAD1 DSP card, evened out the sound.
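As an aside, the reason a speaker emulation tames finger noise is simply its high-frequency roll-off. The toy Python/NumPy sketch below (my own invented stand-in, nothing like Guitar Amp Pro's real models) pairs tanh soft clipping, for that hint of warming distortion, with a one-pole low-pass standing in for the cab:

```python
import numpy as np

def warm_bass(x, drive=2.0, sr=48000, cutoff=4000.0):
    """Very rough amp/speaker stand-in: tanh soft clipping adds a hint
    of distortion, then a one-pole low-pass mimics the cab roll-off
    that hides high-frequency finger noise."""
    shaped = np.tanh(drive * x) / np.tanh(drive)   # normalised soft clip
    coeff = np.exp(-2 * np.pi * cutoff / sr)       # one-pole low-pass
    out = np.empty_like(shaped)
    acc = 0.0
    for i, v in enumerate(shaped):
        acc = coeff * acc + (1.0 - coeff) * v
        out[i] = acc
    return out
```

Feed it a low-frequency note and a burst of fret noise and the note passes almost untouched while the noise comes back several dB down, which is exactly the behaviour described above.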
For the two electric guitars, I again resorted to the Guitar Amp Pro plug-in, as I felt the original sound was just a bit too clinical, despite having been recorded via an amp model. For the chorus section, I inserted Guitar Amp Pro after a compressor, to give the sound a touch more sustain due to the more 'lead-like' nature of the part, but the verses needed no compression other than from the amp simulation. Here, I settled on a 'UK Class A combo' amp model with a 'Modern EQ' and a 4 x 12 closed-back speaker simulation. Low drive settings were used to create an almost clean sound with just a hint of warmth, and I added a touch of the vocal reverb I'd set up on send one, which we'll come to in a moment.
The EZ Drummer Nashville Kit is actually a lovely-sounding virtual drum kit even before any EQ is applied, but Maarten was after a bigger sound — so I decided to try the SPL Transient Designer plug-in, recently added to my Universal Audio UAD1 collection. The SPL Transient Designer was originally only available as a hardware box (you can now get plug-in versions for the UAD1 and Creamware Scope platforms) but it is one of the most elegant processors I've come across: it needs only two controls to operate, despite the complexity of what goes on behind the scenes. Essentially it is a type of compressor but, rather than the user having to set a threshold, it uses a floating threshold system, whereby it continually adapts the threshold to the dynamic nature of the source material. This means that it can work over a very wide range of input levels with no need for any threshold adjustment. As far as the user is concerned, it works more like an envelope modifier, with one knob adding to or subtracting from the attack portion of the sound and another doing the same for the release. In this case I put a stereo version of the plug-in over the complete drum submix and used only the release function to effectively add sustain to the drum sound, by lifting the level of each beat as it decayed. The outcome was a bigger, more roomy sound that would have been difficult — if not impossible — to achieve by other means. Other than that, only a small amount of 'smile curve' EQ was needed to produce a suitably fat drum sound.
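SPL haven't published their exact algorithm, but one common way to build this kind of threshold-free envelope shaper is to compare a fast and a slow envelope follower, as in this hedged Python/NumPy sketch (all names and coefficients here are my own guesses, not SPL's design):

```python
import numpy as np

def envelope(x, coeff):
    """One-pole smoothed envelope follower over |x|."""
    env = np.empty(len(x))
    acc = 0.0
    for i, v in enumerate(np.abs(x)):
        acc = coeff * acc + (1.0 - coeff) * v
        env[i] = acc
    return env

def transient_shaper(x, attack=0.0, sustain=0.0):
    """Level-independent attack/sustain control: the ratio of a fast
    to a slow envelope follower rises above 1 on attacks and falls
    below 1 during decays, so no fixed threshold is ever needed."""
    fast = envelope(x, 0.9)     # tracks the signal closely
    slow = envelope(x, 0.999)   # lags behind, giving a reference level
    eps = 1e-9
    ratio = (fast + eps) / (slow + eps)
    # attack > 0 exaggerates onsets; sustain > 0 lifts decaying tails
    gain = ratio ** attack * ratio ** (-sustain)
    return x * gain
```

Because the gain depends only on the ratio of the two envelopes, not on absolute level, the processing behaves the same on quiet and loud hits, which is exactly the 'floating threshold' behaviour described above.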
Once I'd received the unprocessed vocal tracks, I decided to use Melodyne to tune them, as this plug-in is very controllable and also includes tools that are not available in other programs. Melodyne can also be used to move or stretch sounds, but I didn't use it for that purpose because I like to be able to view the waveforms of multiple vocal tracks alongside each other when sorting out timing problems. (You need the stand-alone version of Melodyne to process multiple tracks, but even then I prefer to be able to line things up alongside tracks such as drums, that don't require any pitch correction.) Essentially, I allowed Melodyne to do an automatic pitch-correction by setting the correction strength to around 95 percent, then I went through the vocal parts watching the Melodyne display and manually corrected any notes that had been pushed to the wrong pitch, and in a couple of cases reduced the amount of pitch drift that occurred during notes. There were still a couple of places where the vocal sounded a little less than perfect, but this was more to do with delivery than pitching, so I called a halt when I felt I'd gone as far as I could without the result sounding unnatural. Once I'd finished tuning the vocals, I bounced them down to new audio files and dropped them back into the song in place of the originals: that way I wouldn't have the CPU load of two Melodyne plug-ins running all the time. This turned out to be a good move, as I ended up having to freeze the bass guitar track to get the CPU load on my ageing G5 down to a safe and stable level.
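The 95-percent correction-strength idea reduces to very little arithmetic. This Python/NumPy fragment (my own naming, not Melodyne's) shows the operation on pitches expressed as MIDI note numbers, where fractional values represent out-of-tune notes:

```python
import numpy as np

def correct_pitch(midi_pitches, strength=0.95):
    """Pull each detected pitch towards the nearest semitone by a given
    strength. At less than 100% a trace of natural variation survives,
    which is what keeps the result from sounding robotic."""
    targets = np.round(midi_pitches)
    return midi_pitches + strength * (targets - midi_pitches)
```

A note sung 40 cents sharp (60.40) comes back only 2 cents sharp (60.02): close enough to sound in tune, but not mathematically exact.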
In fact, the most time-consuming part of the process was matching up the timing of Charly's vocal part to Maarten's, as some sections were phrased differently. Where words are the right length but in the wrong place, you can simply divide the audio region and slide the offending words into place (which I did on several occasions), but where words are too long or too short, you need a different strategy... In the case of words that go on too long, there's usually an extended vowel sound somewhere, so if you make an edit mid-word to shorten it, you can often cover the join with a short cross-fade and get the length where you want it. If there are consonants within the word that you can see in the waveform display, you can line these up and edit the vowel sounds between them.
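That mid-word shortening edit can be expressed in a few lines. Here's a Python/NumPy sketch (the function name and fade length are my own choices) that removes a span from a region and hides the join with an equal-power crossfade:

```python
import numpy as np

def shorten_with_crossfade(x, cut_start, cut_end, fade=64):
    """Drop the span [cut_start, cut_end) from a vocal region (an
    over-long vowel, say) and hide the join with a short equal-power
    crossfade so the edit doesn't click."""
    head = x[:cut_start].astype(float)
    tail = x[cut_end - fade:].astype(float)
    t = np.linspace(0.0, np.pi / 2, fade)
    # cos/sin fades keep the summed power roughly constant at the join
    head[-fade:] = head[-fade:] * np.cos(t) + tail[:fade] * np.sin(t)
    return np.concatenate([head, tail[fade:]])
```

Everything before and after the edit region is untouched; only the few dozen samples around the join are blended, which is why the trick is inaudible on a sustained vowel.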
Things get more difficult where words fall short, but if your DAW has a time-stretch capability accessible from the arrange page, as most leading DAWs now do, you can often stretch whole words or individual syllables to fill the necessary space — and in the case of backing vocals, any processing artifacts are usually minor enough to be adequately disguised once the whole mix is playing. The vocal sections marked in pink in the screenshot on the previous page are those where I've employed this trick, and I have to say the results worked out rather better than I'd hoped!
By way of send effects, I used only the UAD1's Plate 140 reverb (which comes very close to the sound of those old EMT plates) with a 1.8-second decay and about 50ms of pre-delay, rolling off the low end below 200Hz to stop the sound getting muddy. I needed enough reverb to make the vocal sit in the track but not so much as to make the effect too overpowering. Rather than automate the vocal levels, I did a few destructive edits to raise quieter words or phrases by a couple of dB. I didn't put any compression or EQ on the individual vocal tracks but instead used Logic's EQ and the UAD1's LA-2A compressor on the vocal submix bus. The EQ was set to warm up the voice with a tiny boost at 240Hz, balanced by a 1.3kHz dip to combat a slight nasal tendency, and 3.5dB of boost at 12.8kHz to create a sense of air and presence.
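For anyone wanting to experiment with these kinds of EQ moves outside a DAW, the standard recipe for a peaking band is the Robert Bristow-Johnson 'audio EQ cookbook' biquad. This Python/NumPy sketch (helper names are mine) computes the coefficients and lets you check that the gain really lands where you asked at the centre frequency:

```python
import numpy as np

def peaking_eq(fc, gain_db, q, sr):
    """Biquad peaking-filter coefficients in the RBJ cookbook form --
    the kind of band behind moves like a small 240Hz boost or a
    1.3kHz dip. Note the gain_db/40 convention: the peak gain at fc
    works out to A squared, i.e. the requested dB figure."""
    A = 10.0 ** (gain_db / 40.0)
    w = 2.0 * np.pi * fc / sr
    alpha = np.sin(w) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w), 1 - alpha / A])
    return b / a[0], a / a[0]

def gain_at(b, a, f, sr):
    """Magnitude response of the biquad in dB at frequency f."""
    z = np.exp(-2j * np.pi * f / sr)
    h = (b[0] + b[1] * z + b[2] * z ** 2) / (a[0] + a[1] * z + a[2] * z ** 2)
    return 20.0 * np.log10(abs(h))
```

The same cookbook gives shelving and cut filters in identical form, so one small function family covers the whole channel EQ.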
Because of the nature of the song, I mixed the vocals well to the fore and balanced the reverb level with the rest of the mix playing, just to give a nice, polished sound. When solo'd, the amount of reverb sounds rather generous, but in the context of the whole mix I think it works well. For a touch of ear candy, I automated the panning on the one reverse-cymbal sound halfway through the song, so that it whooshes across the soundstage, but otherwise kept everything very simple so that the vocals could come through with no distractions.
To hear before and after examples, both of the whole song and of some of the individual elements that make up the mix, go to www.soundonsound.com/sos/aug08/articles/mixrescueaudio.htm
The first step was to get the mix sounding well balanced, with everything panned to the centre. I started with the drums, acoustic guitars and bass, after which I used subtle panning to create some separation between the chorus electric guitar and piano parts. Panning was also used to spread the string layers in the centre section and the three acoustic guitars, but when it came to the drums I felt the panning on the original stereo files was actually too wide, as it spread the kit over the entire stereo stage. I inserted Logic's Direction Mixer plug-in on the drum bus and narrowed the width to around 50 percent of what it originally was. There was also a bit too much snare rattle for my taste on the intro tom fill, so I dropped the overhead and room mic levels just for this part.
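Narrowing stereo width in the way the Direction Mixer does is usually implemented as a Mid/Side operation: scale down the Side (difference) signal and the image pulls towards the centre. A minimal Python/NumPy sketch, assuming that M/S approach (Apple don't document the plug-in's internals):

```python
import numpy as np

def set_width(left, right, width=0.5):
    """Mid/Side width control: encode to M/S, scale the Side signal,
    decode back. width=1.0 leaves the image alone, 0.0 collapses the
    channels to mono, and 0.5 halves the apparent spread."""
    mid = 0.5 * (left + right)            # what the channels share
    side = 0.5 * (left - right) * width   # what makes them different
    return mid + side, mid - side
```

A useful property of this formulation is that the Mid content (kick, snare, bass) passes through untouched at any width setting; only the inter-channel differences shrink.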
To give the song more dynamics, I cut out two of the three acoustic guitar parts in the first verse and chorus. The piano joins in during the first chorus, then the remaining acoustic guitar layers come in for the third verse, along with the organ swirl, adding to the build-up throughout the piece. I also found it useful to automate the acoustic guitar level so that it could drop back a little during busy sections to create space. There was initially a problem with the piano solo that followed the string interlude getting a bit lost (despite the apparently high level), so I compressed it fairly hard to bring it back up-front, which also made it sound punchier. With the miked versions of the acoustic guitar tracks taken out, I had the mix down to 32 audio tracks and an automation track for the acoustic guitar submix, but I could handle the mix using just a few channel faders and the bus faders I'd set up earlier.
Though I'd never advocate using mastering plug-ins if a mix is going to be professionally mastered, for the purpose of this exercise I used PSP's Vintage Warmer, followed by TC Electronic's Precision Limiter for Powercore, set just to catch the very tops of the occasional peak. Drive for the Vintage Warmer was set to +2, with the compressor Knee dial set to 10. I added around 1dB of bass boost at 71Hz but left the top end as it was, as I'm conscious that many mixes are uncomfortably bright. This compression setting gives quite an aggressive amount of gain reduction, but when you use the wet/dry mix control, the effect becomes much more subtle, lending girth and support to lower-level sounds without squashing the life out of louder ones. The limiter takes care of the odd over-zealous peak but it is really used as a safety net, rather than as a means of creating more apparent level: I simply don't like the sound of heavy limiting!
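The wet/dry control here is classic parallel ('New York') compression, and a toy Python/NumPy model shows why it reads as subtle (the compressor below is my own deliberately crude simplification, not PSP's algorithm):

```python
import numpy as np

def compress(x, threshold=0.2, ratio=4.0):
    """Crude memoryless compressor: anything over the threshold is
    gain-reduced by the given ratio; quieter material passes as-is."""
    mag = np.abs(x)
    gain = np.ones_like(x)
    over = mag > threshold
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    return x * gain

def wet_dry_mix(x, wet=0.3, **kw):
    """Parallel compression: blend heavy compression with the dry
    signal, so low-level sounds keep their wet-path support while loud
    peaks retain most of their dynamics. (Real units add make-up gain
    to the wet path; it's omitted here for clarity.)"""
    return (1.0 - wet) * x + wet * compress(x, **kw)
```

Even with aggressive gain reduction on the wet path, the dry path guarantees that a loud peak can never lose more than the wet fraction of its level, which is the 'girth without squashing' effect described above.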
Looking at the mixer page, I was also surprised at how many of the individual tracks didn't need any processing at all, as in many cases I was processing the overall bus submix instead — which, again, is a great economy as regards CPU overhead. The one plate reverb served for everything in varying degrees, so although I'd originally set up two send buses, I ended up not requiring the second one, and I'd also needed only a tiny bit of mix automation.
Overall I was quite pleased with the final mix. It had a big, pop sound without being too aggressive, and the vocals came over loud and clear while still sounding like part of the band rather than being 'stuck on'. I was particularly pleased with the way my tricks for fixing the vocal timing had worked out and — although I liked the original mix — I think what we ended up with sounded more homogeneous.
Maarten: "I like the more direct sound of Paul's mix, and that it seems to have more going on in the highs and lows. If I switch back between Paul's mix and mine, his seems to be more open, whereas mine almost sounds honky in comparison. I also like the way he has created a very bright yet punchy sound, without sounding too harsh.
"The fingerprint EQ trick is a nice one, and will allow use of my acoustic's DI output — which I normally record as backup, but hardly ever use, due to how it sounds.
"My difficulty in getting a bass sound that can be heard throughout the song without becoming too boomy seems to have been nicely tackled by the am-sim model and compressor combination.
"All in all, it was a very interesting and educational experience for me to have the same tracks that I must have heard hundreds of times, during tracking and mixing, suddenly being presented to me in such a different way, with some nice new techniques to boot."