This month's mission proves that to get the best from a song you need to pay as much attention to arrangement as to mix technique.
Over the years, many students have asked me why the songs that they create and bounce as MP3s don't sound like the finished product that they hear on CD or vinyl. I find, as often as not, that the answers lie as much in their instrument choices and arrangements as in the mix processes themselves.
Before I get into the detail of this Mix Rescue, have a listen to the original mix — which is the version that the client, Lee Knickenburg, brought to me — and to my own mix. They can both be downloaded at /sos/sep09/articles/mixrescueaudio.htm. When Lee first played his track to me, he said "I think this is finished," but I could tell that this was far from the truth. I sat down and explained to him that while he'd successfully got across some good musical ideas, the mix needed more clarity and contrast to make the recording work — and that I felt the best way to achieve that was to work on the arrangement, removing a lot of parts and adding some others in the appropriate sections to create more interest.
Let me explain more precisely what I mean by clarity and contrast (I don't mean to be patronising to those who already know, but it's something the uninitiated really need to learn). 'Clarity' is achieved when the music is composed and arranged in such a way that all the instruments can be heard individually, with presence: in other words, when each instrument or sound is clearly defined, recognisable and not overpowering. You might layer multiple sounds, of course, such that they blend together and you can't recognise the individual components, but in this sense those layers combine to form a single identifiable 'instrument'. When I talk about 'contrast', I mean the tonal differences between the various sounds — in other words, the instruments' unique characters or timbres — that help to maintain the listener's interest.
Now I've explained what I mean by these terms, I can start to show you how I stripped down this mix to its individual elements so that I could build it up again from scratch myself. First, I asked Lee to leave me to it: I find that it's always best to try to dismantle the song without outside interference. When the client is present at this stage, it not only slows the process down, but also allows them to insist on keeping things that you feel should be removed. Also, some clients will try to take over in such a way that you'll be reduced to being an engineer under their command. At this stage, you need the freedom to make decisions without interference: you're trying to remix the track, not pander to your client's ego. So you must be polite, but you've every right to be assertive: after all, they've asked for your help.
If you take a look at the two screenshots above, you'll see that the original version (left) had most of the tracks playing in the verse and chorus, so job one was to clean up the tracks, removing silence and 'topping and tailing' all the audio. Not only does this clear out unwanted noise, it also means that your hard disk's read/write heads have less work to do (even 'silent audio' still has to be streamed from disk). I listened back to the track and sketched out a timeline showing when and where each instrument came in, what rhythm or sound it was producing, and my initial thoughts about what the track needed in different places. When I say 'sketched', you can see what I mean from the picture (left)! It needn't be pretty, but it's important to listen and make notes before you start diving into your software.
Armed with this information, I started to rearrange the track in Logic Pro, and invited Lee back to my studio to help, so that he could hear and understand where and why I was making changes (I know I said earlier that you should try not to have the client there, but I needed his approval for the decisions I'd taken, so that I could carry on with the editing).
If you've already listened to the tracks, you should have noticed that the intro, verse and chorus do not really differ musically, which is why creating contrast between them was essential. First up was the flute, which was beginning to drive me a little insane: it seemed to go on relentlessly, leaving no space for the keyboard stabs, and making the different song parts (verse, chorus, intro, bridge and so on) less distinct than I felt they needed to be. Music needs some degree of repetition, but the brain will eventually get bored by it, and a constant or unvarying part can spoil the impact of other rhythms or instruments.
The flute itself had a lovely sound, but I felt that it should only be heard at specific parts of the song. This would not only allow the listener to appreciate the stabs, but also give more emphasis to the flute when it did come back in. Next, I looked at the Indian drone sounds, which I liked because they added some body. However, I only really wanted them to come in halfway through the verses, helping to thicken that part of the song. Again, this would help to create more contrast and dynamic movement, building the verse up before the chorus.
With this pruning done, I sat back and listened, and was happy that the various changes in the song were sufficiently defined. However, I now noticed that the intro lasted about twice as long as it should, and needed to build up more so that the song could 'drop' into the verse, again to create more contrast.
The song now felt and sounded better to me, but there was still a major element missing: the bass. Liberally applying EQ to make the bass more prominent was impossible, because the song had been constructed largely around the keyboard and flute parts, so there was no 'leading' bass instrument to tweak. Without this, most songs will always sound rather empty, and it dawned on me that to finish the song to my satisfaction, I'd have to lay down a new bass part. Luckily, I play bass and own a three-quarter-size double bass (a full‑size one stands at over seven feet tall and I don't have the space for that!), so I recorded a basic bass line on to the song, keeping things simple so as not to change the song too much musically, although I changed key for the chorus.
I must also mention a change I made in the choruses. You can hear that the double bass gives the chorus a sense of impact and depth, but to make the chorus complete I needed to support the main vocal somehow, so decided to add some backing vocals. I felt that these needed to present a different texture from Lee's own vocal, and the only person I knew who could just hold a note was me. Yes, that's right, that's my sweet voice doing all the backing in the choruses. Listen to the choruses and you should notice the layering, which was achieved by double-tracking three different lines and placing them so that they echoed what was being sung. Now that the chorus was sounding like a finished piece, it came across as the main hook line, nicely fat and full. Never be scared to experiment with a few backing vocals: they might work, and can make a huge difference to the impact of a chorus. Even if they don't work, you can always get rid of them easily enough.
With the chorus completed, I once again invited Lee back to the studio to listen to and approve the changes. The main thing I'd tried to achieve was to make improvements that I thought Lee would like to have made himself, rather than changing the song to my own tastes such that he wouldn't recognise it. It was, in other words, a remix, not a total rewrite.
With the basic arrangement now sorted, it was time to get on with the business of remixing and mastering. I started by removing any insert processors that weren't providing a musical effect, so EQs and all the dynamics controllers (compressors, gates and de‑essers) came out. While I'm on this subject, I've noticed that lots of music makers using software alone tend to apply compression and limiting on tracks after they've been recorded, and often use presets. Although 24‑bit recording means that it's practical to leave dynamics control until the mix, I personally still find it much easier to get the sound right when recording, particularly if the reason you want to apply compression is simply "to make the vocals sound louder and thicker". More importantly, as Mike Senior explains in depth elsewhere in this issue of SOS, don't be fooled by compressor presets: not only do they not apply to every situation, but compression settings are always dependent on the material you're working on. The threshold will always need to be moved, even if nothing else needs tweaking. So always think about why you want to apply compression (or any other dynamics controller), and then get to know how to use a compressor to achieve it. Fortunately, when Lee had recorded his vocals, he'd made good use of a compressor on the input stage, and the resulting vocal dynamics on the recording were good.
For this remix, to keep things simple and get the benefit of software recall, I decided to work purely in the digital domain. DAWs and plug‑ins have improved greatly over the last five or so years, and even though I still tend to prefer and use plenty of nice outboard gear, it's certainly possible now to create a top-quality mix using software alone. That said, one 'cheat' that I used here was Focusrite's Liquid Mix DSP dynamics processor. I say 'cheat' because its emulations of compressors and EQs are based on 'dynamic convolution', which takes multiple impulse responses from real hardware units — and for the money, it does a cracking job! I did all my rearranging in Logic, which was the vehicle for the original project, before importing things into a Pro Tools Session — simply because that's the DAW that I'm most comfortable mixing with.
There are no hard and fast rules to mixing, but I like to start most mixes by getting a decent submix of the drums, which provide the foundation of a track in many contemporary music styles. In this case the drum tracks were electronic, so there were no overheads or room mics to contend with, and the challenge was to get the best sound from each individual source.
Step 1: Kick Drum. I turned down and muted all of the tracks, and then un‑muted the bass drum track. The bass drum sounded fine at low frequencies (so there was plenty of energy), but it didn't really have any definition overall, so using Pro Tools' bundled four‑band EQ, I brought up the mids a little (at around 250Hz), to get a slightly sharper tonal quality from the kick.
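If you fancy experimenting with this sort of mid-range lift outside your DAW, the same kind of bell boost can be sketched in a few lines of Python. To be clear, this is a generic peaking-EQ biquad (from the well-known RBJ 'Audio EQ Cookbook'), not a model of the Pro Tools EQ I actually used, and the Q value here is my own assumption:

```python
import numpy as np
from scipy import signal

def peaking_eq(fs, f0, gain_db, q=1.0):
    """RBJ-cookbook peaking EQ: returns biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40.0)          # amplitude from dB gain
    w0 = 2 * np.pi * f0 / fs            # centre frequency in rad/sample
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(fs, 250.0, 3.0)       # gentle lift around 250Hz
kick = np.random.randn(fs)              # stand-in for the kick-drum audio
brighter_kick = signal.lfilter(b, a, kick)
```

At the centre frequency this filter delivers exactly the requested boost, with the bell narrowing or widening as you change the Q.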
Step 2: Snare Drum. The trick in this track was to emphasise the snap of the snare drum. I avoided adding too much in the low‑frequency ranges, because that can muddy the contrast between bass and snare drums. If you want a meatier snare sound, this is something you need to be very careful about.
Step 3: Hi‑hat. The hi‑hat's dominant frequency was around 1.5kHz, and I wanted to make sure that the sound of the hi‑hat was not clashing with the snare. Bass frequencies serve no purpose on hats, so I removed what little low end there was using a high‑pass filter, which brought more clarity, and better contrast between snare and hi‑hat hits. This filtering also left more space in the mix, into which I could fit the toms, using EQ.
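For the curious, that kind of high-pass filtering is trivial to try in code. The sketch below uses a standard Butterworth high-pass; the 300Hz cutoff is my guess, as the article only says that the bass frequencies were removed, and a real DAW filter may have a different slope:

```python
import numpy as np
from scipy import signal

fs = 44100
# Fourth-order Butterworth high-pass at ~300Hz (cutoff is an assumption)
sos = signal.butter(4, 300.0, btype='highpass', fs=fs, output='sos')

hats = np.random.randn(fs)          # stand-in for the hi-hat audio
filtered = signal.sosfilt(sos, hats)
```

A fourth-order filter rolls off at 24dB per octave, so anything around the kick and bass fundamentals is effectively gone while the 1.5kHz region passes untouched.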
Step 4: High & Low Toms. It's easy to over‑EQ the toms in an attempt to make them ring out, but this can make them sound unnatural in the context of the rest of the kit. In this case, I applied only 3dB of gain to the low frequencies around 250Hz, and the mids between about 350 and 450Hz.
With the basic drum levels set, I could move on to the double bass. As this had been recorded in my studio, I'd already taken the liberty of using some outboard compression and EQ at the tracking stage. My aim in this mix was to integrate the bass with the drum submix in such a way that both instruments sounded clearly defined. This can get tricky, because the lower bass notes can clash with the bass drum's fundamental frequencies. If this happens, you'll lose the bass sounds from both instruments at different times (depending on what the bass is playing), and this will ruin the groove of your track. I used EQ to address this, sculpting the bass sound to fit the drums. I boosted the bass's low frequencies by about 3dB at 70Hz, and cut 3dB at around 120Hz. I also applied about 4dB at 275Hz, to bring out the tonal characteristic of the bass. I then un‑muted the drums again, and happily the bass and drums sounded pretty good.
Using Waves' IR1 convolution reverb, I set up a couple of reverb sends ('St. John's Church' and 'Sydney Opera House' impulses), and gave these a little processing, courtesy of the Liquid Mix. Using these sends, I pushed the snare slightly backward in the mix, and applied reverb to the toms and cymbals, but I left the bass drum dry, to keep it nice and tight. I planned to use the same reverbs in different amounts for other elements of the mix.
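Under the hood, a convolution reverb like IR1 is conceptually very simple: the dry signal is convolved with a recorded impulse response, and a send just scales how much of that wet signal is mixed back in. Here's a toy sketch of the idea (the decaying-noise 'IR' is a crude stand-in for a real church impulse, and the send level is arbitrary):

```python
import numpy as np

def reverb_send(dry, ir, send_level):
    """Convolve the dry signal with an impulse response and return
    the wet signal, scaled by the send level."""
    wet = np.convolve(dry, ir)[: len(dry)]   # truncate tail to track length
    return send_level * wet

# Exponentially decaying noise as a stand-in for a real room/church IR
rng = np.random.default_rng(0)
ir = rng.standard_normal(2000) * np.exp(-np.arange(2000) / 400.0)

snare = np.zeros(4000)
snare[0] = 1.0                               # a single 'hit' (unit impulse)
mix = snare + reverb_send(snare, ir, 0.3)    # dry signal plus a little reverb
```

Sending different instruments to the same reverb in different amounts, as described above, is then just a matter of giving each one its own `send_level`.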
If the bass feels a little submerged in your mix, simply upping the levels isn't always the best option, because overdoing the low frequencies will eat into your mix headroom. To ensure that the bass in this track could be heard without it being overpowering, I created an approximation of the classic ADT (automatic double tracking) effect, by copying the original bass track and setting it slightly out of time (by 200ms). This gave the bass more punch, and added tonal presence to its overall sound. You could take this further, minutely pitch‑shifting one part up and the other down by a few cents, but in this case, other than using a little EQ, it was already working well enough that I felt I should leave it alone. It's all too easy to over‑process your recordings and apply effects that do little other than turn the mix, slowly but surely, into a muddy mess.
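The delayed-copy trick is easy to demonstrate in code, too. This sketch simply sums the track with a delayed copy of itself; the 200ms offset matches what I used here, and the level of the copy is my own assumption (classic tape-based ADT used much shorter delays, so treat the numbers as a starting point):

```python
import numpy as np

def adt(track, fs, delay_ms=200.0, level=0.7):
    """Crude ADT-style thickening: sum the track with a delayed,
    attenuated copy of itself."""
    d = int(fs * delay_ms / 1000.0)               # delay in samples
    delayed = np.concatenate([np.zeros(d), track])[: len(track)]
    return track + level * delayed

fs = 44100
bass = np.sin(2 * np.pi * 55 * np.arange(fs) / fs)  # one second of low A
thick_bass = adt(bass, fs)
```

To take it further, as mentioned above, you could pitch-shift the two copies apart by a few cents before summing — but as in the mix itself, it's easy to overdo this sort of thing.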
The drums and bass were sounding good and clear when played together, providing a solid foundation on which to build the mix by introducing the other musical elements. The flute (the instrument that had earlier driven me to near‑suicidal despair) was now a pleasure to work with, and slotted neatly into the mix. I mentioned earlier that it dominated the whole track, but with the editing done and the drums and bass now sounding much fuller and clearer, the flute also came across better. The more sparse arrangement of flute and drone sounds also meant that the keyboard stabs stood out better, giving the song its musical hook.
With most of the music now in place, I unmuted the vocal channels and panned them roughly to where I thought they'd fit. I believe that vocals should always have clarity and volume, as they so often need to carry the song, so for the main vocals I first decided to remove the existing reverb and delays. This changed the groove a little, and was something that I'd need to get approved, but I felt that it was worth it for the sake of clarity. I then used an Amek EQ emulation on the Liquid Mix, to help fit the vocal parts nicely in the mix. They needed only minor tweaking to allow them to cut through the mix with presence, so all I did was lift one higher frequency (5kHz) by 1dB and remove 2dB at 200Hz and 400Hz. This mid‑frequency dip helped the vocals cut through the mix and ensured that they did not blur with the bass and drums, which needed to be emphasised at these frequencies. The rest of the backing vocals fell into place quite quickly — it was just a matter of getting the level balance right.
My mission was nearing the end, and placing the last sounds in the mix was a doddle, as my earlier work had created space for them. I listened back to the mix and thought it sounded great, but to finish things off I went back to the reverb sends I'd used earlier. I sent some of the vocals and keyboard stabs to each reverb in different amounts. Combining different reverb send effects on each instrument like this is a great and simple way to achieve spatial separation between sounds. With that done, I sat back and passed a copy to Lee for his thoughts.
Lots of people over-process sounds, using too much EQ, too much compression and too many effects. If there's anything you can learn from this article, it's to remember not to apply such processing or effects without a good reason — and it's usually a case of 'less is more'. It's also important to record all your audio at an adequate level, and although 24-bit recording means you can now get away without compression during tracking, I still prefer to work that way, because it makes life that much easier when it comes to the mix.
I'd welcome your feedback on this mix, and you can always contact me through Cygnus Music Ltd at www.cygnusmusic.net. Thanks to Lee Knickenburg and Cygnus for allowing us to use and publish this track on the SOS web site.
Lee: "When I first brought my track to Luke, I thought it was finished... but having his capable ears listen through proved that a second opinion can quickly change any preconceptions! Stripping it back to the basics and rebuilding it was a major learning experience, and I can't stress enough the importance of allowing the track to breathe and letting parts shine through. The dynamics are much better now, and the song moves in a more cohesive manner. The track has real punch, and each part is clear, but the initial vibe is still there and seems stronger! Luke has definitely brought the best out of a track that, looking back, was a little cluttered and repetitive."