Solutions to mixing problems can sometimes be found in your arrangement...
When Roger Wessel contacted me about a mix he was working on, his primary frustrations were the low end and the lead vocal sound, both crucial elements of the lush, electronica-influenced singer-songwriter production he'd recorded for his daughter Dorothea. He also sought advice on how to maximise the 'lift' from the song's contemplative verses to its more powerful, rhythm-driven choruses. Listening to Roger's mix-in-progress, however, I felt that there could also be more textural detail and atmosphere surrounding the vocals, and that the chorus's backing-vocal arrangement could perhaps better maintain the listener's interest during the comparatively long gaps between lead-vocal phrases.
This is the mix of 'Muddy Water' that Roger originally sent in to Sound On Sound, asking for advice about the low end, vocal sound, and long-term mix dynamics.
Inverting The Low-end Roles
Auditioning Roger's raw tracks in Reaper, I quickly discovered some reasons for these low-end difficulties. Firstly, the low-frequency tail of the chorus's kick drum dominated below 60Hz, leaving the main bass synth's root-note fundamental scarcely audible. This kick-bass relationship can work fine for more heavily beat-driven arrangements, but the lack of a solid foundation to the musical harmony felt less suitable here. In addition, the bass synth's main low-end component was pulsing in a rather laid-back pattern that didn't seem to drive the groove forward, and its lack of sustain left the verses sounding harmonically deeper and richer by comparison, thereby undermining the verse-chorus dynamics Roger was hoping for. So my first decision was to replace the original bass synth's fundamental frequency with a dedicated subsynth patch, delivering a much more sustained sense of low-end power. To clear room for this, I then reined in the kick drum's extreme low end by combining a sub-50Hz EQ roll-off with some gating to tighten up the rumbling sustain.
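If you're curious what those two kick-drum processes actually do, here's a rough Python sketch of the signal flow: a gentle first-order high-pass roll-off, followed by a crude envelope-based gate. The cutoff, threshold, and timing values here are purely illustrative rather than the settings I used, and a real mix would want a steeper filter and smoother gain changes.

```python
import math

def one_pole_highpass(x, cutoff_hz, fs):
    """First-order high-pass: a gentle stand-in for the sub-50Hz roll-off."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out, y, x_prev = [], 0.0, 0.0
    for s in x:
        y = alpha * (y + s - x_prev)  # passes fast changes, blocks slow ones
        x_prev = s
        out.append(y)
    return out

def gate(x, threshold=0.1, floor=0.25, release=0.9):
    """Crude gate/downward expander: once the signal's envelope falls
    below the threshold, attenuate it to tighten a rumbling tail."""
    env, out = 0.0, []
    for s in x:
        env = max(abs(s), env * release)  # peak follower with decay
        out.append(s if env >= threshold else s * floor)
    return out
```

Cascading two or three of those filter stages would give the steeper roll-off you'd normally reach for, and a real gate would also smooth its gain changes to avoid clicks.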
Another reason for the low-end struggles was a second bass part during that same section: a repeating rhythmic bass guitar riff an octave above the synth. In its raw form, this track's strongest energy came from its fundamental frequency (around 90Hz), with very little mid-range definition above its first harmonic. As such, it quickly created a muddy-sounding frequency build-up in conjunction with the bass synth's strong first and second harmonics. By cutting those troublesome lower partials and radically enhancing the instrument's mid-range using assertive EQ boosts and a mid-focused parallel distortion channel (based around one of my favourite recent freeware discoveries, Creative Intent's Temper plug-in), I was able to give the instrument a more meaningful rhythmic role in the balance.
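The parallel distortion trick itself is easy to sketch in code. To be clear, this is not Temper's algorithm, just a generic stand-in with made-up values: band-limit the signal so only the mid-range hits the distortion, soft-clip it to generate harmonics, and mix the result back in underneath the untouched dry signal.

```python
import math

def parallel_mid_distortion(x, fs, drive=4.0, mix=0.3,
                            low_hz=200.0, high_hz=2000.0):
    """Generic mid-focused parallel distortion sketch: band-limit,
    soft-clip, then blend the wet signal back under the dry."""
    hp_rc = 1.0 / (2.0 * math.pi * low_hz)
    lp_rc = 1.0 / (2.0 * math.pi * high_hz)
    dt = 1.0 / fs
    hp_a = hp_rc / (hp_rc + dt)
    lp_a = dt / (lp_rc + dt)
    hp_y = hp_x = lp_y = 0.0
    out = []
    for s in x:
        hp_y = hp_a * (hp_y + s - hp_x)  # strip the lows...
        hp_x = s
        lp_y += lp_a * (hp_y - lp_y)     # ...and the extreme highs
        wet = math.tanh(drive * lp_y)    # soft clipping adds harmonics
        out.append(s + mix * wet)        # dry path stays untouched
    return out
```

The crucial point is that the dry signal passes through unaltered, so you can push the wet path as hard as you like and simply ride the `mix` amount.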
When you're trying to capture an intimate lead-vocal sound, it's tempting to put the mic right up close to the singer's mouth, because a typical large-diaphragm cardioid design will immediately warm up the tone with its proximity-effect bass boost and will enhance the sense of breathy detail with its on-axis high-frequency boost. At mixdown, though, these apparent enhancements can actually be problematic: the degree of proximity effect can vary a great deal as the performer moves, causing timbral inconsistency, and a close mic position right by the singer's mouth will usually overemphasise sibilance, such that the vocal seems too bright (on account of the consonants) well before the vowels feel airy enough. What's more, enhanced frequency extremes often conceal a lack of mid-range energy, which makes it difficult to keep the vocal upfront in a busy mix — by the time you've faded the singer up far enough to be heard, either the low end is making the backing sound weedy, or the high end is making you wince!
And these were exactly the issues that had caused Roger so much trouble while trying to mix Dorothea's raw vocal recording. Fortunately, the proximity-effect variations were only moderate, so I was able to even those out fairly successfully by compressing with a low-frequency dynamic EQ shelf. A parallel distortion channel, using Klanghelm's excellent freeware IVGI2, provided another quick win in the mid-range department, both boosting and thickening the singer's mid-range spectrum.
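For anyone wondering what 'compressing with a low-frequency dynamic EQ shelf' actually involves, here's a bare-bones Python illustration (the crossover point, threshold, and ratio are invented for the example): split the signal at a low crossover, gain-reduce only the low band when it exceeds the threshold, then recombine it with the untouched upper band.

```python
import math

def dynamic_low_shelf(x, fs, split_hz=150.0, threshold=0.2, ratio=3.0):
    """Bare-bones dynamic low-shelf compression sketch: the low band
    is compressed only when it gets too loud; the rest passes clean."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * math.pi * split_hz)
    a = dt / (rc + dt)
    low = env = 0.0
    out = []
    for s in x:
        low += a * (s - low)              # one-pole low-pass = shelf band
        high = s - low                    # everything above the split
        env = max(abs(low), env * 0.999)  # low-band peak envelope
        if env > threshold:               # compress only the low band
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(high + low * gain)
    return out
```

Because the gain reduction only ever touches the low band, loud proximity-effect bumps are evened out without dulling the rest of the vocal.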
However, high-end consonants were overwhelming the more desirable upper-spectrum frequencies, and after trying several different de-essing tactics with mixed success I reluctantly decided to bring in the nuclear option: manually editing all the vocal consonants onto their own track for independent processing.
I say 'reluctantly', because it's a finicky job to get the editing points and crossfades in the right place in each instance, but I figured it was worth biting the bullet here for two big reasons: firstly, I'd be free to process the consonants as brutally as I wanted without impacting on the rest of the vocal sound; and, secondly, I'd be able to feed a variety of vocal send effects (another prominent feature of this production) without worrying about their impact on the consonants. Lyric intelligibility can suffer if consonants are bouncing around in complex echo tails, for instance, and sibilants can get distractingly splashy if they feed stereo wideners or long reverbs.
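If you wanted to rough out that consonant split automatically before tidying the edit points by hand, the detection logic might look something like this hypothetical Python sketch, which routes samples to a separate consonants track whenever high-frequency energy dominates the overall envelope. (I did the job manually, and the 5kHz split point and 0.4 ratio here are illustrative guesses, not a recipe.)

```python
import math

def split_consonants(x, fs, split_hz=5000.0, ratio=0.4):
    """Hypothetical first pass at the consonant split: send a sample to
    the consonants track when the sibilant band dominates the envelope."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * math.pi * split_hz)
    a = rc / (rc + dt)
    hp_y = hp_x = env_full = env_hf = 0.0
    voice, consonants = [], []
    for s in x:
        hp_y = a * (hp_y + s - hp_x)  # high-pass isolates the sibilant band
        hp_x = s
        env_full = max(abs(s), env_full * 0.999)  # whole-signal envelope
        env_hf = max(abs(hp_y), env_hf * 0.999)   # sibilant-band envelope
        if env_hf > ratio * env_full:  # HF dominates: call it a consonant
            consonants.append(s)
            voice.append(0.0)
        else:
            voice.append(s)
            consonants.append(0.0)
    return voice, consonants
```

In practice you'd still audition and crossfade every edit point by ear, but a detector like this can at least put the markers roughly where they belong.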
With the editing completed, I first tried to bring the consonants under control. It seemed that the mic had emphasised the upper octave of the spectrum specifically, which gave many of the sibilants a needling kind of 'whistle', so I used low-pass filtering to roll those off fairly steeply above 10kHz, as well as boosting 3dB around 5kHz to refocus the consonant energy more into the upper mid-range. With the consonants out of the way, it became a lot simpler to boost the upper spectrum of the main vocal with high-frequency EQ, and I improved the HF consistency as well by compressing with another dynamic EQ shelf. Again, though, I found that the top spectral octave began to get unpleasantly crispy and shrill before the octave below it felt sufficiently robust, so I applied a progressive roll-off above 10kHz there too.
While some slow-attack compression from Sonimus TuCo provided a little general level control, most of the real vocal-balancing work was done with automation. Not just the usual slew of detailed fader rides, either — there was also fader and EQ automation on the consonants channel, as well as some extra upper-mid-range EQ boost to combat additional frequency masking during the choruses. Automation was critical to the success of the vocal effects too. I think one of the reasons mixing students often find creating reverb and delay patches tough is that they're trying to achieve the frequently impossible task of finding a single setting that suits the entire song. If you're willing to ride your effects levels adaptively in response to changes in the arrangement, the exact parameter settings you use become less critical, in my experience.
Long-tail Vocal Effects
So, for instance, the main long-tail effect I used on the chorus lead vocal was just a pretty random collection of delay and reverb effects (GVST GRevDly, Cockos ReaDelay, Lexicon Random Hall, and Dead Duck Delay) chained together to create a trippy-sounding wash that I liked on a purely subjective level. Honestly, I didn't sweat bullets over all the little internal parameters, yet I was nonetheless able to fit this wacky effect into the mix using some return-channel EQ and by riding the effect-send level for different sections. Had I used the chorus's send level for the verse, say, the mix would have felt swamped in echoes, and I'd probably have tried to reduce the delay feedback and reverb decay — at which point that effect wouldn't have functioned as well for the choruses!
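At the heart of most of those chained effects sits the humble feedback delay, which is simple enough to sketch in a few lines. This generic version (not any of the plug-ins named above) feeds each echo back into the delay line at a reduced level, so the repeats decay exponentially into a tail:

```python
def feedback_delay(x, delay_samples, feedback=0.5, mix=0.4):
    """Minimal feedback delay: each echo recirculates at a reduced
    level, so repeats decay exponentially into a tail."""
    buf = [0.0] * delay_samples   # circular delay line
    out = []
    for i, s in enumerate(x):
        echo = buf[i % delay_samples]
        buf[i % delay_samples] = s + feedback * echo  # recirculate
        out.append(s + mix * echo)                    # dry plus wet
    return out
```

Chaining two or three of these with unrelated delay times, then adding reverb, quickly blurs the discrete repeats into exactly that kind of trippy wash.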
That said, I did use one sneaky little mixing trick that really comes into its own when you want to keep a lead vocal sounding upfront, despite copious long-tail effects. This involved inserting a ducker on that main delay/reverb effect's return channel and triggering its side-chain from the dry lead-vocal signal. This caused the effect level to reduce a little whenever the lead vocal was singing, making the voice sound a little closer and clearer, but without diminishing the nice complex effect tail following each vocal phrase.
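Here's how that ducker might look in code: an envelope follower on the dry vocal drives gain reduction on the wet return. The depth and time constants are illustrative, and a real plug-in ducker would offer threshold and knee controls too.

```python
import math

def duck_return(effect_return, dry_vocal, fs, depth=0.5,
                attack=0.01, release=0.3):
    """Sketch of the effect-return ducker: follow the dry vocal's
    envelope and pull the wet return down while the vocal is active,
    so the tail blooms back between phrases."""
    atk = math.exp(-1.0 / (attack * fs))
    rel = math.exp(-1.0 / (release * fs))
    env = 0.0
    out = []
    for wet, dry in zip(effect_return, dry_vocal):
        target = abs(dry)
        coeff = atk if target > env else rel    # duck fast, recover slowly
        env = coeff * env + (1.0 - coeff) * target
        gain = 1.0 - depth * min(env, 1.0)      # louder vocal, more ducking
        out.append(wet * gain)
    return out
```

The slow release is what preserves the effect tail: the wet level only creeps back up once the vocal phrase has actually finished.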
My other lead-vocal effects included a basic chamber setting from Lexicon's PCM Native bundle, a four-tap 3/16 tempo-sync'ed ping-pong delay, and a touch of Harmonizer-style stereo-widening — each return channel having its own EQ and automation to slot it into the mix. Moreover, automating these effects levels allowed me to support the section dynamics, for example by giving the vocal progressively more reverb for each verse, but then heightening the comparative 'size' of each chorus entry by drying things up during the few bars preceding it.
If you're willing to ride your effects levels adaptively in response to changes in the arrangement, the exact parameter settings you use become less critical...
Another little secret for achieving an apparently effects-laden production sound without too much clutter is to use lots of distinct effects sparingly, rather than using a trowel to apply high levels of just a few! This is a particularly powerful tactic in projects with lots of backing-vocal details. You see, it's much easier to fit some ludicrously extended evolving multi-effect into your mix if it's only applied to an incidental vocal snippet, and those kinds of ear-catching 'effects spins' also serve to draw attention to small vocal-arrangement details that might otherwise be missed — so it's a win-win situation. In this remix, for instance, I used more than a dozen different vocal effects, often combining a feedback delay with other effects such as reverb, chorus, modulated filtering, tremolo, auto-panning, and pitch-shifting. Most of those only appear for a few moments, though, so each still has the elbow room to make an impression, both highlighting the vocal part it's applied to and contributing to the general sense of spaciousness and sonic nuance.