The most important thing that you need to get right is the balance between the length of the reverb and its overall level across all the tracks in the mix. Most people who send mixes in for Mix Rescue tend to have misjudged this balance, either by having the reverb too long, so that they can't fade it up far enough without it washing out the whole mix, or by having it too short, so that they can't get a full sound without distancing their tracks to the horizon. Almost every reverb processor has some kind of control to change the length of the reverb (often labelled Decay Time or Reverb Time), so one of the most important things you can do is to experiment with different reverb length settings, juggling the return channel's fader in tandem, to find the best balance between these two parameters. In fact, this is something I often find myself coming back to late in the mix, as it can be difficult to judge properly until the comparative reverb levels for all the instruments are set up.
If you have a listen to the ReverbLength audio files you can hear how the length of a single reverb can affect the fullness of the mix, given a fixed effect-return level. ReverbLengthShort leaves the mix a bit lacking in warmth, while at the other extreme ReverbLengthLong goes over the top, swamping the details in the mix and giving itself away as an unnatural effect. ReverbLengthMedium strikes a balance between these two extremes and therefore sounds more successful in context. I've also created a file of the same section with the reverb bypassed so that you can hear how it's contributing to the song's blend. (Incidentally, if you're initially having difficulty distinguishing between the different sets of audio files, try importing all of them into your own sequencer so that they're all playing back at once, and then use your mixer solo buttons to switch between them while they're playing. This makes subtle differences between the files much more apparent.)
The next most common thing I do with any reverb is adjust its tonality to suit the track. Equalisation controls are often built into the reverb processor itself, but I usually prefer the extra flexibility afforded by a separate equaliser following the reverb in the return channel. With modern commercial styles I almost always cut away some of the low frequencies with a high-pass filter set somewhere in the 100-300Hz range, simply because it allows me to keep the required focus and punch of kick drums and bass lines uncompromised. I often cut high frequencies as well, either with a low-pass filter or a high shelf. This is partly because it helps make the reverb less audible as an added effect (particularly in response to vocal consonants and high percussion), but also partly because it has the psychological effect of making the reverb seem further away from the listener than the brighter dry sounds.
However, in addition to cutting high end and low end, it can also make a great deal of sense to sculpt the reverb return's tonality even further if you find that it's colouring the mix's overall tone undesirably. Another reason for doing this is that the fashion these days is for reverb to be pretty inconspicuous, but it still needs to be high enough in level to get the instruments to gel properly. If your reverb has a prominent frequency-response peak where little else is happening in your mix, this will make the reverb effect too audible well before the overall reverb level is high enough. A few well-placed reverb-return EQ cuts once the mix is up and running can therefore really pay dividends if you're after an up-front production sound that is nevertheless still cohesive.
To demonstrate the impact of these kinds of EQ changes, I used the same reverb effect from the previous example, on which I'd already used all the types of EQ I've been talking about, to create the ReverbEQFull audio file. I then dropped out each of the three filter bands (a high-pass filter at 240Hz, a very gentle low-pass filter rolling off from around 7kHz, and a 5dB peaking cut over a one-octave band at 580Hz) to generate the ReverbEQHPFOut, ReverbEQLPFOut, and ReverbEQPeakCutOut files.
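If you like to experiment in code, here's a rough sketch of what that kind of reverb-return chain might look like, using Spotify's open-source pedalboard Python library. The filter frequencies are taken from the example above, but the reverb settings and the one-octave-to-Q conversion (roughly Q = 1.4) are only illustrative assumptions, not the exact processing behind the audio files:

```python
# Sketch of a 100%-wet reverb return followed by the three EQ stages described
# above, using Spotify's pedalboard library (all settings are illustrative only).
import numpy as np
from pedalboard import Pedalboard, Reverb, HighpassFilter, LowpassFilter, PeakFilter

SAMPLE_RATE = 44100

reverb_return = Pedalboard([
    Reverb(room_size=0.6, wet_level=1.0, dry_level=0.0),       # fully wet: this is a return channel
    HighpassFilter(cutoff_frequency_hz=240),                   # keep kick and bass focus uncompromised
    LowpassFilter(cutoff_frequency_hz=7000),                   # gentle HF roll-off pushes the reverb back
    PeakFilter(cutoff_frequency_hz=580, gain_db=-5.0, q=1.4),  # roughly one-octave cut at 580Hz
])

# 'send' stands in for the summed send signal from the dry tracks.
send = np.zeros((2, SAMPLE_RATE), dtype=np.float32)
wet_return = reverb_return(send, SAMPLE_RATE)
```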
The final reverb parameter that I regularly reach for is the pre-delay setting, which simply delays the onset of the reverb reflections by a specified amount — the longer the pre-delay, the closer the dry sounds appear to be in comparison with the boundaries of the simulated room. Some reverb plug-ins have no pre-delay option at all, while others default it to zero, and if that's left unchanged the reverb psychologically positions any sound source much further away from the listener, effectively right against one of the boundaries of the simulated room. This isn't the only problem, though, because the almost instantaneous early reflections of a reverb without pre-delay also interact unpredictably with the dry sound in a way that can noticeably alter its tone. An immediate reverb onset can interfere with vocal intelligibility too, by blurring important consonants. Again, Mix Rescue candidates regularly encounter all these difficulties, simply because they ignore the pre-delay setting — and even if your reverb has no internal pre-delay, that's no excuse not to dial one in manually by chaining delay and reverb effects in series.
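By way of illustration, a manual pre-delay is easy to patch up in code too: a single fully wet, feedback-free delay in front of the reverb on the return channel simply shifts the reverb's onset later. This sketch again assumes the pedalboard library, with the 35ms figure borrowed from the audio example below and placeholder reverb settings:

```python
# Manual pre-delay: a feedback-free, fully wet delay ahead of the reverb on the
# return channel delays the reverb's onset without producing repeating echoes.
from pedalboard import Pedalboard, Delay, Reverb

predelayed_reverb_return = Pedalboard([
    Delay(delay_seconds=0.035, feedback=0.0, mix=1.0),   # 35ms pre-delay, delayed signal only
    Reverb(room_size=0.6, wet_level=1.0, dry_level=0.0), # placeholder reverb settings
])
```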
To hear how audible these factors are in practice, try comparing the ReverbPredelayIn and ReverbPredelayOut files. The first of these uses a 35ms pre-delay, while the second has none at all. To my ears, the vocalist takes a clear step backwards when the pre-delay is bypassed, and sounds less clear into the bargain.
The Long & The Short Of It
On the face of it, if you're trying to get your tracks to sound as though they're all roughly in the same space, sending to a single global reverb from all of them is a common-sense approach. However, in my experience this puts a lot more pressure on the engineer to select and tweak that single reverb to get respectable results, so I usually suggest to those starting out that it's actually easier if they use two. Let me explain how this works.
The idea is that the two reverbs each serve different purposes, and they can be mixed and matched to cope with a range of recording types within most typical projects. The first reverb is short (usually well under a second in length), with perhaps only 5-10ms of pre-delay. What this does is simply make disconnected sounds stick together more convincingly within the mix, as well as setting the distance between these sounds and the listener, but without making itself obviously audible as an added effect, given its minimal reverb 'tail'. (As a result, some engineers call this effect ambience rather than reverb.) The second reverb can then be set to give much more of a sense of an acoustic space, using a longer and perhaps slightly brighter reverb as required, but combining that with a fairly long pre-delay (maybe 30-70ms), to avoid the effect pushing the sounds it's applied to very much further away.
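In code, those two returns might be sketched out along these lines. Note that pedalboard's Reverb has no explicit decay-time control, so room_size is used here as a rough stand-in for reverb length, and every value is merely a starting point rather than a setting from the demo files:

```python
# Two contrasting reverb returns: a short 'ambience' for blending and distance,
# and a longer, pre-delayed 'space' reverb (all settings are starting points).
from pedalboard import Pedalboard, Delay, Reverb, HighpassFilter

ambience_return = Pedalboard([
    Delay(delay_seconds=0.008, feedback=0.0, mix=1.0),        # roughly 8ms pre-delay
    Reverb(room_size=0.2, wet_level=1.0, dry_level=0.0),      # short, minimal tail
    HighpassFilter(cutoff_frequency_hz=200),                  # keep the low end clean
])

space_return = Pedalboard([
    Delay(delay_seconds=0.050, feedback=0.0, mix=1.0),        # roughly 50ms pre-delay
    Reverb(room_size=0.8, damping=0.3, wet_level=1.0, dry_level=0.0),  # longer and a touch brighter
    HighpassFilter(cutoff_frequency_hz=200),
])
```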
Having these two reverbs on hand, you can then deal with a variety of different situations. For example, a bone-dry synthesizer track that belongs in the mix's background might need lots of short reverb to push it away from the listener, whereas a lead vocal might need only just enough to make it sound as if it belongs in the mix — indeed, it might have none at all if you want to achieve the most upfront sound, albeit at the risk of it sounding disconnected from the record as a whole. Both tracks may need a bit of the longer reverb, though, if you're trying to make them sound natural together.
To take another example, drum overhead mics that already have a lot of room sound on them could easily warrant no added reverb at all (although you might try to match the sound of the longer reverb to them somewhat, to get other, drier sounds to work alongside), but some of the accompanying close mics may benefit from some of the shorter variety, to avoid them advancing too far forward in the mix perspective. A retriggered drum sample, however, would probably need both reverbs, carefully applied, to blend it convincingly with the rest of the kit: the shorter reverb would primarily set its distance, while the longer could help to retrospectively teleport it into the original recording room.
Clearly, both of your reverbs will need to be tweaked to suit the track, but as long as you stay focused on their respective purposes you shouldn't go too far wrong. If the shorter reverb can't blend tracks together before it becomes too audible, try further reducing its reverb time or dialling in some EQ cuts on the reverb return channel. If the longer reverb is making things sound distant, you can take off some high end to push it further into the background; if it's making things sound woolly, adjust that level/length ratio or raise the cutoff of the return channel's high-pass filter.
To demonstrate this dual-reverb setup in practice, I've done a rough mix of the chorus of my little demo project using just these two effects, to create the DualReverbsBoth file, but I have also then made two further files (DualReverbsShortOnly and DualReverbsLongOnly) designed to show what each effect is contributing in isolation. The DualReverbsDry file, as you might expect, is the same section without any reverb at all, just for reference.
How Much, On Which Instruments?
Many people eschew reverb on kick and bass entirely, to avoid low-end mush, but there's no need to be hard-and-fast about this and risk throwing the baby out with the bath water. If a little reverb helps to achieve a more satisfactory blend of these instruments with the rest of the track, it's no bad thing, just as long as you keep any undesirable bass overhang in check with your return-channel EQ.
One type of track that I rarely put reverb on, though, is any background synth pad I might be using. These invariably sink into the track without any extra help, and because they already tend to create a homogeneous blanket of sound, even large amounts of reverb will seldom be noticeable — all the reverb does is make the part sound as if it's dragging behind the beat! If a synth pad is too far forward in the mix, a more effective recourse is simply to shelve off a bit of high end with EQ.
Probably the most important thing to say about using any reverb, though, is that it's not a bad idea to err on the side of using too little in general, particularly if you anticipate using mastering-style dynamics processing on the final mix at a later stage, because this will tend to increase the levels of mix details such as reverb tails. In a lot of cases where an obviously reverberant sound isn't required, it's quite a useful little rule of thumb to set levels so that you only really notice the reverb when you mute its return channel — that way you can be pretty sure it's only supporting, rather than overwhelming, the dry tracks.
When you're trying to finesse reverb levels in your track, another handy trick suggested by top engineers such as Geoff Emerick and Alan Parsons is temporarily to mute the most prominent tracks in the mix, typically drums, bass, and/or lead vocal. This lets you hear more clearly whether there are any less-than-ideal reverb balances amongst the background instrumentation.
Blending Your Mix With Delay Effects Instead
Delay is a much simpler effect than reverb; most of the time you can pretty much set the delay time (the distance between the echoes) and the feedback level (the number of echoes) and you're off! Perhaps it's because of this simplicity that so many musicians ignore it when it comes to the mix, or maybe it's just that they don't want any kind of distracting 'echo effect'. This is a shame, however, because delay is almost as useful as reverb when it comes to gluing instruments together at mixdown. In fact, in some senses delay is superior to reverb for many dry-sounding modern styles, as it can achieve cohesion without any obvious reverb tail, leaving individual sounds more upfront, distinct, and raw-sounding. It also tends to leave the mix sounding much clearer, because it doesn't fill up all the gaps in the stereo field in the way reverb tends to.
Although there are many uses for mono delays, if you're looking to use delay effects for subtle blending tasks, you'll get the most transparent results if you use them in stereo, such that if you send any panned or stereo tracks in your arrangement to the delay channel, the echoes return with the same stereo positioning and spread. This can be fiddly to manage in hardware setups, even if you have a stereo delay unit, as very few mixers have stereo sends, so you may need to carefully juggle the levels of two separate send controls per channel to maintain the correct stereo image in the delay return. In computer systems, things are usually a little easier, although it pays to check that it's all working as you hope. I was using an Apple Logic Pro system very recently, for example, and could find no straightforward way of panning the send from a mono track across the channels of a stereo delay effect — although whether this problem was my fault or the software's is anyone's guess!
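To make the routing idea concrete, here's a bare-bones NumPy sketch in which the send is taken after panning, so that the echoes return in the same stereo position as the dry track. The pan law, the stand-in signal and the send level are all hypothetical examples rather than anything from the demo mix:

```python
# Taking the delay send *after* panning preserves the dry track's stereo image,
# because both channels of the send are then delayed identically.
import numpy as np

def constant_power_pan(mono: np.ndarray, pan: float) -> np.ndarray:
    """Pan a mono signal to stereo; pan runs from -1.0 (left) to +1.0 (right)."""
    angle = (pan + 1.0) * np.pi / 4.0
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)])

def stereo_delay(stereo: np.ndarray, sample_rate: int, delay_s: float) -> np.ndarray:
    """Apply the same delay to both channels, leaving the stereo image untouched."""
    shift = int(delay_s * sample_rate)
    return np.pad(stereo, ((0, 0), (shift, 0)))[:, :stereo.shape[1]]

sample_rate = 44100
guitar = (np.random.randn(sample_rate) * 0.1).astype(np.float32)  # stand-in mono track
panned = constant_power_pan(guitar, pan=0.5)                      # panned half-right
echoes = stereo_delay(panned * 0.3, sample_rate, delay_s=0.25)    # post-pan send, roughly -10dB
mix = panned + echoes                                             # dry track plus delay return
```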
Again, you could use only a single delay effect to pull your mix together, but I've found that setting up a couple of contrasting effects actually makes it easier to get results quickly: the first is a short 'slapback' delay, with 50-100ms delay time and zero feedback; and the second is a longer delay with some feedback and a delay time synchronised to the song's tempo. The short delay operates much like the short reverb I've already discussed, although you need to be careful feeding high levels of percussive sounds to it, as these can begin to sound as though they're flamming, interfering with the track's rhythmic pulse. Because the longer delay is tempo synchronised, it tucks itself into the mix in a very transparent way, creating the same kind of warmth and sustain as the longer of my two reverbs, but without adding much in the way of a sense of real space.
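Working out a tempo-synchronised delay time is simple arithmetic, since one beat lasts 60,000/bpm milliseconds. A little helper like the one below (the note values chosen are just examples) makes it quick to audition different subdivisions:

```python
# Convert song tempo to delay times: one beat = 60000/bpm milliseconds.
def synced_delay_ms(bpm: float, beats: float = 1.0) -> float:
    return 60000.0 / bpm * beats

bpm = 120
print(synced_delay_ms(bpm, beats=1.0))   # quarter note:   500.0 ms
print(synced_delay_ms(bpm, beats=0.75))  # dotted eighth:  375.0 ms
print(synced_delay_ms(bpm, beats=0.5))   # eighth note:    250.0 ms
```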
Much like the reverbs, delays also benefit from judicious EQ'ing to suit the track in hand. Fortunately, exactly the same principles apply when massaging the EQ settings into shape for either effect, so there's nothing more that needs to be said except that you should give it the time it needs. To illustrate the different flavour that delays offer compared to reverb, I've again set up a similar set of rough mixes of my demo song's chorus, but this time using the two delay effects: they're the ones with the DualDelay file names.
Bringing It All Together
Many professional mix engineers have a selection of standard effects that they set up before they even start to mix, having learnt from experience which units and algorithms have reliably delivered the results they need on project after project. The four specific effects that I've described here are designed to be fairly all-purpose in this way, and can be used in tandem to tie together mixes in a variety of different styles. For example, if your recording is already fairly live-sounding and mostly needs 'gluing together', the shorter reverb and delay can come to the fore, whereas the longer variants can be faded up when more space or sustain are required to enliven something like a heavily-overdubbed pop or electronica record. Alternatively, the delays might take precedence where a track already has the necessary spatial character (or just needs to sound very upfront), but lacks a satisfying resonance and fullness. To show how these four effects can fit together, I've mixed that demo chorus section one more time using all four effects at once, to create the AllEffectsWet file, and you can use AllEffectsDry to get a perspective on how much difference they're making.
There remains one final very important point to make, though: the effects levels on most modern records don't stay static throughout, but actually adapt to suit changes in the arrangement and underscore the long-term ebb and flow of the mix. That means you shouldn't expect the reverb and delay balance in your choruses necessarily to translate directly onto a contrasting verse arrangement. To illustrate this, I've created three longer audio files of the demo song (the Automation files), taking in the first verse as well as the chorus. For AutomationInactive I've just kept the same effect settings I used in the chorus, and you can hear that the effect levels become a bit overbearing in the sparser arrangement. However, just fading down the levels of the two delays and the longer reverb during the verse using mixer automation easily sorts out this problem for the AutomationActive file, with the beneficial side-effect that as the effects levels fade back up for the chorus section it makes the sound a bit more expansive and 'widescreen'. Again, compare these two audio files with the effectless AutomationDry for an idea of what the effects are adding overall.
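If you wanted to prototype this sort of move outside a sequencer, the automation boils down to a time-varying gain on the effect returns. Here's a minimal sketch under assumed conditions (a 30-second excerpt, the chorus arriving at 20 seconds, and an illustrative 6dB verse reduction), none of which reflect the actual demo files:

```python
# A crude stand-in for mixer automation: hold the longer effects a few dB down
# through the verse, then ramp them back up to full level as the chorus arrives.
import numpy as np

def return_level_envelope(n_samples: int, chorus_start: int, ramp: int,
                          verse_gain_db: float = -6.0) -> np.ndarray:
    env_db = np.zeros(n_samples)                       # 0dB (full level) by default
    env_db[:chorus_start] = verse_gain_db              # quieter through the verse
    env_db[chorus_start:chorus_start + ramp] = np.linspace(verse_gain_db, 0.0, ramp)
    return 10.0 ** (env_db / 20.0)                     # convert dB to linear gain

sample_rate = 44100
envelope = return_level_envelope(n_samples=30 * sample_rate,
                                 chorus_start=20 * sample_rate,
                                 ramp=2 * sample_rate)
# Multiply the long-reverb and delay returns by 'envelope' before summing them in.
```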
Room For Improvement
Even with audio files to demonstrate some of the factors involved, there's a limit to how far you can improve your mixing technique without giving it some practice, so if you've found the application of reverbs and delays to be a bit of a black art so far, try out the four-effect setup I've advocated above on a few of your own tracks. With a little experimentation, you'll find that your ears will begin to attune themselves to the more subtle characteristics of your mix, so that you can balance the effects levels for each track much more successfully.
Many thanks to Sarah Richardson for performing the vocals for this article's demonstration mix.
Next Month: Top Producers Talk Reverb
While the reverb techniques introduced here are a good place to start for straight-ahead mixing tasks, both natural and artificial reverbs play a much larger role in most professional productions in practice, having as much impact on the recording process as on the mixdown. So if this article has left you hungry for more, stay tuned for next month's in-depth special feature , in which I'll be comparing insider tips and tricks from more than 70 of the world's top producers. Just don't expect them all to agree with each other...