If you've ever spent hours mixing only to be confronted with a wall of mud, you might need to think harder about how to use reverb and delay in your mixes - and some simple tricks can yield dramatic results.
Reverb and delay are arguably the most common effects used at mixdown, but because they find so many different uses in this context they can seem bewildering to musicians who are still in the process of getting to grips with the fundamentals of studio production — those setting up a home studio, perhaps, or those enrolled in a music technology course for the first time.
Part of the problem inevitably stems from the wide range of different devices available that can supply these types of effects, and from their frequently inscrutable editing parameters. There is, however, an enormous amount of information on hand to demystify such technicalities, not least in Sound On Sound's on-line article archive (see the 'Further Reading' box for some suggestions). Furthermore, the preset-led nature of many effects units these days makes it unnecessary for the beginner to delve very far into their algorithmic innards, and to be honest I think there are quicker results to be gained at the outset by working from presets, and leaving most of the effects parameters well alone!
As I see it, the more pressing difficulty when starting out is dealing with basic practical questions such as how many different effects to use, which effects to apply to which instrument, and how to decide on suitable levels. So in this article I'll be trying to eliminate some of the guesswork by suggesting a basic general-purpose approach to using reverb and delay while mixing. In the process I'll pinpoint some things to watch out for when surfing reverb presets, as well as highlighting the handful of effects parameters and techniques that make the biggest impact with the least effort.
However, I've always felt that there's only so much you can communicate in print alone when you're dealing with mixing techniques, so I've also put together a bunch of audio examples so that you can judge for yourself how useful each of my proposed methods is in practice. You can download them in MP3 or WAV format from the SOS web site at www.soundonsound.com/sos/jul08/articles/reverb1audio.htm.
On the most fundamental level, both delay and reverb are about adding the characteristics of an acoustic environment, either by creating simple echoes or by simulating more complex patterns of sonic reflections. The reason these effects are usually so important at mixdown is because the individual parts in most modern multitrack projects communicate very little in the way of a common sense of space, and as such sound a bit 'dislocated', rather than seeming to belong on the same record. Obviously, synthesizers and sampled sounds often have no sense of acoustic realism to them at all, but even miked instruments are often recorded very close up, to reduce room reflections as much as possible, allowing decisions about the nature of the production's overall acoustic space to be deferred until the final mixdown.
For this reason, the primary objective of reverbs and delays is to reconnect tracks that have no inherent connection by giving them some shared acoustic characteristics, and it's this task that's the subject of the article at hand. Naturally, there are creative applications of reverbs and delays too, but these are window-dressing in most mixes (as well as being very much more a matter of personal taste), and will do your mix little good if the main edifice doesn't really cohere properly.
Because the underlying aim is to give the separate tracks something in common, it makes sense to set up your plug-ins or hardware processors so that the same effect can be applied to multiple tracks at the same time. Whether you're using a hardware or a software mixer, the manner of doing this is pretty much the same. First of all you set up the output of your effects processor to feed a spare stereo mixer channel; then you set up a separate mix of your tracks specifically for the effects processor and send it to the unit's inputs, whereupon the effected signal feeds back into the mix. By changing the levels of the different tracks in the send mix, you determine how much of the effect is added to each track.
In hardware systems, the mixer will have auxiliary send controls which allow you to create a number of independent effects-send mixes, each mix appearing on a separate output socket. By connecting different effects units to the mixer's different auxiliary-send outputs, you can drive several independent effects at once, assuming that you have enough free mixer inputs through which to return their outputs. In software, a separate mixer channel usually needs to be created to hold the effect plug-in, whereupon auxiliary sends can be created on each relevant mixer channel to feed it. (This is exactly the kind of setup I used in Cockos Reaper to create my audio examples, using a section of an otherwise dry multitrack project.)
Irrespective of which kind of system you work in, though, there are two important things that you need to bear in mind if you're going to ensure that this kind of effect configuration (usually called a 'send effect' or 'effect loop') works properly. The first thing is that you need to make sure the processors or plug-ins you're using only output effects, not a mix of processed and unprocessed signals, otherwise changing any auxiliary send level will also have an impact on that track's overall volume. Some effects units have separate level settings for effected ('wet') and uneffected ('dry') signals, while others offer a Mix Balance control, which needs to be set to one extreme (usually labelled something like '100 percent wet') to stop any unprocessed signal breaking through.
The second thing to ensure is that each channel's auxiliary send is taken from a point in the signal path after the channel's fader — in other words, that you use what is called a 'post-fade' auxiliary send. That way the amount of effects for any instrument will vary naturally as its channel fader is moved. If you fed the auxiliary send from before the fader, then you could, for example, fade a track completely down and you'd still be hearing its reverb — rarely a desirable state of affairs except for the occasional special effect.
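To make that post-fade logic concrete, here's a minimal sketch in Python of a send-effect loop, assuming each track is just a list of samples and the effect is a black box that returns a 100 percent wet signal. All the names here are illustrative, not from any real mixer's API:

```python
# A sketch of a post-fade send-effect loop: each track passes through its
# channel fader, and the send mix is tapped AFTER the fader, so effect
# levels track the fader automatically.

def mix_with_send_effect(tracks, faders, sends, effect, return_level=1.0):
    """Sum tracks through their faders, build a post-fade send mix,
    run it through a wet-only effect, and add the return channel."""
    n = len(tracks[0])
    dry = [0.0] * n
    send_mix = [0.0] * n
    for track, fader, send in zip(tracks, faders, sends):
        for i, sample in enumerate(track):
            level = sample * fader        # channel fader first...
            dry[i] += level
            send_mix[i] += level * send   # ...so the send is post-fade
    wet = effect(send_mix)                # must return effected signal only
    return [d + w * return_level for d, w in zip(dry, wet)]

# A toy stand-in "effect": one quiet echo two samples later, wet-only.
def toy_effect(x):
    return [0.0, 0.0] + [0.5 * s for s in x[:-2]]
```

Because the fader is applied before the send is tapped, pulling a track's fader down to zero silences both its dry signal and its effect contribution, exactly as described above; a pre-fade send would leave `send_mix` unchanged when the fader moved.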
Now that we're clear on how to set up the necessary connections, it's time to start considering the effects themselves. Let's start by looking at how you can use just reverb to draw a final mix together, and then we'll build on that to show the subtly different possibilities that are afforded by delay effects.
As I've already mentioned, there are enough things for newcomers to mixing to worry about without programming their own reverbs from scratch, so I would certainly recommend starting from presets where possible. However, this tactic only lets you off the hook to a certain degree, because it's still up to you to select the right processor and preset for the task. Here are a few tips.
The first, and probably most useful, thing I can say is that you should ignore the preset names and instead try to imagine the kind of space you want your mix to inhabit — picturing a real environment can help focus the mind here, although this may not help as much if you're trying to create a more other-worldly sound. A wrong choice in this regard can be almost impossible to sort out during mixing, whereas a reverb with the right kind of inherent acoustic signature but the wrong tone and/or length can usually be tweaked into better shape comparatively easily. It's not uncommon for me to wade through a couple of dozen presets before I find one that instinctively feels like it fits the mix in hand, and it's vital that you don't hurry this process.
Beyond that rather intangible decision, though, there are a few other more down-to-earth things to consider. First of all, if you have a choice of reverb processors or plug-ins, be wary of any that produce a metallic sort of sound, particularly in response to noisy tracks like drums. To show what I mean by this, let me turn to the first of my audio examples: the Reverb1 and Reverb2 audio files. The former has a pronounced metallic ring to it, whereas the latter (while still far from perfect) is a bit better behaved in this regard, and is likely to prove much more usable. The problem with metallic resonances is that, by the time the reverb is at a level where it's doing its job, the overtones become too clearly audible, unpleasantly colouring the mix as a whole and making the effect sound too obvious. Reverbs with obvious resonant 'character' do have their uses at the mix, but typically for other, more specialised tasks beyond the scope of this article, so it's best to steer clear of them to begin with. (It's worth pointing out that the Reverb1 file also veers off to one side of the stereo image as it decays, which isn't ideal either.)
Another basic principle when looking for reverbs that will bind a mix together is to tread carefully with any that seem to have very prominent frequency extremes. Neither very high frequencies nor very low frequencies are much use for this purpose, the former tending to make the reverb too audible in its own right, and the latter reducing punch at the low end of the mix where definition is normally really important.
If you find that you're struggling with effects-related jargon, or you just want to know a bit more about different reverb types and plug-in parameters, head over to the SOS web site at www.soundonsound.com and check out the extensive article archive, which has thousands of free-to-view articles for you to browse. You can search the articles yourself, using the search function, but if you're short on time then here are a few of the most useful on the subject of reverb and delay:
If you're lucky, you might have selected a reverb preset that's perfect for your track. In my experience, though, no preset ever seems to fit the mix like a glove, and I routinely tweak the reverb sound in a variety of ways while mixing, to make it match better. What I also find is that amongst the forest of reverb parameters frequently provided, some end up being much more useful than others, so here are a few pointers for getting the quickest results.
The most important thing that you need to get right is the balance between the length of the reverb and its overall level across all the tracks in the mix. Most people who send mixes in for Mix Rescue tend to have misjudged this balance, either by having the reverb too long, so that they can't fade it up far enough without it washing out the whole mix, or by having it too short, so that they can't get a full sound without distancing their tracks to the horizon. Almost every reverb processor has some kind of control to change the length of the reverb (often labelled Decay Time or Reverb Time), so one of the most important things you can do is to experiment with different reverb length settings, juggling the return channel's fader in tandem, to find the best balance between these two parameters. In fact, this is something I often find myself coming back to late in the mix, as it can be difficult to judge properly until the comparative reverb levels for all the instruments are set up.
If you have a listen to the ReverbLength audio files you can hear how the length of a single reverb can affect the fullness of the mix, given a fixed effect-return level. ReverbLengthShort leaves the mix a bit lacking in warmth, while at the other extreme ReverbLengthLong goes over the top, swamping the details in the mix and giving itself away as an unnatural effect. ReverbLengthMedium strikes a balance between these two extremes and therefore sounds more successful in context. I've also created a file of the same section with the reverb bypassed so that you can hear how it's contributing to the song's blend. (Incidentally, if you're having difficulty initially distinguishing the differences between the different sets of audio files, try importing all of them into your own sequencer so that they're all playing back at once, and then use your mixer solo buttons to switch between them while they're playing. This makes subtle differences between the files much more apparent.)
The next most common thing I do with any reverb is adjust its tonality to suit the track. Some equalisation controls are often built into the reverb processor, but I usually prefer the extra flexibility afforded by a separate equaliser following the reverb in the return channel. With modern commercial styles I almost always cut away some of the low frequencies with a high-pass filter set somewhere in the 100-300Hz range, simply because it allows me to keep the required focus and punch of kick drums and bass lines uncompromised. I also often cut high frequencies as well, either with a low-pass filter or high shelf. This is partly because it helps make the reverb less audible as an added effect (particularly in response to vocal consonants and high percussion), but also partly because it has the psychological effect of making the reverb seem further away from the listener than the brighter dry sounds.
However, in addition to cutting high end and low end, it can also make a great deal of sense to sculpt the reverb return's tonality even further if you find that it's colouring the mix's overall tone undesirably. Another reason for doing this is that the fashion these days is for reverb to be pretty inconspicuous, but it still needs to be high enough in level to get the instruments to gel properly. If your reverb has a prominent frequency-response peak where little else is happening in your mix, this will make the reverb effect too audible well before the overall reverb level is high enough. A few well-placed reverb-return EQ cuts once the mix is up and running can therefore really pay dividends if you're after an up-front production sound that is nevertheless still cohesive.
To demonstrate the impact of these kinds of EQ changes, I used the same reverb effect from the previous example, on which I'd already used all the types of EQ I've been talking about, to create the ReverbEQFull audio file. I then dropped out each of the three filter bands (a high-pass filter at 240Hz, a very gentle low-pass filter rolling off from around 7kHz, and a 5dB peaking cut over a one-octave band at 580Hz) to generate the ReverbEQHPFOut, ReverbEQLPFOut, and ReverbEQPeakCutOut files.
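For the curious, the shape of that return-channel EQ chain can be sketched with simple one-pole filters. This is only a rough stand-in for a proper plug-in equaliser — it omits the peaking cut, and the slopes are gentler than a typical plug-in filter — but the cutoff values follow the ranges discussed above:

```python
import math

# Rough sketch of the return-channel EQ described in the text: a high-pass
# to keep low frequencies out of the reverb return, plus a gentle low-pass
# to darken it and push it behind the dry sounds.

def one_pole_lowpass(x, cutoff_hz, sample_rate=44100):
    """First-order low-pass filter over a list of samples."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

def return_channel_eq(x, hpf_hz=240.0, lpf_hz=7000.0, sample_rate=44100):
    """High-pass at hpf_hz (input minus its lows), then low-pass at lpf_hz."""
    lows = one_pole_lowpass(x, hpf_hz, sample_rate)
    highpassed = [s - l for s, l in zip(x, lows)]
    return one_pole_lowpass(highpassed, lpf_hz, sample_rate)
```

Feeding the chain a sustained low-frequency (or DC) signal confirms the high-pass is doing its job: the output settles back towards zero, which is exactly the 'keep the reverb out of the kick and bass region' behaviour the text describes.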
The final reverb parameter that I regularly reach for is the pre-delay setting, which simply delays the onset of the reverb reflections by a specified amount — the longer the pre-delay, the closer the dry sounds appear to be in comparison with the boundaries of the simulated room. Some reverb plug-ins either have no pre-delay option or default it to zero, and left that way the reverb psychologically positions any sound source much further away from the listener, effectively right against one of the boundaries of the simulated room. This isn't the only problem, though, because the almost instantaneous early reflections of a reverb without pre-delay also interact unpredictably with the dry sound in a way that can noticeably alter its tone. An immediate reverb onset can interfere with vocal intelligibility too, by blurring important consonants. Again, Mix Rescue candidates regularly encounter all these difficulties simply because they ignore the pre-delay setting — and even if your reverb has no internal pre-delay, that's no excuse not to dial one in manually by chaining delay and reverb effects in series.
To hear how audible these factors are in practice, try comparing the ReverbPredelayIn and ReverbPredelayOut files. The first of these uses a 35ms pre-delay, while the second has none at all. To my ears, the vocalist takes a clear step backwards when the pre-delay is bypassed, and sounds less clear into the bargain.
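The 'chain a delay in front of the reverb' trick mentioned above amounts to nothing more than shifting the send signal later in time before it reaches the reverb. A minimal sketch, assuming `reverb` can be any wet-only reverb function you care to plug in:

```python
# Faking a pre-delay control by inserting a plain delay before the reverb,
# as suggested in the text. The delay is just silence padded onto the
# front of the send signal.

def add_predelay(signal, predelay_ms, sample_rate=44100):
    """Shift a signal later in time by padding silence at the start."""
    gap = int(sample_rate * predelay_ms / 1000.0)
    return [0.0] * gap + list(signal)

def predelayed_reverb(signal, reverb, predelay_ms=35.0, sample_rate=44100):
    """Run a wet-only reverb whose onset is delayed by predelay_ms."""
    return reverb(add_predelay(signal, predelay_ms, sample_rate))
```

At 44.1kHz, the 35ms pre-delay used in the ReverbPredelayIn example corresponds to roughly 1543 samples of added silence before the reverb's input sees any signal.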
On the face of it, if you're trying to get your tracks to sound as though they're all roughly in the same space, sending to a single global reverb from all of them is a common-sense approach. However, in my experience this puts a lot more pressure on the engineer to select and tweak that single reverb to get respectable results, so I usually suggest to those starting out that it's actually easier if they use two. Let me explain how this works.
The idea is that the two reverbs each serve different purposes, and they can be mixed and matched to cope with a range of recording types within most typical projects. The first reverb is short (usually well under a second in length) and with perhaps only 5-10ms of pre-delay. What this does is simply make disconnected sounds stick together more convincingly within the mix, as well as setting the distance between these sounds and the listener, but without making itself obviously audible as an added effect, given its minimal reverb 'tail'. (As a result, some engineers call this effect ambience rather than reverb.) The second reverb can then be set to give much more of a sense of an acoustic space, using a longer and perhaps slightly brighter reverb as required, but combining that with a fairly long pre-delay (maybe 30-70ms), to avoid the effect distancing sounds that it's applied to very much further.
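As a starting point, the dual-reverb recipe above might be captured as settings like these — the numbers simply fall within the ranges suggested in the text, and aren't presets from any particular plug-in:

```python
# Illustrative starting points for the two-reverb setup described above.
DUAL_REVERB_DEFAULTS = {
    "ambience": {              # short reverb: blends tracks, sets distance
        "decay_time_s": 0.6,   # "well under a second"
        "predelay_ms": 8,      # roughly 5-10ms
        "role": "blend and front-back positioning",
    },
    "space": {                 # long reverb: adds a sense of acoustic space
        "decay_time_s": 1.8,   # longer (and perhaps brighter), to taste
        "predelay_ms": 50,     # roughly 30-70ms, to avoid distancing sounds
        "role": "size and sustain",
    },
}
```

Treat these purely as a launch pad: the decay times in particular will need adjusting against the return level for each individual mix, as discussed earlier.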
Having these two reverbs on hand, you can then deal with a variety of different situations. For example, a bone-dry synthesizer track that belongs in the mix's background might need lots of short reverb to push it away from the listener, whereas a lead vocal might need just enough to make it sound as if it belongs in the mix — indeed, it might have none at all if you want to achieve the most upfront sound, albeit at the risk of it sounding disconnected from the record as a whole. Both of the tracks may need a bit of the longer reverb, though, if you're trying to make them sound natural together.
To take another example, drum overhead mics that already have a lot of room sound on them could easily warrant no added reverb at all (although you might try to match the sound of the longer reverb to them somewhat, to get other, drier sounds to work alongside), but some of the accompanying close mics may benefit from some of the shorter variety, to avoid them advancing too far forward in the mix perspective. A retriggered drum sample, however, would probably need both reverbs, carefully applied, to blend it convincingly with the rest of the kit: the shorter reverb would primarily set its distance, while the longer could help to retrospectively teleport it into the original recording room.
Clearly, both of your reverbs will need to be tweaked to suit the track, but as long as you stay focused on their respective purposes you shouldn't go too far wrong. If the shorter reverb can't blend tracks together before it becomes too audible, try further reducing its reverb time or dialling in some EQ cuts on the reverb return channel. If the longer reverb is making things sound distant, you can take off some high end to push it further into the background; if it's making things sound woolly, adjust that level/length ratio or crank up the return channel's high-pass filter.
To demonstrate this dual-reverb setup in practice, I've done a rough mix of the chorus of my little demo project using just these two effects, to create the DualReverbsBoth file, but I have also then made two further files (DualReverbsShortOnly and DualReverbsLongOnly) designed to show what each effect is contributing in isolation. The DualReverbsDry file, as you might expect, is the same section without any reverb at all, just for reference.
Many people eschew reverb on kick and bass entirely, to avoid low-end mush, but there's no need to be hard-and-fast about this and risk throwing the baby out with the bath water. If a little reverb helps to achieve a more satisfactory blend of these instruments with the rest of the track, it's no bad thing, just as long as you keep any undesirable bass overhang in check with your return-channel EQ.
One type of track that I rarely put reverb on, though, is any background synth pad I might be using. These invariably sink into the track without any extra help, and because they already tend to create a homogeneous blanket of sound, even large amounts of reverb will seldom be noticeable — all the reverb does is make the part sound as if it's lagging behind the beat! If a synth pad is too far forward in the mix, a more effective recourse is to just shelve off a bit of high end with EQ.
Probably the most important thing to say about using any reverb, though, is that it's not a bad idea to err on the side of using too little in general, particularly if you anticipate using mastering-style dynamics processing on the final mix at a later stage, because this will tend to increase the levels of mix details such as reverb tails. In a lot of cases where an obviously reverberant sound isn't required, it's quite a useful little rule of thumb to set levels so that the reverb only really draws attention to itself if you mute the reverb return — that way you can be pretty sure it's only supporting, rather than overwhelming, the dry tracks.
When you're trying to finesse reverb levels in your track, another handy trick suggested by top engineers such as Geoff Emerick and Alan Parsons is temporarily to mute the most prominent tracks in the mix, typically drums, bass, and/or lead vocal. This lets you hear more clearly whether there are any less-than-ideal reverb balances amongst the background instrumentation.
Delay is a much simpler effect than reverb; most of the time you can pretty much set the delay time (the distance between the echoes) and the feedback level (the number of echoes) and you're off! Perhaps it's because of this simplicity that so many musicians ignore it when it comes to the mix, or maybe it's just that they don't want any kind of distracting 'echo effect'. This is a shame, however, because delay is almost as useful as reverb when it comes to gluing instruments together at mixdown. In fact, in some senses delay is superior to reverb for many dry-sounding modern styles, as it can achieve cohesion without any obvious reverb tail, leaving individual sounds more upfront, distinct, and raw-sounding. It also tends to leave the mix sounding much clearer, because it doesn't fill up all the gaps in the stereo field in the way reverb tends to.
Although there are many uses for mono delays, if you're looking to use delay effects for subtle blending tasks, you'll get the most transparent results if you use them in stereo, such that if you send any panned or stereo tracks in your arrangement to the delay channel, the echoes return with the same stereo positioning and spread. This can be fiddly to manage in hardware setups, even if you have a stereo delay unit, as very few mixers have stereo sends, so you may need to carefully juggle the levels of two separate send controls per channel to maintain the correct stereo image in the delay return. In computer systems, things are usually a little easier, although it pays to check that it's all working as you hope. I was using an Apple Logic Pro system very recently, for example, and could find no straightforward way of panning the send from a mono track across the channels of a stereo delay effect — although whether this problem was my fault or the software's is anyone's guess!
Again, you could use only a single delay effect to pull your mix together, but I've found that setting up a couple of contrasting effects actually makes it easier to get results quickly: the first is a short 'slapback' delay, with 50-100ms delay time and zero feedback; and the second is a longer delay with some feedback and a delay time synchronised to the song's tempo. The short delay operates much like the short reverb I've already discussed, although you need to be careful feeding high levels of percussive sounds to it, as these can begin to sound as though they're flamming, interfering with the track's rhythmic pulse. Because the longer delay is tempo synchronised, it tucks itself into the mix in a very transparent way, creating the same kind of warmth and sustain as the longer of my two reverbs, but without adding much in the way of a sense of real space.
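The two delay effects described above are simple to sketch: the tempo-synchronised delay time is just arithmetic on the song's BPM, and the feedback setting determines how quickly the echoes die away. Here's a minimal wet-only version (suitable for a send loop), with illustrative function names of my own:

```python
# Delay-time arithmetic plus a simple wet-only feedback delay line.
# The 50-100ms slapback described above is the same delay line with
# feedback set to zero, i.e. a single echo.

def tempo_delay_ms(bpm, note_fraction=0.25):
    """Delay time for a note value at a given tempo (0.25 = quarter note)."""
    return 60000.0 / bpm * (note_fraction / 0.25)

def feedback_delay(x, delay_samples, feedback=0.4):
    """Return only the echoes: each echo feeds back a scaled copy of itself."""
    out = [0.0] * len(x)
    for i in range(len(x)):
        if i >= delay_samples:
            out[i] = x[i - delay_samples] + feedback * out[i - delay_samples]
    return out
```

So at 120bpm a quarter-note delay works out at 500ms, and an eighth-note at 250ms — which is why tempo-synchronised repeats tuck themselves so neatly under the track's own rhythmic events.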
Much like the reverbs, delays also benefit from judicious EQ'ing to suit the track in hand. Fortunately, exactly the same principles apply when massaging the EQ settings into shape for either effect, so there's nothing more that needs to be said except that you should give it the time it needs. To illustrate the different flavour that delays offer compared to reverb, I've again set up a similar set of rough mixes of my demo song's chorus, but this time using the two delay effects: they're the ones with the DualDelay file names.
Many professional mix engineers have a selection of standard effects that they set up before they even start to mix, having learnt from experience which units and algorithms have reliably delivered the results they need on project after project. The four specific effects that I've described here are designed to be fairly all-purpose in this way, and can be used in tandem to tie together mixes in a variety of different styles. For example, if your recording is already fairly live-sounding and mostly needs 'gluing together', the shorter reverb and delay can come to the fore, whereas the longer variants can be faded up when more space or sustain are required to enliven something like a heavily-overdubbed pop or electronica record. Alternatively, the delays might take precedence where a track already has the necessary spatial character (or just needs to sound very upfront), but lacks a satisfying resonance and fullness. To show how these four effects can fit together, I've mixed that demo chorus section one more time using all four effects at once, to create the AllEffectsWet file, and you can use AllEffectsDry to get a perspective on how much difference they're making.
There remains one final very important point to make, though: the effects levels on most modern records don't stay static throughout, but actually adapt to suit changes in the arrangement and underscore the long-term ebb and flow of the mix. That means you shouldn't expect the reverb and delay balance in your choruses necessarily to translate directly onto a contrasting verse arrangement. To illustrate this, I've created three longer audio files of the demo song (the Automation files), taking in the first verse as well as the chorus. For AutomationInactive I've just kept the same effect settings I used in the chorus, and you can hear that the effect levels become a bit overbearing in the sparser arrangement. However, just fading down the levels of the two delays and the longer reverb during the verse using mixer automation easily sorts out this problem for the AutomationActive file, with the beneficial side-effect that as the effects levels fade back up for the chorus section it makes the sound a bit more expansive and 'widescreen'. Again, compare these two audio files with the effectless AutomationDry for an idea of what the effects are adding overall.
Even with audio files to demonstrate some of the factors involved, there's a limit to how far you can improve your mixing technique without giving it some practice, so if you've found the application of reverbs and delays to be a bit of a black art so far, try out the four-effect setup I've advocated above on a few of your own tracks. With a little experimentation, you'll find that your ears will begin to attune themselves to the more subtle characteristics of your mix, so that you can balance the effects levels for each track much more successfully.
Many thanks to Sarah Richardson for performing the vocals for this article's demonstration mix.
While the reverb techniques introduced here are a good place to start for straight-ahead mixing tasks, both natural and artificial reverbs play a much larger role in most professional productions in practice, having as much impact on the recording process as on the mixdown. So if this article has left you hungry for more, stay tuned for next month's in-depth special feature, in which I'll be comparing insider tips and tricks from more than 70 of the world's top producers. Just don't expect them all to agree with each other...