Effects assignment in modern MIDI + Audio sequencers (DAWs) is so flexible that it can be daunting, so here's some advice on how to sort out the routing and order of your plug‑ins for common recording and mixing tasks.
When home recording first took off in a big way, most of us started out with a four‑track recorder, a simple mixer and a spring reverb, and then we started saving up for a compressor. Once we got the compressor we had plenty of time to read the manual and to learn everything about it before we could afford the next piece of kit, which was probably a gate or a delay unit. As each piece of kit was added, its possibilities were fully explored, so our studios held very few mysteries for us.
Today's virtual studio is very different. One day you may be a guitarist with just a little experience using a friend's Portastudio, and the next day you take delivery of a computer loaded up with Logic Audio or Cubase VST or one of the other cutting‑edge sequencing packages, complete with fully automated MIDI and audio mixing environment and a whole folder full of VST plug‑ins, the hardware equivalents of which you may never have even seen before. The word 'daunting' springs readily to mind.
If you find yourself in this situation, you could do a lot worse than visit the SOS back issues on our web site and check out the numerous articles that explain the functions and applications of the various effects and signal processors, especially EQ, compression and reverb, as these form the cornerstone of signal processing. After that, experiment with the plug‑ins one at a time and get to know what their key controls do. Most are relatively straightforward if you read up on the basic principles first and, fortunately, most plug‑ins have a limited number of controls, whereas hardware multi‑effects boxes often have an overwhelming number of adjustable parameters, especially some of the more sophisticated reverb units.
Even once you've got that far, however, it may not be obvious whereabouts in the signal chain to connect these plug‑ins, and what order to put them in when you want them to process the same signal. That's the focus of this article, so if you've found yourself asking these questions, the answers will shortly be revealed.
Effect Or Processor?
First comes the old 'effect or processor?' chestnut, which is so fundamental that I make no excuses for revisiting it here. Both hardware and virtual mixers allow you to connect plug‑ins via insert points in channels (and sometimes in groups/busses) and via one or more aux send/return loops. The send/return loop allows a single effect to be shared amongst as many mixer channels as you like, with a control in each channel determining the amount of effect to be added. Reverb is the most commonly used send/return effect, and the send/return configuration allows you to add more reverb to some tracks than others. The reason we need to differentiate between effects and processors is that, while either can be used in an insert point, only effects should normally be used in a send/return loop.
In general, 'effects' are delay‑based, and encompass reverb, delay, echo, and pitch‑shifting, as well as modulation treatments such as phasing, flanging, chorus and vibrato. (Though pitch‑shifting may not seem to be delay‑based, it actually works by chopping up the audio into tiny slices, delaying them by a small amount, changing their playback rate and then splicing them back together.) Effects plug‑ins almost always feature a mix control to balance the 'dry' (unprocessed) and 'wet' (effected) signals, so, in the case of reverb, you can adjust the level of the reverb added to the original sound.
When using an effect in a send/return loop, however, the dry sound already reaches the mix via the mixer channel itself, so the effect should be set with its mix control at 100 percent wet (effect only) so that only the effected sound is added as the channel's (post‑fade) send control is turned up. If, on the other hand, an effect is used in a channel, group or master insert point, the wet/dry balance is set up using the mix control on the plug‑in itself.
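If you find it easier to read code than block diagrams, here's a minimal Python sketch of the two routing schemes. Everything in it is invented for illustration: toy_reverb is a crude multi‑tap echo standing in for a real reverb plug‑in, the audio is just noise, and no actual DAW exposes its mixer like this.

```python
import numpy as np

def toy_reverb(x, mix, sr=44100, delay_s=0.05, feedback=0.5, taps=8):
    """Crude multi-tap echo standing in for a reverb plug-in.
    'mix' is the plug-in's wet/dry control: 0 = dry only, 1 = wet only."""
    d = int(delay_s * sr)
    wet = np.zeros(len(x))
    for n in range(1, taps + 1):
        # Each tap is a quieter, further-delayed copy of the input.
        wet += (feedback ** n) * np.concatenate([np.zeros(n * d), x])[:len(x)]
    return (1 - mix) * x + mix * wet

sr = 44100
vocal = np.random.randn(sr)    # stand-ins for two recorded tracks
guitar = np.random.randn(sr)

# Insert point: the plug-in sits in the channel path, so its own mix
# control sets the wet/dry balance for that one channel.
vocal_inserted = toy_reverb(vocal, mix=0.3)

# Send/return loop: each channel's post-fade send feeds one shared
# reverb set 100 percent wet, and only the effected signal comes back.
sends = {'vocal': 0.4, 'guitar': 0.1}          # per-channel send levels
bus = sends['vocal'] * vocal + sends['guitar'] * guitar
wet_return = toy_reverb(bus, mix=1.0)          # effect only, no dry
mix_out = vocal + guitar + wet_return
```

Notice that the shared reverb must be 100 percent wet: if its mix control added any dry signal, the balance of the whole mix would shift every time you adjusted a send.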
Processors have no mix control, because no dry signal is used — the output is entirely processed. The most common processor is EQ, but gates, compressors, panners and resonant filters are also processors. If there's no delay element and no mix control, it's pretty certain that you've got hold of a processor. Because it is not desirable to add the processed sound to the dry sound, processors are only used in insert points. If the dry sound were to be added, it would at best reduce the intensity of the process and, in the case of a digital mixer, it could result in a static flange‑like effect, because of the tiny time differences between the dry and effected signals.
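That static flange‑like effect is easy to demonstrate. The short sketch below (illustrative Python again, with an arbitrarily assumed 20‑sample plug‑in latency) sums white noise with a slightly delayed copy of itself and finds the resulting comb‑filter notch:

```python
import numpy as np

sr = 44100
noise = np.random.randn(sr)

latency = 20   # assumed plug-in delay, in samples
delayed = np.concatenate([np.zeros(latency), noise])[:len(noise)]

# 'Dry plus processed' when the processed path is late by 20 samples:
summed = noise + delayed
spectrum = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(len(summed), 1 / sr)

# Comb-filter notches sit at odd multiples of sr / (2 * latency),
# about 1102.5Hz here; the deepest dip below 2kHz lands right on it.
print(freqs[np.argmin(spectrum[:2000])])
```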
This will be familiar territory to many of our regular readers, but it is a vital point to get across, and while there are some workarounds that contravene these basic rules, you should endeavour to stick to them until you gain enough experience to understand the implications of breaking with convention.
"Order! Order!"
Often we want to use several plug‑ins in the same signal path, but it pays to think about the order in which these will be used. For example, if you want to combine a gate (to clean up pauses in a signal) with reverb, it's pretty obvious that the gate should come before the reverb. This way any small discontinuities caused by the gate clipping the ends of wanted sounds will tend to be masked by the sustaining effect of the reverb. If you were to reverse the order of the plug‑ins, your unwanted noises would be stretched out by the reverb and, unless the reverb was very short, the reverb tail might actually fill all the pauses that you originally intended to gate. Now you face a dilemma: either adjust the gate to let the reverb tail through without cutting it short (in which case the chances are your unwanted noises will also get through), or set it to kill the noise and accept that your reverb tails will be chopped off.
Putting the gate first clearly gives you a much easier ride and produces the end result you were looking for without affecting the reverb tails in any way at all. As a rule, you'd only put a gate after reverb if you wanted to create a deliberate gated reverb effect, but, as most reverb plug‑ins can emulate gated reverb, it's probably better to take the easy way out and let the reverb plug‑in do all the hard work.
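Here's the gate‑then‑reverb argument as a runnable sketch. The gate function is deliberately crude (a real gate has attack, hold and release controls, but the principle is the same), and the reverb is the same sort of multi‑tap stand‑in as before:

```python
import numpy as np

def gate(x, threshold):
    """Crude gate: mute any sample whose magnitude is below threshold."""
    return np.where(np.abs(x) >= threshold, x, 0.0)

def toy_reverb(x, sr=44100, delay_s=0.05, feedback=0.5, taps=8):
    """Multi-tap echo standing in for a reverb plug-in (dry + wet)."""
    d = int(delay_s * sr)
    wet = np.zeros(len(x))
    for n in range(1, taps + 1):
        wet += (feedback ** n) * np.concatenate([np.zeros(n * d), x])[:len(x)]
    return x + wet

sr = 44100
sig = np.zeros(sr)
sig[:sr // 4] = np.sin(2 * np.pi * 440 * np.arange(sr // 4) / sr)  # a note
sig += 0.01 * np.random.randn(sr)          # low-level noise in the pauses

easy = toy_reverb(gate(sig, 0.05))   # noise gated out, tail left intact
hard = gate(toy_reverb(sig), 0.05)   # tail now fights the gate threshold
```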
How about combining a gate and a compressor? On the face of it they should work either way around, and to an extent they do, but when you think about what's happening, it soon becomes obvious that one way is better than the other. When you are setting the threshold on a gate, the ideal situation is one in which the quiet sections to be gated are much quieter than the loud sections that will open the gate. The job of a compressor is to reduce the level difference between the loudest and quietest sounds, so if you were to compress before gating, it would make setting the gate threshold more difficult, because it would reduce the contrast between the loud and quiet sounds.
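Some invented numbers make the point. Using a simple static compressor curve (the threshold and ratio values are arbitrary), watch what happens to the contrast between the programme and the noise floor:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: levels above threshold are scaled down."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

loud, quiet = -6.0, -40.0                      # programme vs noise, in dB
print(loud - quiet)                            # 34dB of contrast to gate in
print(compress_db(loud) - compress_db(quiet))  # only 23.5dB afterwards
```

The gate now has 10dB or so less room to work in, and once make‑up gain is applied the noise floor rises as well.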
So far, then, we've established that gates come before compressors and reverbs, but which way around do you connect reverbs and compressors? The answer is that you can connect them either way, but the result will be subtly different. If you put the compressor before the reverb, you'll get the most natural result, as the dry sound will be reduced in dynamic range before reverb is added, but if you put the compressor after the reverb, you'll compress the reverb tail itself, which will have the effect of trying to pull up the reverb level as the reverb decays. This actually alters the shape of the reverb decay curve, and whether that is a good thing or not is a purely artistic decision.
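To sketch what that reshaping looks like, here's an idealised exponential reverb decay pushed through an envelope‑domain compressor; the decay time, threshold and ratio are all invented:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
tail = np.exp(-5 * t)            # idealised reverb decay envelope

def comp_env(env, threshold=0.1, ratio=4.0):
    """Compressor applied to an envelope: reduce anything above threshold."""
    out = env.copy()
    over = env > threshold
    out[over] = threshold * (env[over] / threshold) ** (1 / ratio)
    return out

reshaped = comp_env(tail)        # the compressor-after-reverb decay
# 'reshaped' starts quieter but decays more slowly, rejoining the
# natural curve once the tail drops below the threshold.
```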
Where Should You Insert The Equaliser?
What happens when we bring EQ into the equation? You might think it would help to put EQ before a gate because, if a lot of high‑frequency boost is adding noise, the gate will take care of it. In practice, though, a properly designed EQ plug‑in won't produce noise when the input is silent anyway, so there isn't a great deal of difference in noise performance whichever way around you connect them. A more useful deciding factor is that some EQ settings emphasise the difference between loud and quiet sounds, whereas others tend to reduce it: ask yourself which way around makes the gate threshold easier to adjust. For example, if you have a signal where the pauses are full of high‑frequency hiss and you want to EQ some top out of the signal anyway, doing this prior to gating will reduce the level of the hiss, making it easier to set the gate threshold.
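To put some invented figures on that: if a high cut attenuates the hiss far more than it attenuates the wanted signal, then EQ'ing before the gate widens the gap the threshold has to sit in.

```python
# All of these dB figures are made up purely for illustration.
signal_db, hiss_db = -10.0, -45.0
eq_signal_loss, eq_hiss_loss = 1.0, 12.0   # dB removed by the high cut

before = signal_db - hiss_db                                     # 35dB gap
after = (signal_db - eq_signal_loss) - (hiss_db - eq_hiss_loss)  # 46dB gap
print(before, after)
```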
This seems like a clear choice, but things become a little more confusing when compression is one of the effects in the chain, because putting EQ before compression produces a rather different result to putting EQ after compression. To illustrate this point, imagine a signal that's been EQ'd to add a lot of bass boost so that the kick drum part now seems a lot louder than it originally did. If you follow this with a compressor, the compressor will dutifully apply more gain reduction to the louder parts, in this case the kick drum, and it will tend to level out the sound, thus undoing some of the work of the EQ. Bright sounds occurring at the same time as the kick drum will also be pushed down more in level, so the actual outcome is more complex than it might at first appear. In certain circumstances, putting EQ before compression can produce musically interesting and useful results that are quite different to compressing before you equalise. As you might imagine, if you compress before equalising, the EQ will act on the compressed signal and EQ it normally without affecting the way the compressor works, so the effect of the EQ is likely to be clearer — in most cases, I find this way round produces the most musically useful result.
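Here are the two orderings as code, with a simple one‑pole bass boost standing in for the EQ and a crude instantaneous compressor. Both functions are illustrative stand‑ins rather than models of any particular plug‑in:

```python
import numpy as np

def bass_boost(x, sr=44100, cutoff=150.0, gain=2.0):
    """Boost lows by adding a one-pole low-passed copy of the signal."""
    a = np.exp(-2 * np.pi * cutoff / sr)
    low = np.zeros(len(x))
    for n in range(1, len(x)):
        low[n] = (1 - a) * x[n] + a * low[n - 1]
    return x + (gain - 1.0) * low

def compress(x, threshold=0.3, ratio=4.0):
    """Instantaneous compressor: squash magnitudes above the threshold."""
    mag = np.maximum(np.abs(x), 1e-12)
    g = np.where(mag > threshold,
                 threshold * (mag / threshold) ** (1 / ratio) / mag, 1.0)
    return x * g

mix = 0.2 * np.random.randn(44100)      # stand-in programme material

eq_first = compress(bass_boost(mix))    # boosted bass drives gain reduction
comp_first = bass_boost(compress(mix))  # EQ acts freely on the levelled signal
```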
If I were to combine gating, EQ, compression and reverb (or delay), my preferred order would tend to be gate, compression, EQ and then reverb/delay, though I might try swapping the EQ and compression just to see which gave the best result.
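If it helps to think of a channel strip as a function, that preferred order is simply composition. In this sketch, chain is a hypothetical helper, and the commented‑out stage names (gate_fx and so on) are placeholders for whatever processing callables you have to hand:

```python
from functools import reduce

def chain(*stages):
    """Compose plug-ins so the first one listed processes the audio first."""
    return lambda x: reduce(lambda buf, fx: fx(buf), stages, x)

# channel_strip = chain(gate_fx, compressor_fx, eq_fx, reverb_fx)
# out = channel_strip(track)   # swap compressor_fx/eq_fx to compare orders
```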
If distortion is one of the effects, then you really need to think about what you want to achieve. Distortion dramatically reduces the dynamic range of a sound, so gate it first if gating is necessary. Compressing before distortion will increase the average level of the signal, so decaying sounds will tend to distort for longer with pre‑compression, whereas compressing afterwards may do very little, as the signal is already quite heavily squashed. In fact, compression following distortion may only have an audible effect at the starts and ends of notes, where any noise will tend to be further exaggerated.
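Here's the pre‑ versus post‑distortion compression point as a sketch, with a tanh waveshaper standing in for the distortion and the same sort of instantaneous compressor as before; the drive, threshold and ratio figures are all invented:

```python
import numpy as np

def distort(x, drive=10.0):
    """Soft-clipping waveshaper: squashes dynamics, adds harmonics."""
    return np.tanh(drive * x)

def compress(x, threshold=0.3, ratio=4.0):
    """Instantaneous compressor: squash magnitudes above the threshold."""
    mag = np.maximum(np.abs(x), 1e-12)
    g = np.where(mag > threshold,
                 threshold * (mag / threshold) ** (1 / ratio) / mag, 1.0)
    return x * g

sr = 44100
t = np.arange(sr) / sr
note = np.exp(-3 * t) * np.sin(2 * np.pi * 110 * t)   # decaying note

sustained = distort(compress(note))   # pre-compression: distorts for longer
squashed = compress(distort(note))    # post: already near full level anyway
```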
Distortion adds lots of new harmonics, so EQ and filtering will have more effect if placed after the distortion than before it. The same is true of flanging, which produces a strong comb‑filtering effect that can be very dramatic on heavily distorted signals. By all means try putting the distortion after the EQ/filtering to see what effect you get, but don't expect it to be as spectacular. The classic exception to this rule is guitar wah‑wah, which is traditionally used before the guitar amplifier and hence before any distortion.
Heavy distortion produces a lot of high‑frequency harmonics that can actually be quite unpleasant, so following it by some high‑cut EQ or even a dedicated speaker simulator plug‑in can help produce a smoother result.
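As a rough illustration of that smoothing, here's the waveshaper again, followed by a one‑pole high cut standing in for high‑cut EQ or a speaker simulator (the cutoff frequency is picked arbitrarily):

```python
import numpy as np

def distort(x, drive=10.0):
    """Soft-clipping waveshaper: adds plenty of high harmonics."""
    return np.tanh(drive * x)

def high_cut(x, sr=44100, cutoff=4000.0):
    """One-pole low-pass standing in for high-cut EQ / speaker sim."""
    a = np.exp(-2 * np.pi * cutoff / sr)
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = (1 - a) * x[n] + a * y[n - 1]
    return y

sr = 44100
guitar = 0.5 * np.sin(2 * np.pi * 110 * np.arange(sr) / sr)

smoother = high_cut(distort(guitar))   # EQ after distortion tames the fizz
harsher = distort(high_cut(guitar))    # filtering first leaves the new
                                       # harmonics untouched
```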
Breaking The Rules
As you can see, there's no absolutely rigid order in which to connect things, as the creative person will always find a way to combine plug‑ins in a seemingly illogical order and yet still come up with a brilliant new effect. However, in everyday applications gating comes first and delay‑based effects tend to come last. EQ usually follows compression, but not always, so try both options and see which of them works best for you in your particular situation.
When combining delay‑related effects, there's plenty of room for experimentation, as most combinations produce musically interesting results, albeit different ones. Take flanger and reverb, for example. If you put the flanger before the reverb, the myriad delays created by the reverb will tend to smear the flanger effect, lending the reverb a strong shimmer rather than an overpowering 'whoosh'; the same is true of chorus if you want to create something more subtle. On the other hand, putting the modulation effect last will process the reverb output in a much more predictable way, as the cyclic nature of the modulation effect won't be diluted by the complexity of the reverb.
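If you want to try this, here's a minimal modulated‑delay flanger alongside the familiar toy reverb. Both are crude stand‑ins, with the depth and rate values picked out of the air:

```python
import numpy as np

def flanger(x, sr=44100, depth_ms=2.0, rate_hz=0.5):
    """Short delay whose length is swept by an LFO, mixed with the dry."""
    n = np.arange(len(x))
    sweep = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))
    delay = (depth_ms / 1000.0 * sr * sweep).astype(int)
    return 0.5 * (x + x[np.maximum(n - delay, 0)])

def toy_reverb(x, sr=44100, delay_s=0.05, feedback=0.5, taps=8):
    """Multi-tap echo standing in for a reverb plug-in (dry + wet)."""
    d = int(delay_s * sr)
    wet = np.zeros(len(x))
    for n in range(1, taps + 1):
        wet += (feedback ** n) * np.concatenate([np.zeros(n * d), x])[:len(x)]
    return x + wet

sig = np.random.randn(44100)
shimmer = toy_reverb(flanger(sig))   # flanger first: reverb smears the sweep
whoosh = flanger(toy_reverb(sig))    # flanger last: the cycle stays obvious
```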
The more familiar you become with your plug‑ins, the more intuitive connecting them together will be. While it's worth learning the basic rules, you should also take time to experiment, so that you can see what can be achieved by breaking them!