I recently watched a YouTube video in which it was recommended that you match the input and output level of every plug-in in a mix session, so you can compare the sound before and after processing, without the loudness difference skewing your decision. I’ve seen a few plug-ins that do this for you automatically, which seems a great feature, too. Is this as good an idea as it seems?
SOS Reviews Editor Matt Houghton replies: It’s true that perceived loudness affects our perception of what sounds good. So on the face of it, you’d think you should level match every plug-in in this way. But in reality, such level-compensation can often work against you when mixing.
When mastering, or when applying master-bus processing during a mix, level matching is almost always a good thing, because if you match the perceived loudness of the signal before and after processing, you can be sure that your judgement of what works and what doesn't is not skewed by any differences in overall level resulting from your processing. Similarly, if you're working on the tone of an instrument while tracking/composing, then there's often a strong argument in favour of level matching.
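To illustrate what those auto-compensating plug-ins are doing under the hood, here's a minimal sketch in Python. It uses simple RMS as a crude stand-in for perceived loudness (real plug-ins typically use a psychoacoustic measure such as LUFS), and the function names and the halving "process" are purely illustrative:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def match_level(dry, wet):
    """Scale the processed (wet) signal so its RMS matches the dry signal.
    RMS is a rough stand-in for perceived loudness here; real plug-ins
    tend to use loudness measures such as LUFS instead."""
    gain = rms(dry) / rms(wet)
    return [s * gain for s in wet]

# Illustrative example: a 440Hz sine, "processed" by halving its level.
dry = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(4410)]
wet = [s * 0.5 for s in dry]
matched = match_level(dry, wet)

# After matching, the wet signal's RMS equals the dry signal's RMS,
# so any A/B comparison hears only the tonal change, not a level change.
print(round(rms(matched) / rms(dry), 3))
```

The key point is that the comparison is now "fair": whatever the processing did to the tone, it no longer gets an unfair advantage simply by being louder.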
In both of those scenarios, though, you're working on one source in isolation. When mixing, on the other hand, any change you make to the level of one source/channel impacts on various other elements in the mix. And note that it's not just overall level that matters: boosts and cuts can be applied at different frequencies, whether obviously via EQ or harmonic enhancers, or more subtly by dynamics processing clamping down harder on the louder, lower frequencies than on others. That leads to a conflict: a boost at one frequency adds level, but if you bring down the overall level to compensate for that boost you're attenuating every other frequency too, and that will affect the overall mix balance in some way.
It’s for this reason that I find that attempting to level-compensate when EQ’ing individual tracks while mixing is almost always counter-productive. For example, say I’m boosting somewhere around 2-2.5 kHz on a snare drum, specifically to add some bite to the drum and help it poke through the mix a little better. That move has to be judged on its relationship with every other sound. If I reduce the overall level of the snare to compensate for the frequency-specific level boost, then I’m also changing the interaction of the snare with other mix elements lower down and higher up in the spectrum. It might change, for example, how the kick and snare drums sit with each other. It might also impact on the way any cymbal bleed in the snare mic interacts with the cymbals captured by the overhead mics.
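That snare example can be put into rough numbers. The sketch below uses a hypothetical three-band split with made-up band levels: it boosts the 'bite' band by 4dB, then trims the whole channel so the overall RMS matches the original. The boosted band still comes out ahead, but every other band ends up attenuated, which is exactly the side-effect described above:

```python
import math

def db_to_lin(db):
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

def lin_to_db(lin):
    """Convert a linear amplitude factor to decibels."""
    return 20 * math.log10(lin)

# Hypothetical snare spectrum, crudely split into three bands
# (linear amplitudes, purely illustrative).
bands = {"low": 1.0, "bite (2-2.5kHz)": 0.5, "high": 0.7}

# Boost the 'bite' band by 4dB.
boosted = dict(bands)
boosted["bite (2-2.5kHz)"] *= db_to_lin(4)

def overall(b):
    """Crude overall RMS-style level across the bands."""
    return math.sqrt(sum(v * v for v in b.values()))

# 'Compensate' by trimming the whole channel back to the original level.
trim = overall(bands) / overall(boosted)
compensated = {k: v * trim for k, v in boosted.items()}

# The bite band is still up, but the low and high bands are now
# quieter than before, changing how the snare sits against the kick,
# the cymbal bleed and everything else in the mix.
for name in bands:
    print(f"{name}: {lin_to_db(compensated[name] / bands[name]):+.2f} dB")
```

With these particular numbers the untouched bands lose a little under 1dB each: not much in isolation, but enough to shift the snare's relationship with the kick below it and the cymbals above it.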
Similarly, if I’m EQ’ing to remove some low mids from the electric guitars in order to create more space for other instruments in the mix, then I don’t necessarily want to be pushing up the guitar level to compensate, because, firstly, that undermines what I’m trying to achieve (along with everything else in the guitar part, I’m raising the low mids that I just lowered!) and, secondly, it would change the balance of the guitars’ high-mids with the high-mid frequencies of, say, the snare, the vocals or various synth/keyboard parts. I’m much more interested in how the overall mix sounds as a result of the change than in making soloed comparisons of the guitars before and after processing.