I’ve been reading about iZotope’s Neutron, and how it is described as automating the process of attenuating some frequencies of audio track B, so that the simultaneously played audio track A, which has signal in the same frequency range, sounds more prominent. I can’t get my head around the time-variant nature of the frequencies of track A. For example, if track A is a voice singing one note only and track B a piano holding one chord only, I understand that attenuating the piano’s frequency at the voice’s fundamental, and some of its harmonics, should help make the voice stand out more. But, as the voice is singing a sequence of notes, with the resulting harmonics changing over time as well... how does one ‘keep up’ with attenuating the piano’s frequencies over time?
For example, the EQ attenuation on the piano when the voice is singing at 100Hz may not be the ideal one for when the voice is singing a 400Hz note (perhaps... I’m only guessing!). Does the competent engineer ‘ride’ the frequency attenuations of the piano’s EQ to follow the voice? Or is it generally ‘set it and forget it’ in terms of the EQ on the piano, and that this level of detail just isn’t part of what’s done?
SOS Forum post
SOS Technical Editor Hugh Robjohns replies: It’s true that when two sounds occur simultaneously with a similar frequency spectrum, the louder one tends to ‘mask’ the quieter one, and this can cause difficulties in terms of retaining the clarity and focus of (say) a lead vocal. There are several ways to resolve the issue, and the optimum results are usually achieved by implementing a combination of them, but the best first step is almost always to minimise the problem at source, by fine-tuning the orchestration and arrangement of individual parts to avoid masking clashes in the first place! That typically involves a careful choice of the tonality and voicing of all the contributing instruments, as well as careful thought about the rhythms of some instruments and/or the chord inversions employed — all specifically to create both temporal and frequency ‘gaps’ so the dominant signal can shine through. For example, two rhythm guitar parts playing every quarter-note with heavy fuzz distortion aren’t going to leave any space, temporally or frequency-wise, for anything else to shine through!
Another common technique is to reduce the dynamic range of the dominant signal (and/or the sub-dominant ones), so the most important signal is always the strongest signal — and is thus the one that tends to mask the others. For example, the vocal track is usually relatively heavily compressed to ensure it sits above the backing, and it’s now commonplace to edit and tweak the levels of individual syllables to extend that dynamic range control to extremes that were impossible a couple of decades ago.
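To put a number on that dynamic-range reduction, here is a minimal sketch of the static gain curve of a downward compressor. The function name, threshold and ratio are arbitrary illustration choices, not anything specific to a particular product: above the threshold, the output level rises at only 1/ratio the rate of the input, which is what squeezes the vocal’s level swings into a narrower band.

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Gain reduction (in dB, zero or negative) applied by a simple
    downward compressor at a given input level.

    Below the threshold the signal passes untouched; above it, each
    1 dB of extra input yields only 1/ratio dB of extra output.
    """
    if level_db <= threshold_db:
        return 0.0
    # Output above threshold = (input excess) / ratio, so the gain
    # reduction is the difference between that and the full excess.
    return (level_db - threshold_db) * (1.0 / ratio - 1.0)
```

With these example settings, a vocal whose syllables swing across 12dB above the threshold emerges spanning only 3dB — which is what keeps it reliably the strongest (and therefore masking, rather than masked) signal.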
Where these techniques are inadequate on their own, EQ can be used to ‘de-emphasise’ frequency ranges in the backing instruments where clashes occur. In many cases, conventional EQ might do the trick, but in others a dynamic EQ (whereby the amount of EQ applied varies according to the level of the track itself — or to the level of other tracks feeding its external side-chain input) might prove more effective. I believe the iZotope Neutron plug-in you mentioned has a dynamic EQ mode.
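To make the side-chain idea concrete, here is a rough, much-simplified sketch of a dynamic EQ in Python. It is emphatically not how Neutron works internally — all the names, times and thresholds are hypothetical illustration choices. A fixed peaking cut (coefficients from the widely used RBJ ‘Audio EQ Cookbook’ formulas) is applied to the backing track, and the wet/dry blend between the cut and uncut versions is driven by an envelope follower on the vocal side-chain:

```python
import math

def peaking_biquad(f0, fs, gain_db, q):
    # RBJ Audio-EQ-Cookbook peaking-EQ coefficients, a0-normalised
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha / a
    return ((1.0 + alpha * a) / a0, -2.0 * math.cos(w0) / a0,
            (1.0 - alpha * a) / a0, -2.0 * math.cos(w0) / a0,
            (1.0 - alpha / a) / a0)

def biquad(x, c):
    # Direct-form-I filter over a list of samples
    b0, b1, b2, a1, a2 = c
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in x:
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, s, y1, y
        out.append(y)
    return out

def envelope(x, fs, attack_ms=5.0, release_ms=80.0):
    # One-pole envelope follower: fast attack, slower release
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    e, env = 0.0, []
    for s in x:
        r = abs(s)
        g = atk if r > e else rel
        e = g * e + (1.0 - g) * r
        env.append(e)
    return env

def dynamic_eq(backing, sidechain, fs, f0, cut_db=-9.0, q=1.4, threshold=0.1):
    # Static cut, dynamic depth: the louder the side-chain (vocal)
    # signal, the more of the 'cut' version of the backing is heard.
    cut = biquad(backing, peaking_biquad(f0, fs, cut_db, q))
    out = []
    for dry, wet, e in zip(backing, cut, envelope(sidechain, fs)):
        depth = min(1.0, e / threshold)   # 0 = untouched, 1 = full cut
        out.append((1.0 - depth) * dry + depth * wet)
    return out
```

When the vocal is silent the backing passes untouched; when the vocal is present, a gentle dip opens up around its region of the spectrum — the ‘unmasking’ behaviour described above, minus the many refinements (multiple bands, smoother gain laws, look-ahead and so on) a real plug-in adds.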
But, going back to your main question, it’s important to realise that conventional EQ affects quite a broad range of frequencies (often an octave or more), so it’s rarely necessary to shift the centre frequency of an EQ cut continually to track the musical fundamentals. It’s possible to do so, particularly with dedicated pitch-tracking EQs such as Sound Radix’s Surfer EQ2, but these are generally more effective for broad-brush tone-shaping jobs; while it’s possible to create narrow EQ bands that ‘track’ the fundamental or selected harmonics of a potentially masking instrument, that approach risks producing results which sound a bit like a wah-wah or phaser! Instead, one or two well-placed, relatively gentle broadband EQ dips are usually sufficient.
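To see why such broadband dips don’t need to chase the melody: under the standard constant-Q convention, a bandwidth of N octaves corresponds to a filter Q of 1 / (2·sinh((ln 2 / 2)·N)). A quick sketch (the relation is standard; the function name is just for illustration):

```python
import math

def q_from_octaves(bw_octaves):
    # Standard constant-Q relation between a peaking/notch filter's
    # bandwidth (in octaves) and its Q value
    return 1.0 / (2.0 * math.sinh(0.5 * math.log(2.0) * bw_octaves))
```

An octave-wide dip works out at a Q of only about 1.41, so a single fixed cut already reaches half an octave — six semitones — either side of its centre, comfortably covering every note the voice sings in that region without any ‘riding’ of the frequency control.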
Of course, although a fixed EQ improves the intelligibility of the lead vocal (or whatever) by decreasing the presence of the backing instrument, it’s also likely to reduce the dynamic impact of the backing instrument — especially in cases where the lead and backing components interact in a strong rhythmic way. In such cases, some form of dynamic EQ might allow the backing instrument to retain its full tonal balance when the lead vocal is absent. This is really where modern plug-ins and sophisticated DAW automation come to the fore, enabling the ‘unmasking’ equalisation to be controlled moment by moment far more precisely and accurately than ever before — and it’s one of the strengths of plug-ins like iZotope’s Neutron and Wavesfactory Track Spacer 2.