How can you tell that a particular recorded vocal or bass track includes resonances? Specifically, I’d like to know how to identify where the resonances are visually. I’ve read articles where the engineer said that he made a notch at 127Hz to get rid of some problem, but how did they identify that very specific frequency?
Mike Senior replies: Well, firstly, it bears repeating the truism that it’s your ears, not your eyes, that need to be the driving force behind any processing decisions you make at mixdown. It’s not that visual audio-analysis tools can’t help, but they’re only really useful once you’ve decided what it is you’re looking for. In this specific situation, that means learning what recordings with undesirable resonances sound like, and only firing up the analysis tools when you’re trying to work out exactly what and where the problem is. There are some examples of such files on my web site at www.cambridge-mt.com/ms-ch11.htm#audio, and we’ve also dealt with plenty of recordings containing unwanted resonances in Mix Rescue, so it might also help you to work your way through some of the audio demonstrations that accompany those articles too.
Let’s look at some common case-studies, though. With bass instruments, room resonances can manifest themselves as an unevenness in the musical line, because certain note fundamentals may be reinforced by the resonance modes much more than others. If you hear some notes booming out too strongly, then investigating how the instrument’s spectrum looks on a high-resolution spectrum analyser can help you identify which note fundamentals are louder than the others, and counteract the effect of the room resonances with surgical EQ cuts. With mid-range instruments, the effects of room resonances tend to be less simple, and are more likely to manifest themselves as a boxy ‘small room’ timbral signature. In my experience visual analysis won’t usually help a great deal here, and you have to adopt more of a ‘hunt and peck’ approach, sweeping a narrow EQ boost around the spectrum and then placing an EQ cut wherever the boost sounds most unappealing.
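For readers who fancy experimenting outside their DAW's analyser, here's a hypothetical sketch in Python of the bass-note case just described: a note fundamental plus an exaggerated room resonance, identified from a high-resolution FFT. The 55Hz and 127Hz figures (and the 0.8 resonance level) are invented purely for illustration; the 127Hz number is simply borrowed from the question.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical example: a 55Hz 'bass note' fundamental with an
# exaggerated room-resonance component at 127Hz mixed in. All the
# figures are invented for illustration.
sr = 44100
t = np.arange(2 * sr) / sr  # two seconds of audio
signal = np.sin(2 * np.pi * 55 * t) + 0.8 * np.sin(2 * np.pi * 127 * t)

# A long FFT gives the 'high-resolution analyser' view: with 88,200
# samples the frequency bins are only 0.5Hz apart.
window = np.hanning(len(signal))
spectrum = np.abs(np.fft.rfft(signal * window))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

# Pick out the strongest spectral peaks below 500Hz.
mask = freqs < 500
peaks, _ = find_peaks(spectrum[mask], height=0.1 * spectrum.max())
print([round(f) for f in freqs[mask][peaks]])  # → [55, 127]
```

The same peak-reading applies whether you use code or your DAW's analyser display: you're looking for which fundamentals tower over their neighbours, then cutting there.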
A lot of recorded resonance problems aren’t a result of room acoustics, though. Most instruments have their own inherent resonant characteristics as well, but the same kind of approach still works if those cause mixdown problems. So, for instance, the air-cavity resonance of an acoustic guitar’s body can often overemphasise the fundamental frequencies of a handful of the instrument’s lowest notes, so if you hear those notes poking out too much, then by all means reach for the spectrum analyser to home in on the exact frequencies that are problematic. Where it’s an unwanted tonal character you’re getting from a more complex pattern of resonances, though, trial-and-error EQ cuts will likely be more useful.
With project-studio snare drums, it’s not uncommon for there to be unwanted pitched resonances that clash with a song’s harmonies, and these are usually easy to spot on a spectrum analyser — the display peak associated with the resonance will sustain far longer than the more transient noisy elements of the instrument’s sound. However, you may find there are several such peaks to choose from, so check out the sound of each one using the trial-and-error approach to find the ones that you like least, bearing in mind that a resonant pitch may well result from a series of spectral peaks at multiples of the pitch’s fundamental frequency.
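The 'sustains far longer' clue above can also be checked numerically: look at the spectrum of the drum's tail, well after the transient has died away, and whatever's still strong is the ring. This Python sketch uses an invented snare (fast-decaying noise plus a pitched ring at a made-up 220Hz with a 440Hz overtone, illustrating the multiples-of-the-fundamental point):

```python
import numpy as np

# Hypothetical snare hit: broadband noise that dies away in ~20ms,
# plus a pitched ring at 220Hz with an overtone at 440Hz that
# sustains for ~300ms. All figures are invented for illustration.
sr = 44100
t = np.arange(sr // 2) / sr  # half a second
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, len(t)) * np.exp(-t / 0.02)
ring = (np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)) * np.exp(-t / 0.3)
snare = noise + ring

# Analyse a window starting 250ms after the hit: the transient noise
# is long gone, so whatever still registers strongly is the ring.
n = 4096
tail = snare[11025:11025 + n] * np.hanning(n)
spectrum = np.abs(np.fft.rfft(tail))
freqs = np.fft.rfftfreq(n, 1 / sr)

sustained = freqs[spectrum > 0.3 * spectrum.max()]
```

Here `sustained` should come out as bins clustered around 220Hz and 440Hz, i.e. the resonant pitch and its overtone, which is exactly the pattern of related spectral peaks mentioned above.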
The trickiest resonance issue to resolve is with vocals, where the combination of certain pitch registers and vowel sounds can sometimes give rise to sporadic resonant peaks in the 1-4kHz range. The clue that this is happening is that certain notes manage to sound harsh even once you’ve cut more general high end with EQ than you’d really like. When I suspect this, I’ll usually have a close look at how any particularly harsh-sounding syllable appears on my spectrum analyser, and see if there’s any spectral peak that seems to coincide with what I’m hearing. If there is, I then put in an EQ notch at that frequency to find out whether that helps the harshness. Sometimes it takes a couple of tries to find the correct frequency — it’s not always the one that looks most prominent on the display — but as long as you let your ears take the lead you shouldn’t go far wrong even in such a specialist processing situation as this.
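To make the 'EQ notch' move concrete, here's a hypothetical Python sketch of a narrow notch filter centred on an offending frequency. The 127Hz centre is borrowed from the question; the Q value and the two test tones are invented for illustration, and a real vocal de-harshing notch would of course sit up in that 1-4kHz region:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Hypothetical notch EQ: 127Hz is borrowed from the question; the Q
# value and test tones are invented for illustration.
sr = 44100
f0, q = 127.0, 8.0            # notch bandwidth is roughly f0 / Q
b, a = iirnotch(f0, q, fs=sr)

t = np.arange(sr) / sr        # one second of audio → 1Hz FFT bins
resonance = np.sin(2 * np.pi * 127 * t)   # the problem component
keeper = np.sin(2 * np.pi * 500 * t)      # material to leave alone
out = filtfilt(b, a, resonance + keeper)  # zero-phase filtering

spectrum = np.abs(np.fft.rfft(out))
print(spectrum[127] / spectrum[500])  # 127Hz now tiny next to 500Hz
```

The narrow Q is the point: the 127Hz component is heavily attenuated while the 500Hz 'music' passes almost untouched, which is why a surgical notch can remove a resonance without dulling the overall sound.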
Reviews Editor Matt Houghton adds: Mike has already offered some great advice above and I’d urge you to heed it! But one thing that’s worth pointing out, since you ask specifically about this, is that when you read interviews in which ‘name’ producers reel off very specific frequencies, Q values or other equipment/plug-in settings, they’ll often be responding to interview questions via email or phone. Why is that important? Well, unless the figure is a very rounded one (in which case it might be the fixed frequency of an EQ band on a console or ‘go-to’ hardware EQ), their reply will probably have been informed by having their DAW session open in front of them, and without that information there at their fingertips, they’d most likely have referred to a more ballpark figure or range of frequencies. I’m not saying you can’t develop a good ear for frequencies — of course you can — but telling 127Hz from 125Hz, and then committing each and every decision from every project to long-term memory, is beyond even the best of us!
Finally, if you’re having a hard time pinpointing the resonances, you can always try cheating! A recent update of Tokyo Dawn Records’ Slick EQ GE (shown above) includes a neat learn facility, whereby it ‘listens’ to the audio and is then able to ‘remove’ resonances. You’ll want to refine the results by ear, as no algorithm can ‘know’ how you want things to sound, but in my experience it does a pretty good job of homing in on the offending frequencies. (I gather they’re planning to incorporate this facility in their Nova dynamic EQ too, which is likely to be a better tool for the job.) If nothing else, you could use such a facility as guidance while you train your own ears.