In your interview with Mike Shipley back in SOS July 2011 [https://sosm.ag/mike-shipley-alison-krauss-0711] he said this about the producer Mutt Lange: “With Mutt, we always have programmable equalisers where we can EQ every word... every consonant of every word if we want — literally, every part of every word.” I realise that you can only speculate, but I’m wondering what they were trying to achieve here? What do you think they were trying to fix?
Anon via email
Mixing expert Mike Senior replies: Probably the best way to think about this is in terms of some of the things you’re typically looking to achieve with detailed fader automation, because you can consider EQ as just another fader that happens to affect only a specific frequency range.
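To make that idea concrete, here’s a minimal sketch in Python (assuming the scipy and soundfile libraries are available): the vocal is split into two bands, and only the high band gets its own ‘fader’ envelope. Every file name, frequency and breakpoint is invented purely for illustration; it isn’t a recreation of anyone’s actual session.

```python
# A minimal sketch of "EQ as a frequency-specific fader": split a vocal into
# two bands, run a gain envelope on the high band only, then sum the bands
# back together. The file name, the 3 kHz split point and the envelope
# breakpoints are all invented example values.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

vocal, sr = sf.read("lead_vocal.wav")   # hypothetical mono vocal take

# Rough 4th-order Butterworth crossover at 3 kHz (a matched Linkwitz-Riley
# pair would recombine more transparently, but this keeps the sketch short).
low_sos = butter(4, 3000, btype="lowpass", fs=sr, output="sos")
high_sos = butter(4, 3000, btype="highpass", fs=sr, output="sos")
low_band = sosfilt(low_sos, vocal)
high_band = sosfilt(high_sos, vocal)

# 'Fader' automation for the high band only: (time in seconds, gain in dB).
# Here a brief 4dB lift brightens one dulled consonant around 12.6 seconds.
breakpoints = [(0.0, 0.0), (12.55, 0.0), (12.60, 4.0), (12.75, 4.0), (12.80, 0.0)]
t = np.arange(len(vocal)) / sr
gain_db = np.interp(t, [bp[0] for bp in breakpoints], [bp[1] for bp in breakpoints])
high_band *= 10.0 ** (gain_db / 20.0)   # dB envelope -> linear gain

sf.write("lead_vocal_hf_automated.wav", low_band + high_band, sr)
```

With that picture in mind, the clearest way I can explain the ‘why’ is to run through several automation goals in turn: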
1. Fixing any unsolved problems or undesirable side‑effects of your dynamics processing. A heavily driven vocal compressor might have dulled some of the singer’s ‘K’ and ‘T’ sounds, and you could use an automated high‑shelf EQ to restore their clarity. Or perhaps low frequencies from wind blasts on the mic are causing the compressor to duck some syllables unduly, and an automated high‑pass filter could remedy that.
2. Maintaining the most appropriate subjective vocal level throughout the timeline. Maybe the singer turned away from the mic briefly, so a few syllables lost some ‘air’; or they moved closer to the mic for some syllables, such that the proximity effect over‑inflated the low midrange. Both these things can make it difficult to achieve a consistent subjective vocal level with simple fader rides, so this would be another situation where EQ automation could help. The same goes for sporadic notes with piercing resonances — an automated EQ dip can help here in a way fader moves can’t (see the sketch after this list).
3. Maximising the intelligibility of the lyrics. You might brighten ‘N’ and ‘M’ consonants (which are naturally dull‑sounding) to make them more audible, or adjust the tone of sibilants to enable them to sound full without harshness. Or maybe you need to boost the vocal’s high end to maintain vowel intelligibility when the drummer starts riding his crash cymbal.
4. Mitigating any issues with mix translation between different playback systems. If some vocal syllables have more ‘air’ than others, say, then they’ll seem more forward on full‑range playback systems than on bandwidth‑limited ones, so you might use EQ to even that out a bit so that your vocal will retain the desired balance better, regardless of the playback system.
5. Drawing the listener’s attention to the most musically/emotionally interesting aspects of the performance. You might use an EQ shelf to make more intimate moments breathier or fuller, say, or emphasise the vocal’s presence region so that the song’s hook still cuts through, even once the electric guitars come in during the later choruses.
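As a concrete illustration of the kind of move point 2 describes, here’s a hedged Python sketch (again assuming scipy and soundfile): a narrow notch is blended in only over a couple of manually flagged notes, rather than being left in circuit for the whole take. The notch frequency, Q, file names and region timings are all invented example values.

```python
# A sketch of the automated EQ dip from point 2 above: a narrow notch at a
# (hypothetical) 3.2 kHz resonance, blended in only over manually flagged
# notes rather than applied to the whole take. File names, the notch
# frequency and the flagged regions are invented example values.
import numpy as np
import soundfile as sf
from scipy.signal import iirnotch, filtfilt

vocal, sr = sf.read("lead_vocal.wav")          # hypothetical mono vocal take

b, a = iirnotch(w0=3200, Q=8, fs=sr)           # narrow cut at 3.2 kHz
notched = filtfilt(b, a, vocal)                # zero-phase, so blending is safe

flagged_notes = [(34.20, 34.90), (41.60, 42.10)]   # (start, end) in seconds

blend = np.zeros(len(vocal))                   # 0 = dry vocal, 1 = fully notched
fade = int(0.010 * sr)                         # 10ms ramps to avoid clicks
for start, end in flagged_notes:
    s, e = int(start * sr), int(end * sr)
    blend[s:e] = 1.0
    blend[s - fade:s] = np.linspace(0.0, 1.0, fade)
    blend[e:e + fade] = np.linspace(1.0, 0.0, fade)

processed = (1.0 - blend) * vocal + blend * notched
sf.write("lead_vocal_resonance_tamed.wav", processed, sr)
```

Points 1 and 3 come down to the same mechanics: swap the notch for a shelving or high‑pass filter and drive the blend, or a gain envelope, from wherever the offending consonants or sibilants fall. In a DAW you’d draw these moves as plug‑in automation lanes rather than writing code, of course, but the underlying idea is identical.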
There is, however, the question of diminishing returns, and it’s clear to me that Mutt Lange goes much, much further with this kind of automation than almost anyone else. I particularly liked Rich Costey’s perspective on this in SOS October 2015: “What was staggering, to everyone, was the degree and detail of his editing. I’d never seen anything like it in my life.”
Mutt’s Working Methods
But if you’d like to know more about Lange’s working methods, do also check out our interviews with Bob Clearmountain in SOS June 1999 and July 2006, producer Tony Platt in April 2001, and Bob Bullock on recording Shania Twain in SOS August 2004.