Our ears are the most important tool we have, but they’re surprisingly unreliable. We explain how to interpret what they’re telling you.
Use your ears. That’s the advice drummed into us since day one of our musical lives. But what if your ears aren’t always to be trusted? The human hearing system is the most powerful machine in each and every one of our studios. It’s a microphone with the ability to selectively choose what it listens to, with in‑built compressors and dynamic EQs to protect it from damaging noise, and a system of analogue‑to‑digital conversion that is capable of translating the air pressure from a farting speaker into Bach or Beethoven.
Every sound you hear is delivered by a miracle of nature, but like any piece of technology, the ear has its limitations. Room acoustics, tiredness, headphone EQ and stress are some of the factors conspiring against our hearing. When the brain is not able to understand the signals it is receiving, or is distracted by other sensory information, it can fill in the gaps with auditory hallucinations. For a classic example, just recall the last time you patted yourself on the back for your clever use of a compressor before realising the bypass button was still on.
If you’ve ever eagerly opened a laptop to listen to the fruits of last night’s recording work only to be greeted with a sonic mess, you’ve experienced our hearing’s tendency to fall off the wagon when tired. The wearier our ears become, the more likely we are to let a mix mutate out of shape. Rides or hi‑hats creep up in volume way past where they need to be; vocals that seemed appropriately loud turn out, in the cold light of day, to be loitering 5dB too low; effects chains containing a traffic jam of plug‑ins turn an initially perfectly good sound into a fuzzy blur. While the advice to just use your ears has its merits, the reality is that we need to be mindful of our auditory system’s strengths and weaknesses, and know when to reach for the right tools to support it.
From birth we develop a belief that our experience of the world reflects exactly what is happening. We later learn our experience is by no means the full picture but instead the interpretation that our sensory systems create for us. These interpretations are not absolute and are tailored to fit the parameters of what evolution has decided we need to be told. For example, the importance of the human voice has had a profound effect on how our hearing has developed. Our ears prioritise the frequencies where the human voice resides.
To gain a better understanding, I spoke with Grammy Award‑winning mix engineer and producer Susan Rogers, who first rose to prominence as an engineer for Prince in the 1980s and now works as a Professor of Music Production and Engineering at Berklee. “Evolution made sure that we paid extra close attention to sounds that have survival connotations, so the human hearing mechanism evolved to be most sensitive in that region between 1 and 5 kHz, and in particular speech consonants,” she explains. “The difference between ‘bat’, ‘cat’, ‘sat’, ‘hat’ and ‘rat’ can have serious implications out there in the world. If you say to someone ‘There are hats in that cave,’ it means something different to ‘There are rats in that cave.’”
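You can see that sensitivity bias for yourself in the standard A‑weighting curve, a long‑established approximation of how the ear’s response varies with frequency (it is not Rogers’s own data, just the textbook IEC 61672 formula). A rough sketch in Python:

```python
import math

def a_weighting_db(f):
    """A-weighting (IEC 61672): the ear's approximate relative
    response at frequency f in Hz, in dB, normalised to 0dB at 1kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2dB offset sets 1kHz to 0dB

# Print the curve at a few musically relevant frequencies.
for freq in (60, 250, 1000, 3000, 10000):
    print(f"{freq:>5} Hz: {a_weighting_db(freq):+.1f} dB")
```

Run it and the numbers tell the story of the article: the response peaks in the low kilohertz, right where speech consonants live, and rolls off steeply below 100Hz.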
While nature decided to focus our hearing on human voices, the sub frequencies of a kick drum were clearly not on its priority list. For proof, just look at the spectral analysis of any song with a strong kick drum, where the amplitude curve bumps highest around the 60Hz region and descends in volume as it approaches the midrange. In dance music, especially, in order to feel the bass of a kick drum (and not just hear its high‑end snap), we have to turn it up much louder than everything else. Even then, we layer it with clicks and high‑frequency material to make it even more audible. We simply can’t hear...
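The same observation can be made with a toy experiment rather than a real recording. The sketch below (the 60Hz sine ‘body’, the noise ‘click’ and their decay rates are invented parameters, not measurements from any actual kick sample) synthesises a crude kick drum and confirms that its spectral peak sits down in the sub region:

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 0.5)) / sr  # half a second of samples

# Toy kick: a decaying 60Hz sine (the body) plus a short noise burst (the click).
body = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 8)
click = np.random.default_rng(0).normal(0, 0.05, t.size) * np.exp(-t * 200)
kick = body + click

# Magnitude spectrum: the tallest peak lands near 60Hz, towering over the midrange.
spectrum = np.abs(np.fft.rfft(kick))
freqs = np.fft.rfftfreq(kick.size, 1 / sr)
peak_hz = freqs[np.argmax(spectrum)]
print(f"Spectral peak: {peak_hz:.0f} Hz")
```

Compare that peak with the A‑weighted sensitivity figures above 1kHz and the mixing problem becomes obvious: the kick carries most of its energy exactly where our ears are least sensitive.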