However much studio trickery is considered 'normal' in a genre, the unwanted side-effects of processing can rob your mixes of impact. But it doesn't have to be that way
Most of us are so accustomed to the side-effects of routine processing such as EQ and compression that we take them for granted. Indeed, some may never have learned to identify them, but most will have experienced their cumulative effect: a carefully crafted mix that inexplicably lacks definition and impact. To prevent that, I believe the good old KISS principle must be applied (Keep It Simple, Stupid!).
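To make one such side-effect concrete, here's a minimal Python sketch (my own illustration, with purely arbitrary parameter values): a compressor whose gain reduction reacts instantaneously, sample by sample, behaves like a waveshaper, and measurably adds harmonics to a pure tone that simply weren't there before.

```python
import numpy as np

fs = 48000                               # sample rate (Hz)
t = np.arange(fs) / fs                   # one second of audio
x = 0.9 * np.sin(2 * np.pi * 100 * t)    # clean 100Hz sine wave

# A crude hard-knee compressor with instantaneous attack/release:
# above the threshold, the gain tracks every individual sample.
threshold, ratio = 0.5, 10.0
level = np.abs(x)
gain = np.where(level > threshold,
                (threshold + (level - threshold) / ratio)
                / np.maximum(level, 1e-12),
                1.0)
y = x * gain

def harmonic_ratio(sig, f0=100, harmonic=3):
    """Level of the nth harmonic relative to the fundamental."""
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return spec[f0 * harmonic] / spec[f0]  # 1Hz per FFT bin here

print(f"clean: {harmonic_ratio(x):.6f}  compressed: {harmonic_ratio(y):.6f}")
```

The clean sine has essentially no third harmonic, while the compressed version gains a substantial one: gain reduction that follows the waveform this quickly is indistinguishable from distortion. Real compressors smooth the gain with attack and release times, which reduces this effect but never removes it entirely, and that residue is exactly the kind of collateral damage this article is about.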
But by keeping production simple I don't mean being lazy or accepting sub-par results. Rather, I mean taking care to preserve the integrity and definition of the source sounds; making sure your processing doesn't damage them. You should still strive to achieve all the artistic goals of every project, of course, and it's fine to use processing to do that. But if you deliberately choose tools and methods whose side-effects cause the least possible amount of collateral damage, your mixes will sound better.
It's a simple idea in principle, but it requires you to understand and learn to recognise the side-effects of any processing you apply. In this article, I'll explain what these side-effects are, how they can compromise the integrity of the sources in your mix, how you can learn to listen out for and notice such damage, and what alternative recording/mixing tactics you can try to avoid these problems.
Underlying my approach is the idea that a mix should sound 'natural'. But what exactly does that imply? When trying to capture acoustic music, judging the effect of your engineering on the sound quality is relatively easy, as you have a real-world reference; you know what's 'natural' and if the result sounds that way you've done a good job. (For that reason, recording acoustic music is a great way to build your critical listening skills.) But the term can apply to less obvious styles too.
You could argue that in synthesis and electronic music anything goes; that no treatment of the signal sounds 'natural' or 'unnatural'. But anyone making such music knows this isn't true. It's common to render musical events that never took place acoustically, to transform recorded real events into something else, and to place events in 'spaces' that never existed. Yet we still perceive some such sounds as more natural than others. The difference comes down to how often we've heard similar sounds: mere exposure changes our frame of reference for what sounds natural. Just as audiences in the '50s and '60s grew used to the sound of the overdriven electric guitar and new genres emerged around it, so the average clubber or rave-goer today knows what an 808 kick or a Moog bassline should sound like. Not everyone can picture these sound sources as vividly as they can, say, a hammer striking a glass object, but they still attach a mental image to the sound. They recognise it as part of a musical style, and if it matches their expectation it sounds natural. This article covers all sound sources, whether their origin is acoustic, electrical or a combination of the two, though they do have to be embedded in the culture of your intended audience.
Note that there's no single ideal reference. You don't make every cello sound the same, because your reference of what a cello is has some variables. But a cello sounds like a cello, and if your rendering of it evokes the right image for the listener, you've done a good job. The same goes for techno kicks or stacked walls of organ-like synths: if the audience recognises what you intend, the production works.
We can widen our definition of 'natural' to encompass sound quality, where it means something like unprocessed, intact or pure; or, better still, faithful to the original. The listener's frame of reference gives us a benchmark to be faithful to. Whatever processing is used to arrive at the final sound, the threshold between what we do and don't accept as natural is crossed when it becomes unclear which mental image should be connected to a sound (is this a cello?). Or, worse, when the sound falls apart and stops sounding like a single source.
This is neatly illustrated by a specific problem I'm noticing increasingly in heavily processed pop productions: despite vocals being mixed loud and proud, it's often harder to understand the lyrics than in productions that rely less on pitch processing and multiband compression. Going purely by the mix balance, I'd expect crystal-clear intelligibility, but the opposite is true.
I believe the core of the problem is that the rendering isn't faithful enough to the original to evoke the right mental images. The producer's aim would obviously not have been to make the lyrics unintelligible, but rather to make the vocal tight in pitch and bright in timbre, and generally to lend it a slicker, more polished sound. The unintelligibility is an unintended side-effect of the processing used to accomplish those goals.
To understand what's going wrong, you must consider how we hear sounds in a mix. When listening to a vocal (in any language), you're trying to recognise patterns that a single vocal would typically produce and match them to your reference library of mental images. To do so, your brain attempts to separate all the spectral components the vocal produces from any surrounding sounds, and connect them into a single source image. The 'stream' of sounds this source produces is then screened for familiar language patterns. The process is impeded if you can't easily connect the different frequency components the vocal produces into a single stream.
As some processing can act to 'break up' sources (eg. it could lead you to perceive the highs and lows of a vocal as two separate entities), it can make it hard for the listener's brain to do its job. Heard in isolation, the voice would be easy enough to understand: it might not sound entirely natural, but you could still make out what's being sung. But add in the accompanying instruments, and your brain has to unravel which streams of information belong together and which don't. Imagine dividing a sentence between two people, each speaking alternate syllables: how much harder would it be to understand than when a single person pronounced the complete pattern? In short, your brain must work harder to...