The only universally applicable rule of mixing is that stimulating an emotional response in the listener trumps all other aims. While it’s obviously a great idea to seek tips, tricks and guidance when you’re learning how to mix, and to consider how you might apply that advice, you also have to be careful that in the process of applying it you don’t lose sight of the big picture. With that thought in mind, here, in no particular order, are 12 things that I reckon many people often worry about a bit too much when they’re mixing.
We all now have access to pitch‑correction software like Auto‑Tune and Melodyne and can, if we wish, make a recorded vocal sound pitch‑perfect. But great vocal performances have imperfections, and if you focus too hard on ironing out every one you’ll often be left with a part that lacks soul and emotion. What’s more, as our ears are particularly well tuned to vocals we notice processing artefacts in them much more than in other sounds.
Judging by the reader demos I’ve heard over the years, many people would be better off paying far more attention to the tuning of instruments, and allowing the main event a bit more wiggle room!
It’s a similar story with timing correction, which can be a drag on two fronts: spend ages quantising or minutely nudging individual notes to a grid or groove template, only to discover that you’ve sucked away any sense of groove or human feel, and you’ll have both wasted time and lost perspective.
If that sounds like you, try backing off a little: fix what needs fixing, and ignore what doesn’t.
Yes, every mix must work when summed to mono, and you can glean very useful information about your mix by checking what it sounds like during mono playback. But your mix needn’t sound perfect in mono, and it will never sound as impressive as a good stereo mix; it just needs to work.
If it sounds great in stereo, and in mono you can still hear the lyrics and the vocal attitude, feel the groove and pick up on any important melodies, that’s arguably good enough.
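If you ever script your own checks, the mono fold-down itself is simple to reason about: summing just averages the left and right channels, which is why hard out-of-phase content vanishes. Here’s a minimal Python/NumPy sketch (the function name and array layout are my own illustration, not something from this article):

```python
import numpy as np

def sum_to_mono(stereo: np.ndarray) -> np.ndarray:
    """Average left and right channels of a (num_samples, 2) buffer.

    Halving the sum keeps correlated (centred) content at its original
    level instead of boosting it by 6dB.
    """
    return stereo.mean(axis=1)

# A part panned hard left/right with inverted polarity cancels in mono:
left = np.sin(np.linspace(0, 2 * np.pi, 100))
out_of_phase = np.stack([left, -left], axis=1)
print(np.max(np.abs(sum_to_mono(out_of_phase))))  # effectively zero

# Whereas a centred part survives the fold-down at its original level:
centred = np.stack([left, left], axis=1)
print(np.allclose(sum_to_mono(centred), left))
```

This is exactly what a ‘mono’ button on a monitor controller does, which is why a quick mono check catches phase problems that meters alone might not make obvious.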
From hiss and buzz to breaths and sibilance, fret noise and finger squeaks to traffic noise and barking dogs, it’s amazing what can grab your ear’s attention when you’re mixing. And sometimes that razor‑like focus on small details can be an asset. But do you really need to spend time obsessing about the removal of such noises? If you can only hear it in solo, forget about it.
Keep that focus on the big picture, and you’d be surprised what you can get away with on individual tracks. And even when everything else is playing, consider whether the noise is really a problem, and whether you really need to be distracted by fixing it now.
If you’re given a busy project to mix, it can be tempting to think that you have to use each and every part that you’re given: that you have to include all the mic signals, squeeze in every overdub and double, and accommodate each musical idea.
While you must obviously respect the artist’s creative vision and you’d hardly choose to nix the lead vocal, you don’t have to use a part just because it is sitting there in your project. In fact, it’s common for artists to lay down more ideas than a song needs, and if you feel that a part doesn’t serve the song, that the vocal or guitar doesn’t need to sound double‑tracked, or that the extra kick mic channel isn’t adding anything, you should feel free to can it.
There’s no golden rule that says every part you include in the mix must be heard all of the time — so if you’ve been fighting to achieve that aim, ask if you really need to spend your time and energy on the ‘problem’.
Some of my favourite tracks have sounds that weave in and out of each other, sometimes individually identifiable, and sometimes blending to create a completely different sonic texture. In dense mixes, you can often simply use automation to draw the ear’s attention to a part when it comes in, sneakily bring it down again to make space for the next detail, and nudge it up later to remind the listener it’s there. You might even find that a lot of parts can be edited very assertively too: does that pad or guitar really need to play right on through every verse and chorus?
Frequency masking is the phenomenon whereby, when two sounds with content in the same frequency range play at the same time, the louder sound obscures or ‘masks’ those parts of the other. The effect is real, and it will sometimes require your attention. But your ears should tell you when that’s the case — you shouldn’t need to go hunting in your meters to find such problems.
Happily, the most important sounds are usually mixed louder, which makes many such overlaps less problematic. Where masking is a problem, a simple bit of fader riding can often provide a swift solution.
Level meters, stereo correlation meters, frequency analysers... They all have a place, and can be great learning aids. But don’t get too hung up on what you see: the bottom line is that what matters is how things sound, not what they look like. If you can’t hear a resonance there’s no need to pull it down. If something’s out of phase, that only matters if the mix sounds wrong on mono playback. You don’t need to think about loudness while you’re mixing, and if your DAW has peak meters on every channel you can pretty much ignore them as long as you’ve left a little headroom when setting levels.
So yes, learn to interpret what your meters are telling you and feel free to use them when the need arises — but don’t let them slow you down or hold you back.
Maybe the drummer cares how wide the kit sounds, but if it sounds good few others will. In fact, most of the time I find that keeping an acoustic drum kit very narrow in the mix works better anyway — the contrast tends to make guitars and synths sound wider!
A tip I’ve often seen doing the rounds is to sweep EQ boosts around in search of undesirable resonances and, when you find one, turn that boost into a dip. While this can work, to an extent, it can also waste a lot of time, as you end up attending to problems you can’t hear. Worse still, you’ll often find that you’re not really focusing on the big picture while you perform the sweep.
If you really need to dip a resonance you can’t pinpoint by ear, a good frequency analyser will probably be a much quicker, more accurate way of identifying it.
I see lots of pictures of home studios with several sets of relatively inexpensive monitor speakers, and often different headphones and various ‘grot boxes’ too. The mind boggles: imagine the quality of a listening system that same money might have bought! A secondary monitoring system can certainly be helpful — but only if you know what it’s telling you.
You need one good system you can trust, and anything else should tell you something specific. For example, can I still hear the kick bass on a small speaker? Or does the mix still work in true mono? There’s no point checking a mix on a million different systems and trying to get it to sound perfect on all of them; you’ll end up doubting your ears, chasing your tail and losing perspective. The same goes for reference tracks: they should tell you something specific, not be a target to aim for and obsess over. If your mix sounds great and you’re tapping your feet, who cares how closely it matches whichever big‑name artist’s track you had in mind?
If you ask people for an opinion on your mix, they’ll feel obliged to come up with something to say. That means they won’t listen to it in the same way they’d normally listen to music, and their feedback might not reflect their instinctive reaction to the music. Ask five people for opinions and you risk getting five sets of contradictory feedback.
Consider feedback, by all means, but make your own judgements: you can’t mix by committee! Ultimately, if you’re happy and the artist is happy, that’s what matters.