I’m seeing a lot of conflicting advice about using compression while recording. Some seem to swear by it while others say it’s completely unnecessary with 24‑bit recording and modern converters. Where do you stand on this?
Billy Whiteman
SOS Reviews Editor Matt Houghton: There are a few considerations here, but let’s start with your point about 24‑bit recording. Yes, it’s true that this allows you to capture a signal with a huge dynamic range, so you can keep the noise floor plenty low enough without any risk of overloading your interface/A‑D converters; in other words, it’s easy to set a suitable input level, since you can afford to leave plenty of headroom. In fact, some interfaces and portable recorders now offer 32‑bit float recording, which makes setting levels almost unnecessary! Purely in terms of accommodating dynamic range with minimal noise and distortion, then, you clearly don’t need to compress using hardware ‘on the way in’.
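If you want to put rough numbers on that, the usual rule of thumb is about 6dB of dynamic range per bit. Here’s a quick back‑of‑an‑envelope sketch in Python, assuming ideal fixed‑point conversion; real‑world converters manage rather less than the theoretical figure:

```python
# Rough theoretical dynamic range of fixed-point PCM: about 6.02 dB per bit.
# Real converters and analogue stages deliver somewhat less than this.
def theoretical_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits

for bits in (16, 24):
    print(f"{bits}-bit PCM: ~{theoretical_dynamic_range_db(bits):.0f} dB")
# 16-bit PCM: ~96 dB
# 24-bit PCM: ~144 dB
```

Even after leaving 20dB or so of headroom, a 24‑bit capture keeps the quantisation noise far below the noise of the mic, preamp and room.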
It’s also true that you’ll have more control over the compressor if you apply it to an already recorded signal: when mixing in the DAW you can refine its settings to the nth degree and, should you need to, even automate the threshold to fine‑tune its response. Recording without compression also helps if you later need a corrective noise‑reduction process that relies on a ‘noise fingerprint’, because the noise floor will be steady in level and therefore easier for the algorithm to work with. Compression, by contrast, changes the level of the noise along with that of the signal, and that leaves noise‑reduction processors with a harder job to do acceptably without audible side‑effects.
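To make that last point concrete, here’s a deliberately over‑simplified compressor sketch in Python (a static gain curve with no attack or release smoothing, so not something you’d actually mix with). The thing to notice is that the gain it calculates is applied to everything in the signal, background noise included, which is why a noise floor that was steady going in no longer comes out steady:

```python
import numpy as np

def naive_compressor(x, threshold_db=-20.0, ratio=4.0):
    """Static, sample-by-sample compressor: no attack/release smoothing.

    Illustrative only. The gain reduction is applied to the whole signal,
    so any background noise rides up and down with the programme material.
    """
    eps = 1e-12                                    # avoid log(0)
    level_db = 20.0 * np.log10(np.abs(x) + eps)    # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)       # pull the overshoot back
    return x * 10.0 ** (gain_db / 20.0)

# Quick demo: a loud section sitting on top of a quiet, steady hiss.
rng = np.random.default_rng(0)
signal = 0.001 * rng.standard_normal(48_000)   # steady noise floor, roughly -60dBFS
signal[10_000:20_000] += 0.5                   # loud section, roughly -6dBFS
out = naive_compressor(signal)
# During the loud section the gain drops, and the hiss drops with it, so a
# 'noise fingerprint' taken from a quiet passage no longer matches.
```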
Still, I do personally tend to use a hardware compressor on many sources while I’m recording. Indeed, I use one almost every time with vocals, bass, guitar, drums... you name it. Why? Well, for a few reasons really. An obvious one is that I’ve been recording for years and am very used to working this way — if it ain’t broke don’t fix it! But there’s more to it than that.
I believe in the old adage of ‘get the sound right at source’ and, assuming I’m recording with a certain sort of mix in mind, I see EQ and dynamics as being very much a part of that. After all, how do I know I really want to commit to that drum sound if it doesn’t yet sound the way I want it? Just as I’ll happily suggest a performer adapts their playing technique and I’ll try different instruments, mics or mic positions to get the right sound down, I’ll also consider whether compression, EQ, saturation or transient‑shaping might help me get things nearer where I want them to be. That goes both for shaping the signal in terms of frequency and dynamic range, and for any tonal contribution the processors might make — ‘saturation’, if you will.
You can do the same with plug‑ins at the recording stage, of course, and if you monitor through the DAW then you can nudge things in the direction you want without committing to the settings. If you’re inexperienced, that might appeal for a couple of reasons. One is that you don’t risk compromising a perfectly decent recording, and the other is that you won’t be distracted by the need to get your processing decisions spot on — you can, rightly, keep your focus on your primary aim of assessing the performance you’re capturing.
However, routing the signal into and out of your DAW inherently adds latency, whether or not the plug‑ins themselves add any more, so before you commit to using plug‑ins in this way, do consider whether that will present any problems. For instance, will you want to feed the processed signal back to the artist’s headphones while recording? A handful of interfaces can host near‑zero‑latency plug‑ins, though your choice of both interface and plug‑ins becomes more limited if you go down that road. When recording through analogue hardware, on the other hand, you can use whatever interface you want, and as long as the interface has near‑zero‑latency input monitoring, or you use a hardware mixer, there’s no problem with latency.
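To put some numbers on the latency question: the delay contributed by each audio buffer is simply the buffer size divided by the sample rate, and a round trip through the DAW costs at least one input buffer and one output buffer, with converter and driver overheads (which vary from interface to interface) on top. A rough sketch in Python:

```python
def buffer_delay_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Delay contributed by a single audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# A DAW round trip needs at least one input buffer plus one output buffer;
# converter and driver overheads come on top and vary by interface.
for buf in (64, 128, 256):
    one_way = buffer_delay_ms(buf, 48_000)
    print(f"{buf} samples @ 48kHz: ~{one_way:.1f}ms per buffer, "
          f"~{2 * one_way:.1f}ms round trip at minimum")
```

Small buffers can keep that tolerable for many performers, but they eat into your CPU headroom, which is why near‑zero‑latency monitoring paths remain attractive.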
As a final note, a possible downside of analogue hardware is that the sound is ‘baked in’, so you really must be confident that you’re not overcooking things. That comes with experience, of course. But you can err on the side of caution these days, applying less aggressive compression while recording and finishing the job at mixdown. Why bother at all, then? Well, I still find that plug‑ins tend to do a better job if I ask them to do less, and that I spend less time setting them up at mixdown if I’ve already done half the job at the tracking stage. It’s great to be able to pull up the faders and just focus on making the mix work: I can mix quicker, and I reckon that means I can mix better too!