With spill between instruments and bleed from backline and monitors, live‑performance recordings present unique mixing challenges. At London's PLASA show, Miloco Studios' engineer Jamie McEvoy explained how to get a great result.
Most people involved in live music at almost any level can now access the means of capturing multitrack recordings of their performances easily and affordably. Even if the venue's FOH desk can't record multitracks directly via USB (as several now can), a simple rig comprising a laptop and a multi-input interface can do the job perfectly well. Alternatively, stand-alone 16- and 24-track recorders are not the expensive luxuries or cumbersome, bulky devices they once were.
But the challenge doesn't end with the equipment. In a studio, you can position the players to reduce bleed or spill, whereas on stage they'll be wherever the venue and the performance dictate they should be — often much closer together than ideal for recording — and some of them (not least the lead singer) might need to move around during their act, which means the amount and nature of spill can vary. And, of course, in the studio you wouldn't have a really loud mix of the whole performance playing right next to your recording location, compounded by a number of other speakers, all with different mixes, arrayed around the performers!
Time is a factor too. In the studio, you can spend time repositioning or swapping mics and optimising levels, whereas in many live venues you'll be lucky to even get a cursory line-check for the recording feeds to be sure they've all got a signal. Obviously, you don't get the chance to interrupt a take and adjust things either, and you'll probably be battling with the environmental acoustics, as the PA pumps low subwoofer power into a room with no bass trapping, and maybe no high-frequency absorption.
Finally, as we'll explore below, you won't have the final say (perhaps not even any say!) on what mics will be used or where they're positioned. The front-of-house sound must inevitably take priority — after all, the live sound side of things generally has to work well for there to be a performance worth recording.
But, while all the sonic compromises these issues entail can present a genuine challenge to anyone tasked with trying to create a viable mix from the live multitracks, the extra energy that artists often bring to stage performances can still make live recordings very worthwhile.
So just how should you approach such a task? At the PLASA show in September 2019, at London's Olympia, SOS teamed up with Headliner magazine to stage a live-recording seminar that addressed that question. The idea was first to capture a multitrack recording of a full band performing in front of an audience, with both a PA and stage monitors to ensure there'd be plenty of 'real-world' spill to contend with. And then a skilled mix engineer would carry out a mix straight after the performance, explaining in real time their technical and creative decisions to the live audience.
The band to be captured on the day was alt-jazz outfit VC Pines, a signed act with current releases. Their line-up meant we'd have plenty of loud sources: lead vocalist Jack Mercer was accompanied by two guitars, a bass guitar, acoustic drums, a trumpet and a trombone. And mixing the multitrack recording of their performance would be renowned Miloco Studios engineer Jamie McEvoy.
The live PA consisted of a pair of medium-sized pole-mounted tops and twin sub-woofers designed and manufactured by Shermann Audio, and operated at typical small-venue performance levels (both for the benefit of the audience and to leak a realistic amount of PA spill into the stage area). An Allen & Heath SQ7 mixer, ably operated by engineer Andrew Mansell of PA company Sono Vie, controlled both front-of-house and stage monitors.
The layout replicated a typical small club venue stage, with a low central riser for the drums and backline arrayed across the back of the stage, in line with the drummer's seat. In keeping with the increasing use of in-ear monitoring (even in small venues), a couple of channels of Shure wireless in-ear feeds were in use, in this instance for lead vocals and one instrumentalist. Two front-positioned wedges provided conventional loudspeaker monitoring for those not on in-ears. Backline consisted of a Fender 65 Princeton Reverb reissue and Hot Rod Deluxe for the two guitars, with an Ampeg B-15 for the bass.
Microphones were supplied by Shure, and deployed as follows: an SM81 cardioid capacitor mic as a mono drum overhead, a KSM137 small-diaphragm capacitor model on the hi-hat, Beta 98 clip-on capacitor mics for the snare and rack and floor toms, and, for the kick, a Beta 91A half-cardioid capacitor boundary mic inside the drum, and a Beta 52 supercardioid model outside. Special-edition black Beta 58s (supercardioid dynamics) were employed for the brass players and the two guitar combos, as well as lead and backing vocals. The bass was captured via both a DI and a line-out from the amp.
For the subsequent mixing session, Jamie's listening position was served by a pair of Genelec 8361 ('The Ones') monitor speakers, set up as local nearfields, and the seminar audience was able to listen to playback via a giant pair of Genelec 1237As.
In any multitrack live recording scenario there's always the question of exactly where you'll be getting your audio from and, as we've alluded to above, if you're capturing a live performance in front of an audience the needs of the live event come first.
If you've ever watched old concert footage from the '70s, it's quite common to see mics gaffer-taped on top of other mics — that's a quick and dirty way of getting an independent feed for a recording without cluttering up the stage with extra mic stands. Of course, you don't necessarily get to put the recording mic exactly where you want it, and unless you're very careful to ensure that the two bodies of the mics aren't in electrical contact with each other, you can very easily generate earth-loop hum in the signal to one or both destinations. You've also got twice as many mic cables cluttering up the stage with this method.
Better, then, to share the signal from the stage mics where possible, using an active (electronic) or passive (transformer-based) isolator/splitter to send an independent output to both the PA and recording systems. This might once have meant accepting a slightly compromised quality of signal from, in the main, robust dynamic mics designed primarily to withstand the rigours of stage use, but that really needn't be the case now. Today's 'stage mics' offer markedly superior sound quality to those of even a couple of decades ago, and their directional characteristics and proximity effect can also be a help rather than a hindrance in capturing more isolated signals.
Better still, in most instances, is picking up signals from the PA mixer, using either the direct outputs of an analogue mixer or the multi-channel digital output available on all but the smallest of digital PA desks. These outputs will be after the channel's preamp but usually (and ideally) before any EQ or dynamics processing; the settings of either for PA purposes will rarely be what you'd require for a multitrack mix, so you want to avoid having them 'baked into' the recordings.
For this workshop session, Jamie McEvoy had no say over the on-stage miking — he recorded clean feeds of the PA mics from the Allen & Heath mixer (via its in-built 32x32 USB audio interface) to his MacBook Pro running Pro Tools. But Jamie didn't seem fazed by his lack of influence over the miking: "Whenever I do anything like this, I generally don't get a say [about the mics]. The live engineer is going to do whatever they need to do to get a good live sound and work around any limitations they're faced with. I have to respect that. I thought the 'raws' sounded great, so I had no complaints."
After the performance, the focus of the event shifted to Jamie's listening position, and he started to consider what those 'raws' offered him. He'd recorded 13 audio files: a lead vocal, a mono drum overhead and one track each for the kick, snare, hi-hat and rack and floor toms, separate DI and amp (line-out) signals for the bass, the two guitars, and the trumpet and trombone.
As well as applying processing to the master stereo bus (of which more below) Jamie used a variety of techniques on the different sources. But really, it was all about assessing the sources and making decisions. "We actually had really good separation," he said to us later, "so it was more a matter of controlling the sounds and then adding things in order to make it feel like what we heard when the band was on stage. It's definitely a case of assessing [it all] when you have the sources in front of you."
There were some recurring tactics that are well worth highlighting. So too is the fact that Jamie's approach to controlling sources and "adding things" seemed to be fairly systematic...
Jamie McEvoy: "I use a lot of distortion when I mix... I often find it more useful than compression because it shapes tone as well as controlling dynamics."
By way of example, let's consider how Jamie approached the lead vocal, for which his broad aims were to convey the sense of excitement he'd heard in the on-stage performance, but with a more controlled result suitable for a commercial record.
Processing on the main vocal track's insert slots kicked off with an instance of Waves Renaissance Vox, acting both as a gate — to keep down the spill from the PA, stage wedges and other instruments — and as a light compressor. Next came FabFilter's Pro‑Q 3 EQ, largely to remove unwanted information (a high-pass filter at around 90Hz was intended to keep that part of the spectrum clear for the kick and bass, while several cuts in the low mid-range, the largest about 10dB, tackled unwanted resonances). But some gentler, broad boosts around 2 and 7.5 kHz also brightened the part a little. The signal then flowed through Massey's DeEsser (whose role should be obvious!), before being treated to another EQ (Waves VEQ4) and a compressor (Massey CT5) with a medium attack and fast release.
Essentially, then, the first half of that chain was corrective, and the second more about tonal shaping. But it was only part of the story, because this processed vocal was sent to several stereo aux busses, each dedicated to a different task and named accordingly: Dist, Low, Wide, Reverb, Slap and Delay. Those busses' faders would be used to reinforce the main vocal part with specific characteristics.
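Renaissance Vox's gating is proprietary, but the basic downward-gate idea it was used for here (pass the channel while its level is above a threshold, and mute it when the envelope falls below, with a smoothed release to avoid clicks) can be sketched in a few lines of Python. This is a generic illustration with numpy assumed, not Waves' actual algorithm:

```python
import numpy as np

def simple_gate(x, sr, threshold_db=-40.0, release_ms=50.0):
    """Downward gate: pass the signal while its envelope is above
    threshold, mute it when the envelope falls below."""
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))  # per-sample decay
    thr = 10 ** (threshold_db / 20.0)
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        env = max(abs(s), rel * env)  # instant attack, smoothed release
        out[i] = s if env >= thr else 0.0
    return out

# A loud burst followed by low-level 'spill'
sr = 44100
t = np.arange(int(0.2 * sr)) / sr
burst = 0.5 * np.sin(2 * np.pi * 100 * t)
rng = np.random.default_rng(0)
spill = 0.001 * rng.standard_normal(int(0.5 * sr))
x = np.concatenate([burst, spill])
gated = simple_gate(x, sr)  # burst passes; spill is muted once the gate closes
```

A real gate would add hysteresis, hold time and a range control rather than hard muting, but the spill-reduction principle is the same.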
The Dist track hosted an instance of Soundtoys' Decapitator set to its 'E' style (modelled on the distortion of an EMI preamp). Why parallel distortion? "I use a lot of distortion when I mix," Jamie explained. "I often find it more useful than compression because it shapes tone as well as controlling dynamics. I prefer to do these things parallel in order to not affect the main sounds and to give me more control over the parallel sounds."
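Decapitator's modelling is Soundtoys' own, but the routing Jamie describes, blending a distorted copy under the untouched dry signal, is easy to sketch. Here's a minimal Python illustration using a generic tanh soft-clipper (an assumption for demonstration, not the EMI-style 'E' model):

```python
import numpy as np

def soft_clip(x, drive=4.0):
    """Tanh soft-clipper: adds harmonics and squashes peaks."""
    return np.tanh(drive * x) / np.tanh(drive)

def parallel_distortion(dry, drive=4.0, wet_level=0.25):
    """Blend a distorted copy under the untouched dry signal, so a
    parallel fader (wet_level) controls how much grit is added."""
    wet = soft_clip(dry, drive)
    return dry + wet_level * wet

# A decaying test tone standing in for a vocal phrase
sr = 44100
t = np.arange(sr) / sr
dry = 0.5 * np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
mix = parallel_distortion(dry, drive=4.0, wet_level=0.25)
```

Because the dry path is untouched, the effect can be ridden up or down without changing the main sound, which is exactly the control Jamie describes.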
The Low track's chain centred on Waves MaxBass, a plug-in that trades genuine low end for harmonics derived from that low-frequency information. The perceived effect is to reinforce the low end, but in a way that works on smaller speakers, and which neither eats up mix headroom nor treads on the toes of other bottom-end instruments. This was followed by Waves Renaissance Axx (marketed as an electric-guitar compressor, but perfectly usable on other sources). A fast-ish attack and low threshold kept the new bass information under control.
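MaxBass's algorithm is proprietary, but the psychoacoustic principle, generating harmonics from the low band and discarding the fundamental so that smaller speakers imply the missing bass, can be sketched roughly as follows (Python with scipy assumed; the filter orders, drive and crossover frequency are illustrative guesses, not Waves' values):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def harmonic_bass(x, sr, crossover_hz=120.0, drive=6.0):
    """Isolate the low band, push it through a nonlinearity to
    generate harmonics, then high-pass the result so only the new
    harmonics remain (the fundamental is discarded)."""
    lp = butter(4, crossover_hz, 'lowpass', fs=sr, output='sos')
    hp = butter(4, crossover_hz, 'highpass', fs=sr, output='sos')
    lows = sosfilt(lp, x)
    harmonics = np.tanh(drive * lows)  # odd harmonics of the low band
    return sosfilt(hp, harmonics)      # keep harmonics, drop fundamental

sr = 44100
t = np.arange(sr) / sr
bass = 0.5 * np.sin(2 * np.pi * 60 * t)  # 60Hz fundamental
parallel = harmonic_bass(bass, sr)       # energy now at 180Hz, 300Hz...
```

Blended in parallel under the real bass, the added harmonics read as 'low end' on small speakers without actually occupying sub-bass headroom.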
There were then four different effects sends. For the Wide track, Jamie used Soundtoys' Little MicroShift plug-in — a tool, based on algorithms from Eventide's H3000, that's designed to add width to mono sources via short and slightly pitch-shifted delays. Reverb came courtesy of Valhalla's VintageVerb and slapback delay via Massey's TD5 tape-delay emulation, with another, longer delay courtesy of Soundtoys' EchoBoy.
Each parallel processing/effects chain concluded with its own EQ to tailor the sound, and with all these effects and processes putting different characteristics on different faders in Pro Tools, Jamie could quickly and easily balance and tweak things.
The last stage for the vocal processing was a stereo group bus, to which were routed the main vocal track and all its parallel processes and effects. On this bus was another instance of Pro‑Q 3, with a circa 100Hz high-pass filter, a few small dips in the low mids and more broad top-end boost, running up from about 1.5kHz to peak at 10kHz. This was set up to tailor the sound feeding a Waves CLA‑2A plug-in (a Teletronix LA-2A optical compressor emulation). Finally, this vocal master bus was sent to a general-purpose parallel distortion track for the whole mix — of which more shortly.
Now, you obviously couldn't apply exactly the same processing to every source in every mix, particularly when it comes to things like compression and EQ. As Jamie put it, "I don't have any set methods here. I'll tweak a compressor or an EQ to whatever way suits what I'm trying to achieve." And that's one reason we've not gone into detail with precise settings for each and every plug-in. But while the plug-in choices and their settings varied from source to source, Jamie was clearly thinking in similar terms about what each chain was intended to achieve — and hopefully our account of the vocal treatment gives you a sense of how he approached everything else.
The bass guitar, for example, was similarly treated to both parallel distortion and a widening send effect, but both busses used different plug-ins from the vocal — Pro Tools' bundled SansAmp plug-in and Softube's freeware Saturation Knob for the distortion, and TAL's freeware TAL-Chorus-LX (which emulates a Roland Juno synth's chorus effect) providing stereo spread. (Jamie remarked that "I find chorus to be an amazing tool for stereo width in parallel," and we'd agree; used this way, it can lend a mono bass part a lovely sense of width and presence without compromising its bottom-end solidity.)
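TAL-Chorus-LX models a Juno chorus specifically, but the general trick of using a wet-only chorus on a parallel bus for stereo width can be illustrated with two LFO-modulated delay lines running in opposite phase, panned hard left and right. A rough numpy sketch (all parameter values are arbitrary choices, not TAL's):

```python
import numpy as np

def chorus_widener(mono, sr, base_ms=15.0, depth_ms=3.0, rate_hz=0.8):
    """Wet-only chorus: two LFO-modulated delayed copies of a mono
    source, modulated in opposite phase, panned hard left and right."""
    n = len(mono)
    t = np.arange(n) / sr
    base = base_ms * sr / 1000.0
    depth = depth_ms * sr / 1000.0
    lfo = depth * np.sin(2 * np.pi * rate_hz * t)
    idx = np.arange(n)
    # fractional delay via linear interpolation, opposite LFO per side
    left = np.interp(idx - (base + lfo), idx, mono)
    right = np.interp(idx - (base - lfo), idx, mono)
    return np.stack([left, right], axis=1)  # wet stereo pair

sr = 44100
t = np.arange(sr) / sr
bass = 0.4 * np.sin(2 * np.pi * 80 * t)
wet = chorus_widener(bass, sr)
stereo_mix = np.stack([bass, bass], axis=1) + 0.3 * wet  # dry centre + wet width
```

Keeping the dry part dead centre and adding only the modulated copies at the sides is what preserves the mono bottom-end solidity the article mentions.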
We can't stress enough just how organised Jamie's Pro Tools session was, which you'll notice instantly if you flick through the various screen captures he's provided (see the link at the end of this article). The same structure, with processing on a source track complemented by parallel processes, effects sends and subgroup processing, was applied to all the sources, with clear labelling and colour coding facilitating easier navigation and thus faster mixing. Looking over the screens, you'll see that pretty much every source's insert processing was a combination of EQ, saturation and dynamic-range control, sometimes with dedicated processors for saturation, and sometimes with distortion coming courtesy of an analogue-modelling EQ or compressor.
Another observation is just how much use Jamie made of EQ cuts and, particularly, of high-pass filters. The vocal part was subject to more than one stage of such filtering, and almost everything, including the parallel tracks, was high-passed. We wondered if this tactic was due to the on-stage nature of the recording, but apparently not. "To be perfectly honest," says Jamie, "I use a lot of high-pass filters in general, be it live or in the studio. I tend to do it on anything that doesn't need sub, in order to clear room for anything that does. I find it's easier to get a clearer low end this way." And while he was clearly conscious that such filters have the potential to cause problems, he suggests that this risk can be overplayed. He goes on: "I occasionally get phase issues, but not enough to cause real problems. [If it is problematic] I tend to either play around with polarity flipping or adjust the filter until it sits better."
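For readers who want to experiment outside a DAW, a basic high-pass of the kind Jamie describes is only a few lines with scipy. Here a second-order Butterworth at 90Hz clears the sub region from a signal while leaving the midrange essentially untouched (the cutoff and order are illustrative, not taken from Jamie's session):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_pass(x, sr, cutoff_hz=90.0, order=2):
    """Butterworth high-pass, e.g. to clear the sub region on a vocal
    so the kick and bass have that space to themselves."""
    sos = butter(order, cutoff_hz, 'highpass', fs=sr, output='sos')
    return sosfilt(sos, x)

sr = 44100
t = np.arange(sr) / sr
rumble = 0.5 * np.sin(2 * np.pi * 40 * t)  # stage rumble / low spill
voice = 0.3 * np.sin(2 * np.pi * 440 * t)  # 'vocal' content
filtered = high_pass(rumble + voice, sr)   # 40Hz heavily attenuated
```

The phase shift Jamie mentions is inherent to filters like this one, which is why he checks polarity and adjusts the cutoff when a high-passed track stops sitting well against related parts.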
One thing that can undoubtedly be attributed to the live nature of this recording is the number of top-end boosts that were applied, and how little low end was added other than via parallel processing, as described above in relation to the vocal. For example, as well as the multiple high-end boosts on the vocal, there was a generous (circa 12dB) boost on the trumpet centred around 5kHz, and another, smaller boost in the same area on the trombone. The snare was subject to perhaps three stages of mid-high boost, and the hi-hats and overheads too were treated to an assertive boost in the 5-7 kHz area, whereas only the kick and bass really received any boosts at the bottom end — often a broad boost accompanied by an adjacent narrow cut.
On the master stereo bus, Jamie set up a compressor (Waves API‑2500), which was flanked pre and post by EQs. As you'd expect of a bus compressor operating on a busy, transient-rich mix, this one was configured with a relatively slow attack, a gentle-ish (3:1) ratio with a soft knee, and a fairly fast release, and was not applying masses of gain reduction. The EQ before this compressor was set to notch out a couple of resonances that seemed to be emphasised by the compressor, as well as pushing the 1kHz area very slightly, and just lifting the top end a little — no more than a couple of dB sloping up from around 5-6 kHz. The post-compressor EQ, a Mäag EQ4 emulation, added yet more high end, with a 2.5kHz shelf applying a tiny boost, and a 20kHz 'air band' bringing things up a little more.
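The API-2500 emulation is proprietary, but the behaviour described, a smoothed level detector with separate attack and release times driving gain reduction at 3:1 above threshold, can be sketched as a simple feed-forward compressor in Python (a hard knee here for brevity, where the API offers a soft one; all settings are illustrative):

```python
import numpy as np

def bus_compressor(x, sr, threshold_db=-18.0, ratio=3.0,
                   attack_ms=30.0, release_ms=100.0):
    """Feed-forward compressor: a smoothed level detector (separate
    attack/release time constants) drives gain reduction at `ratio`
    for any level above threshold."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    gain = np.ones_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel  # attack when rising
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        if over > 0.0:
            gain[i] = 10.0 ** (-(over * (1.0 - 1.0 / ratio)) / 20.0)
    return x * gain

sr = 44100
t = np.arange(int(0.5 * sr)) / sr
loud = 0.9 * np.sin(2 * np.pi * 100 * t)
quiet = 0.05 * np.sin(2 * np.pi * 100 * t)
out_loud = bus_compressor(loud, sr)    # pulled down once the detector settles
out_quiet = bus_compressor(quiet, sr)  # below threshold: passes unchanged
```

The slow attack is what lets transients through on a busy mix; the detector only catches up after the initial hit, which is the 'glue' behaviour bus compressors are prized for.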
That was all fairly standard stuff — final tweaks to sweeten the mix. And, again, the precise settings depend very much on what you hear. But a more novel tactic was a dedicated mix-distortion bus. This was routed directly to the master stereo bus, and was fed by sends from all sources' subgroup busses. The aim, of course, was to provide an overall 'excitement' fader for the mix, the excitement being added via Pro Tools' bundled Lo-Fi plug-in, again followed by an EQ to focus the added energy where it was wanted. This track wasn't very high in the final mix, but when muted the difference in energy seemed obvious.
Analysing an engineer's approach to a mix in full could fill a whole book! If you want to explore Jamie's in more detail, you can download screens from his Pro Tools session and some audio clips from https://sosm.ag/plasa-mix-jamie-mcevoy-media. Hopefully, though, this discussion has already given you a good feel for how he assessed the tracks and decided what needed to be done to fashion a balanced mix that also delivered the sense of excitement VC Pines exuded on stage.
So, is mixing a live recording so different from mixing anything else? Well, yes and no. Where it differs is that you'll rarely have the luxury of a spot mic for every instrument, and the various mics might very well not have been set up to capture the perfect balance for a record, so you might tend to EQ more assertively than is typical (though there are plenty of us who'd not be shy with EQ even in a studio setting). But as Jamie demonstrated so well, it's still pretty much the same process of listening and making judgments, of understanding the tools at your disposal, and choosing those tools with a clear purpose in mind. Of course, it all relies on ensuring you capture a good multitrack recording in the first place — and, hopefully, the first part of this article equips you for that too.
On the day we weren't recording the sound of the audience — it wasn't a typical gig environment, and there were booths at the show testing speakers and other sound gear nearby. But if you're tasked with mixing a real live show, particularly one for video, you might want to consider whether and how to capture and mix the sound of the audience.
Jamie suggests that there is no particular challenge to the process: "Just approach it like any other sound, really. Figure out how you want it to sound and then work towards that in whatever way you think will work. Pencil mics can be great on either side of the stage, facing the audience. Again, distortion used subtly can really bring out a crowd's excitement. If I'm doing a full gig, I will automate down during the songs and back up in between — I find muting completely feels really weird."
Visit the Media web page that accompanies this article to download hi-res WAV versions of Jamie's VC Pines 'Indigo' live mix, processed stems and multitrack files. Plus a collection of hi-res large screenshots of Jamie's Pro Tools session.
If you fancy a crack at mixing this song yourself, PLASA, in association with Sound On Sound, Headliner magazine and Miloco Studios, is running a mix competition using the multitracks from this performance. The closing date is 13 January 2020, so hurry! More info below: