There's more useful stuff than you can shake a stick at this month, including advice on input monitoring, ways to get around latency issues, and news of CD-burning and audio networking applications to make your life easier.
Over the last couple of months we've been looking at monitoring: what it is, and how to make it work for you in Digital Performer with a range of typical studio situations and hardware setups. In this final instalment of our monitoring extravaganza, DP5's Input Monitor function and aux tracks come under the spotlight, and I offer suggestions for incorporating hardware effects units into your DP-based rig.
You could be forgiven for thinking that monitoring is only relevant for recording — specifically, for supplying a headphone backing-track mix to an artist recording a performance. But it encompasses more than this, especially in the modern studio integrating hardware and software synths and effects.
To illustrate what I mean, imagine Reason and DP are running on the same Mac. The Rewire link between the two applications carries audio from Reason to DP, but in order to hear Reason at all some sort of 'open input' is needed in DP. An aux track is often perfect for this, bringing the Reason audio into DP's Mixing Board, and allowing it to be mixed alongside other track types. What the aux track is really doing is monitoring the live signal from Reason, allowing the engineer to hear it while working on other aspects of the mix, just as a singer would hear a backing track as they recorded vocals.
These 'open inputs' are useful for all sorts of things, not just Reason. You could use them to monitor the return from a hardware effects unit, the signal from a hardware synth, or a signal from another Mac arriving digitally via an ADAT connection or across a network. All these need to be monitored as you work on your track, until you choose to record them and capture their signals to an audio track. So what are the options available to you?
Aux tracks: Aux tracks are a long-standing DP feature. You can't record on them; instead they're a lot like mixing desk channels. They can be fed with inputs from your audio interface, DP's internal busses, or the Rewire signal from other software — all of which is useful from a monitoring point of view. What's great about them, too, is that their operation is not affected by DP5's Audio Patch Thru mode — you could have it set to Off and they'd still work. Also, if you have a MOTU audio interface and are using Direct Hardware Playthrough mode, they still allow you to monitor your input through any effects plug-ins you've instantiated.
Audio tracks: You've always been able to monitor input signals through audio tracks, courtesy of DP's Audio Patch Thru feature, by simply record-enabling them. But in recent versions of DP you have more flexibility, thanks to the Input Monitor function. This means that an audio track can be made to permanently 'patch thru' its input to its output, whether the track is record-enabled or not. Just click the track's Input Monitor button in the Tracks Overview (the 'Mon' column), Sequence Editor (a loudspeaker icon in the track's info pane) or Mixing Board (the Input button next to Rec, Solo and Mute). An Input Monitor-enabled audio track is not quite the same as an aux track, though. First, it does respect DP5's Audio Patch Thru mode, so if that's set to Off the Input Monitor function is essentially disabled. Also, if you're using Direct Hardware Playthrough mode with a compatible MOTU audio interface, Input Monitor will not run the signal through DP (or any effects plug-ins on the track), but instead set up a temporary CueMix zero-latency routing.
Zero-latency hardware monitoring: There's nothing to say we have to control all monitor signals from external hardware with DP. In many cases the best approach is to monitor external effects returns, and especially hardware synths, via a hardware mixer or an audio interface with zero-latency monitoring, as you might when setting up monitoring for a vocal take. The external hardware signals can be incorporated into your control room mix independently of DP, until you want to record them into your track. At this point you just record-enable some audio tracks in DP and route them in.
Here's a situation I've mocked up that incorporates all three types of input monitoring, using my own setup of a Power Mac G5 and MOTU Traveler (which has CueMix zero-latency monitoring). I'm running DP 5.13 and Reason 4, and also have some external hardware: a Korg Radias synth and a Yamaha REV500 reverb. This is how everything's co-ordinated (see screen at start of article):
Input: Reason (via Rewire)
Monitoring Method: Input Monitor-enabled audio track.
Description: As it's providing some synth and percussion parts, I want to monitor Reason constantly as I work on my song. Using an audio track with Input Monitor allows me to do this, and when I'm nearing completion I'll record-enable it to record the signals as audio in DP.
Input: Radias synth
Monitoring Method: CueMix
Description: As I'm using the Radias to provide the basis of my arrangement, including bass and key rhythmic parts, I want to hear it completely free of latency. Hence I've set up hardware monitoring in the CueMix Console: the Radias signal is not coming into DP at all. But when I've finished my song, I'll record the separate parts into DP on audio tracks for the final mix and any further treatment.
Input: REV500 hardware reverb
Monitoring Method: Aux track
Description: The REV500 is patched into my setup so that it's fed by one of the Traveler's outputs (with a corresponding Aux send in DP) and returns back into a couple of inputs. I'm bringing its signal into an Aux track so I can further treat it with a MAS plug-in. I suffer some latency because of the round-trip out of DP and the Traveler and back into DP, but it's not a problem, as it simply becomes a bit of additional reverb pre-delay. If I need to record the REV500's signal later, I could route the aux track via a bus pair to a record-enabled audio track.
This all works great for me as I'm developing my song. For Audio Patch Thru I'm using the Blend mode, and I've set a buffer size of 512 samples. The mix of hardware and software monitoring presents no problems here. But it's not always so easy...
As we've seen over the past few months, latency can be a problem with certain approaches to monitoring. Musicians generally hate it if the headphone monitor mix of their live performance is anything more than a few milliseconds late — which is why zero-latency solutions like suitably equipped interfaces and hardware mixers are routinely used for monitoring. But what if you don't have access to one of these, or if, for other reasons, you must monitor your external gear through DP and don't want latency?
In the case of monitoring synths and samplers driven by MIDI, DP automatically sorts out latency issues for software synths it hosts, or those coming into an aux or audio track via Rewire. So even if you use a large buffer size, such as 1024, playback of DP-hosted or Rewire-connected synths will be perfectly in time. But you'll still need to use a small buffer size to get a crisp response when playing them live from your controller keyboard. The same goes for hardware synths monitored in DP, but once their MIDI tracks are in place you can compensate for the latency associated with switching back to a large buffer size by making the MIDI play early. Just put a Time Shift plug-in on each MIDI track and set the track to play early by an appropriate amount. You can start by defining it in samples, to match your buffer size, before fine-tuning further.
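To put a figure on that 'appropriate amount', here's a back-of-envelope sketch of the delay one buffer represents. It's only a starting point — DP's real round-trip figure also includes converter and driver overhead, so fine-tune by ear from here:

```python
# Rough estimate of the monitoring latency introduced by a given
# audio buffer size: a handy first value for a Time Shift plug-in.
# Converter and driver overhead are NOT included, so treat this as
# an approximation to refine by ear.

def buffer_latency_ms(buffer_samples: int, sample_rate: int = 44100) -> float:
    """Return the delay of one buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000.0

# A 1024-sample buffer at 44.1kHz is roughly 23.2ms of delay:
print(round(buffer_latency_ms(1024), 1))  # 23.2
# Dropping to 512 samples roughly halves it:
print(round(buffer_latency_ms(512), 1))   # 11.6
```

This is why a 1024-sample buffer feels sluggish under the fingers, while 256 samples or fewer starts to feel immediate.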
It seems such a nice idea, to have your favourite hardware compressor or reverb unit patched into your audio interface, ready to be addressed via an aux send on your audio tracks, and yet latency often spoils the party. In my earlier example, where I'm using my REV500 for some reverb, I can take a bit of latency on its signal because it comes out sounding like pre-delay on the reverb. But for a true processor treatment, like EQ or compression (or if I just don't want any pre-delay on a reverb) any amount of latency is completely unacceptable. Even a few milliseconds could mess up the musical timing of the mix or produce nasty phasing in some circumstances. And a zero-latency monitoring scheme doesn't fix it, because there's latency inherent in the signal passing out of DP. What's needed is some sort of latency compensation.
Now, if you're thinking DP5 has built-in latency compensation you're right, but it's only for hosted plug-ins, not external routing. However, a freeware Audio Unit plug-in exists that does exactly what's needed for external routing latency compensation: it's Latency Fixer, from www.collective.co.uk/expertsleepers, and it's a nifty little thing. It works by first reporting a latency (which, using the controls, you set manually in terms of seconds or samples) to DP. DP then compensates for this latency by sending track audio to the plug-in ahead of time, but the plug-in actually applies no delay at all to the audio passing through it. Consequently, if you place it on an aux track that's being used to feed your audio tracks to an external effects unit, the latency accumulated in the trip out of DP and back in again can be precisely compensated for — after a bit of experimentation. The screens above show how I use it to route audio tracks to my external hardware compressor, the output of which comes right back into my DP Mixing Board.
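As a starting point for that experimentation, a reasonable first guess at the figure to dial in is one output buffer plus one input buffer. This is an assumption rather than a guaranteed value — converter delays vary from interface to interface, so expect to nudge it:

```python
# Hypothetical first guess at the round-trip delay to report via a
# latency-reporting plug-in such as Latency Fixer: audio leaves DP
# (one output buffer), passes through the hardware, and returns
# (one input buffer). Any known converter delay can be added on top;
# the exact total is interface-specific, so refine by ear.

def round_trip_samples(buffer_samples: int, converter_samples: int = 0) -> int:
    """Output buffer + input buffer + any known converter delay."""
    return 2 * buffer_samples + converter_samples

# With a 512-sample buffer, start the experimentation around:
print(round_trip_samples(512))  # 1024
```

One way to zero in on the true figure is to send a click out to the hardware (bypassed, if possible) and back, record it alongside the original, and measure the offset between the two in samples.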
In the August and September 2007 Performer workshops I looked at some ways in which DP can be used to prepare an album-length project for CD burning. DP's multitrack audio capabilities, flexible effects and automation make it very good for this task, even though it doesn't have any features specifically designed for it. DP can't do CD-burning, so you have to transfer the resulting audio files to another application, and it's here that you can have difficulties. For example, the applications that can work with DP's native stereo audio format, split-stereo Sound Designer II, are either discontinued (Roxio's Jam), not available separately (Apple WaveBurner), don't read region information (i3 DSP Quattro) or are very expensive (Bias Peak Pro and Sonic Studio PreMaster CD). Wouldn't it be great to have an audio editor that could load any audio format, including SD2, correctly read region information, and offer heavyweight editing, dithering, export and burning options? Now, in the form of an updated Wave Editor by Audiofile Engineering, you can.
Wave Editor 1.3 is a thoroughly up-to-date application, utilising OS X's Core Audio features and presenting a slick, customisable user interface. Its truly unique feature is a multi-layer (as opposed to multi-track) approach to editing, whereby you can assemble on a timeline different sections of audio at any sample rate and resolution, applying fades, crossfades and other processing on or between each layer. When the time comes for burning a CD or exporting your audio to another format, Wave Editor treats any layers at differing audio specs with iZotope's highly regarded SRC sample-rate conversion and MBIT+ dithering. For DP users (and others) this approach borders on the revelatory — you'll almost never need to think about dithering your 24-bit projects for CD again, and if you're compiling mixes done at different sample rates you can let Wave Editor deal with that too.
If you like to burn your CDs from multi-region audio prepared in DP (as described in the September 2007 column), Wave Editor (below) offers a straightforward workflow. After opening your multi-region SD2 files, make sure 'Regions' is ticked in Wave Editor's Waveform menu. Then, in the Labels drawer, select all your audio's region labels and right-click (or control-click). Choose Convert To / Tracks, and you instantly have a burn-ready document, with CD track boundaries where your DP region boundaries were. What's more, everything a mastering application should be able to edit — CD-TEXT metadata, ISRC and UPC/EAN codes, DDP annotation and PQ subcodes — can all be edited, either by clicking on individual tracks in the Labels drawer, or entering data in the Properties palette. You finally burn your CD by choosing Burn Disc in the File menu.
Wave Editor might not have the instant user-friendliness of consumer-level applications like Toast and Jam, and a read through its PDF manual is a must for the first-time user. But it's hard to imagine a more powerful, flexible and useful audio editor — certainly not one that dovetails so well with DP, and is so affordable. Wave Editor comes as a download from www.audiofile-engineering.com, and costs $250 (but only $200 until 31st March). If you're eligible for educational pricing it's only $100, and a $150 crossgrade is available for owners of most other major audio editors.
Writing the monthly Performer workshop can occasionally feel like something out of a Dickens novel: long, dark evenings spent with just the plaintive chirps of my G5 Power Mac for company, and only a red-hot MacBook for warmth. So it's always nice to get some feedback from readers, and I was especially pleased to hear from Pete Townshend (of the Who) recently. He very helpfully drew my attention to a little freeware application I hadn't come across, which might be of interest if you're using (as Pete does) a studio network of Macs to offload various processing and virtual instrument duties. It's called Soundfly and comes from Abyssoft, the company that also makes the superb Teleport 'mouse and keyboard sharing across a network' software. Soundfly exists to send audio from one Mac to another, across an Ethernet, Airport or other network connection. It relies on Cycling 74's Soundflower inter-application audio utility, so you need that installed first. It also utilises two of the Audio Units built into OS X — AUNetSend and AUNetReceive — though you never interact with these directly.
In use, Soundfly is very straightforward. On the Mac from which you want to send audio — perhaps one running some stand-alone soft synths — you run Soundfly. On the Mac that needs to receive this audio you run Soundfly Receiver. Both applications are so simple that normally they don't even have a user interface. But you can force one to appear by holding down the Alt key as you launch each application, and I find it useful to do so. First off, you can configure the audio format Soundfly uses to broadcast across the network. A 'full monty' uncompressed PCM format is selectable, but in case your network can't cope with that much data throughput, various compressed formats, like the dependable AAC, can be chosen instead. Some experimentation may be needed to find what works best for your network. I also found I needed to configure Soundfly Receiver on my G5 in order to manually make the connection with Soundfly running on the MacBook. This is straightforward: just select the audio stream from Soundfly in the directory list, and click Connect.
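To see why the compressed options exist, here's a quick sketch of the raw data rate an uncompressed stereo PCM stream demands (ignoring network packet overhead, so real-world requirements are a little higher):

```python
# Back-of-envelope bandwidth needed to stream uncompressed stereo
# PCM across a network — a rough guide to whether your connection
# can cope, or whether a compressed format like AAC is the safer bet.
# Packet overhead is ignored, so the true requirement is a bit higher.

def pcm_bandwidth_mbps(sample_rate: int, bit_depth: int, channels: int = 2) -> float:
    """Raw PCM data rate in megabits per second."""
    return sample_rate * bit_depth * channels / 1_000_000

# CD-quality stereo (16-bit/44.1kHz):
print(round(pcm_bandwidth_mbps(44100, 16), 2))  # 1.41
# 24-bit/96kHz stereo:
print(round(pcm_bandwidth_mbps(96000, 24), 2))  # 4.61
```

Either figure is trivial for wired 100Mbit Ethernet, but on a busy wireless network — or one shared with other traffic — the headroom can evaporate, which is when the compressed formats earn their keep.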
Soundfly won't give you a multi-channel audio connection between your Macs, only a stereo one. Also, the Soundfly connection can't be brought directly into DP's mixing environment on the receiving Mac. But it's still a useful little thing, and across a wired network that isn't bogged down with other traffic it can operate with low latency and dependable audio quality. It can be downloaded free, with the option of making a donation, at www.abyssoft.com. Don't forget to install Soundflower, from www.cycling74.com/products/soundflower, if you don't have it.