Considering the best way to mix in a small modern studio leads Cutting Edge to speculate on a possible future for the interconnection of digital audio devices.
I've been trying to make what should be a very simple choice: what sort of small mixing device should I buy to use in my small computer music facility? It's thrown up all sorts of questions for me, and a few insights about where we're heading with music technology.
For a few months I've been using a USB audio interface for my I/O tasks, which, because I've been travelling a lot, have been fairly simple. My priority has been portability and flexibility, so I've been using a variety of computers — mostly laptops — and have tried to avoid having to open up desktop computers to insert PCI cards.
While all this works fine in theory (and, mostly, in practice), I now realise that it's not the way I really want to work. I'm not sure exactly why, but I suspect it's because I miss the 'look and feel' of a conventional studio environment. I feel like I'm going to work better in a space that's permanently set up, and that I don't have to share with the rest of my workload. But the last thing I want to do is fill up any of my precious living space with cable looms, MIDI leads and synth expanders. (I won't have to when we have Ultra Wideband wireless networking). Nevertheless, I need to be able to connect pretty much anything I want to my system, and have the highest possible quality of signal path. The facility for 5.1 or higher mixdown is also important to me, which means that surround monitoring is a necessity as well.
On top of all this, I really, really don't want to spend a lot of money. I'm not aiming to set up a commercial facility, nor even to produce music for professional or broadcast work. If I did, I'd do the project work at home and finish it off in a pro environment. But quality is important to me and I want the stuff I do to sound OK.
Pick & Mix
So the choices are: a small analogue mixer, a small digital mixer, or using a computer as a mixer, with just external I/O and perhaps a physical control surface.
That first option might raise a few eyebrows: this is the Cutting Edge column, after all, and analogue mixers are, like, soooo last century. But they do have a few things in their favour. They're easy to understand. They don't have latency issues. They're very versatile with I/O. They just work. And what I like most about them is that they act as a kind of universal translator. You can plug just about anything into an analogue mixer and get a result. No need to worry about digital formats, sample rates or word clocks. I really like that! Even now, I get calls from people with hideous digital sync and clocking problems, and if I don't have the time to sort them out I just suggest using an analogue mixer for the time being. They've worked for decades and haven't stopped working just because of digital mixers.
But, on the other hand, why not keep the whole setup digital, right up to the monitor amplifier? There's a good case for this too. Mixing on a computer, given a decent control surface, is an attractive option. Processing power is now so abundant that you can design your own mixer, and then control it via a quality external device with motorised faders. The only drawback, perhaps, is that control surfaces tend to be a bit generic, leaving some head-scratching moments while you work out which function is assigned to what control.
This would certainly be my favourite option for a very small setup. But only if I wasn't using much external kit, such as keyboards, expanders and outboard processors. I think the 'doing everything including mixing on a computer' approach works best in conjunction with software synths and effects, if that's not stating the obvious. Then you only need as many channels of I/O as your final mix output format dictates.
But my nagging problem is that I've got a whole bunch of stuff and some of it doesn't even use electricity, never mind being digital. I don't use the older kit much, but when I do I don't want to have to go through all sorts of dramatics just to be able to record it.
In fact, what I'd really like is Mackie's DXB. It's a serious-looking — and no doubt serious-sounding — piece of kit, judging by the report in last month's SOS. But, given that its projected price is over £10,000, I think I'm going to go for by far the cheapest option instead: a really good professional multi-channel I/O card, and a small analogue mixer. It'll still sound good, it'll just work, and it won't tax my ever-diminishing mass of grey matter when it all goes wrong.
Latency Crystal Display?!
Sometimes a new category of product is just so good that enthusiastic new users completely miss a fundamental problem with it. Such is the case with LCD televisions, which, when used for syncing audio to video, pose a serious problem!
Seen by most people as an unreservedly good thing (great picture, occupy less space, low power consumption, etc), LCD televisions are flying out of the shops, even though prices still have a long way to fall. The media industry is starting to use them as well. The following example is from the world of video, but anyone writing music to sync to video, or even considering making a music video, should read this cautionary tale.
I was recently called to a television production company in London to give an opinion on a problem that had been bugging them for a couple of weeks, ever since they had taken delivery of four new Avid Xpress Pro systems. The issue was that the audio seemed to be coming out of the system ahead of the video, by around three frames, or 120 milliseconds.
Now, I know that Avid wouldn't release a system that had such a fundamental flaw, and neither did I think that there was any kind of computer system issue. Rendering the timeline to a DVD-type MPEG-2 file proved that the information in the project was correct: sound and video played back fine on the computer screen and the computer's loudspeakers. But there was definitely a three-frame delay on the output monitor: enough to make lip-sync look less than solid, and completely useless on close-ups involving one thing hitting another. Chopping carrots highlighted the problem, and a close-up of a drummer in a music video would have been disastrous!
Then it occurred to me: LCD screens have a notoriously slow response time compared to CRT-based televisions. But even this couldn't account for such a serious audio offset. I was on the right track, though, because what was actually happening was that the LCD television had an intrinsic processing delay on the video of around 120ms! (As you'll know if you've ever used software synthesizers or samplers, that's around 10 times what's acceptable to a musician). Substituting the LCD television with a CRT-based one immediately corrected the delay.
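As a quick sanity check of the arithmetic above (assuming 25fps PAL video, the UK broadcast standard — the article doesn't state the frame rate, so that's my inference from the figures), here's how a frame offset converts to milliseconds:

```python
# Convert a video-frame offset to milliseconds of delay.
# Assumes 25 fps PAL video (40 ms per frame) — an inference,
# not a figure stated in the article.
FPS = 25

def offset_ms(frames, fps=FPS):
    """Milliseconds of delay for a given number of video frames."""
    return frames * 1000 / fps

print(offset_ms(3))  # three frames at 25 fps is 120.0 ms
```

Three frames at 25fps is exactly the 120ms offset observed — which is why the slow pixel response of the panel alone (typically a few tens of milliseconds at most) couldn't account for it.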
You may be tempted to think that, as a musician, you won't be bothered by this problem. But if you are one of the increasing number of composers experimenting with writing music for video, it could affect you very seriously. One solution is to run all the audio through the LCD television: the TV, of course, has to delay the incoming audio to keep it in sync with the video. I found that this worked in practice, but the sound quality suffered: domestic TVs are not designed for studio-quality audio. Nor is this workaround a complete solution. The worst manifestation of the problem is when an editor tries to make cuts in a video based on the beats in a music track. You'd do this by listening to the music and hitting the marker key on the keyboard in time with it, to place markers on the timeline. Even if you're monitoring the audio through the problematic LCD TV (in which case the audio will be in sync with the picture you're watching), it won't be in sync with the timeline on your computer!
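The compensation the TV performs internally — delaying the audio to match its own video latency — can be sketched very simply. This is a minimal illustration of the principle, not anyone's actual implementation; the delay figure and sample rate are assumptions:

```python
import numpy as np

def delay_audio(samples, delay_ms, sample_rate=48000):
    """Prepend silence so the audio arrives late enough to match a
    delayed picture. delay_ms is the display's video latency;
    48 kHz is an assumed studio sample rate."""
    pad = int(round(sample_rate * delay_ms / 1000))
    return np.concatenate([np.zeros(pad, dtype=samples.dtype), samples])

# 120 ms at 48 kHz works out to 5760 samples of padding.
audio = np.ones(100, dtype=np.float32)
delayed = delay_audio(audio, 120)
print(len(delayed) - len(audio))  # prints 5760
```

The sketch also makes the editor's marker problem obvious: the delayed copy is what you hear, but the computer's timeline still runs against the undelayed original, so anything you tap in time with the monitored audio lands 120ms late on the timeline.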
I think we're going to come across this problem a lot in the coming months and years. Before long, people are going to be buying large-screen LCD TVs and using them with surround systems in their living rooms. No matter how good their surround system is, it will be out of sync with the picture!
I apologise for dwelling on this, but it's an issue that anyone involved with music and video should be aware of. It would be so easy to spend weeks on a complicated project, only to have to do it again because of such a fundamental problem.
All of this is just a preamble to the real point I wanted to make this month, which is that digital audio devices seem almost to be 'merging' together. It's actually rather difficult to explain this, but I'll have a go.
Until very recently, different pieces of kit were connected using wires that carried signals. I know I'm stating the obvious again, but stay with me! What I mean by 'signal' is either an analogue one, which really needs no further explanation, or a digital one, where the 'signal' is some kind of synchronous or self-clocking series of bits. The important thing to note here is that the digital signal is 'clocked' along the wire at a rate that is proportional to the sample rate. The benefit of this is that you always know that what goes into the wire is going to come out in a timely fashion, and in such a way that you can even slave the receiving equipment's clock to the incoming signal.
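To make 'clocked at a rate proportional to the sample rate' concrete, consider S/PDIF — my choice of example, not one the column names. Each stereo frame is carried as two 32-bit subframes, and biphase-mark coding doubles the transition rate on the wire, so the line rate is locked to the sample rate by a fixed multiplier:

```python
# Back-of-envelope: why a digital audio 'signal' is tied to the
# sample rate. Figures are for S/PDIF (IEC 60958): 64 bits per
# stereo frame, doubled by biphase-mark coding.
def spdif_line_rate_hz(sample_rate):
    bits_per_frame = 64   # two 32-bit subframes (left + right)
    biphase_factor = 2    # biphase-mark doubles the transition rate
    return sample_rate * bits_per_frame * biphase_factor

print(spdif_line_rate_hz(48000))  # prints 6144000, i.e. 6.144 MHz
```

Because the multiplier is fixed, the receiver can recover the sender's sample clock directly from the wire — exactly the 'slaving to the incoming signal' described above.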
But now there's another way for a signal to travel down a wire. Audio can be sent as data, rather than a signal. Now, these two terms are certainly not mutually exclusive, and I should add that this characterisation is just the way I see things. But I do think that there's a very important distinction to be made, for this reason. When you connect two devices together with a data path, as opposed to a signal path, what you do is extend what I call the 'processing space' between the two items. In other words, you create a distributed processing region that includes two or more pieces of kit.
Another way to look at it is to say that we can send both audio media data and data associated with the execution of DSP programs (or even the programs themselves). This is very significant, because what it does is extend the essence of a process beyond the physical limits of the equipment. Several audio devices now use FireWire as a data connection, for example. This allows digital mixing devices to be integrated with host computers much more closely than before: you can run VST applications on dedicated hardware while still integrating them as plug-ins in your sequencer.
As data connections get faster (as they will with new structures like PCI Express), we're going to see more and more distributed processing going on, as mixing desks, computers, keyboards and even mobile phones offer to join in the fun. And once you get used to the idea of an extended processing space, you can start to think about a wider I/O space, where devices in a studio will offer their I/O to any element of the studio that might need it.
These are early, highly abstract, and quite possibly nonsensical thoughts, but I think we will start to think of studios more in these terms. Such extended processing spaces will also exist in time as well as space — and before you think I've gone completely mad, perhaps I'd better explain what I mean by that! The biggest enemy of distributed audio processing is latency. The more devices you have, and the more complex the data paths between them, the greater the potential for latency. It may well be that we have to consider dividing a studio into different 'time zones' where, for example, a particular setup might be valid for mixdown, but not for live recording. And time is an even bigger problem when you have to sync audio to video, as you'll see from the anecdote in the 'LCD' text box.
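The 'time zones' idea can be sketched as a simple latency budget. Every figure below is illustrative — device names, per-hop delays and the thresholds are my assumptions, not measurements; the point is only that latencies in a distributed chain add up, and the total decides what the setup is valid for:

```python
# Hypothetical latency budget for a distributed processing chain.
# All figures are illustrative assumptions, not measured values.
chain_ms = {
    "audio interface":   3.0,
    "networked DSP box": 8.0,
    "software mixer":    6.0,
}

LIVE_BUDGET_MS = 12.0     # roughly what a performer will tolerate
MIXDOWN_BUDGET_MS = 100.0 # far looser: nobody is playing along

total = sum(chain_ms.values())
print(f"{total} ms: live OK? {total <= LIVE_BUDGET_MS}, "
      f"mixdown OK? {total <= MIXDOWN_BUDGET_MS}")
# 17.0 ms total: fine for mixdown, too slow for live monitoring
```

In other words, the same physical setup sits in one 'time zone' for mixdown and another for tracking — and each extra device or network hop pushes the total further from the live end of the scale.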