The GMICS home page at www.gibson.com/products/gmics/th....
Gibson are best known for their good old‑fashioned guitar technology, but they're also behind an innovative new standard for connecting digital audio devices — which, if it takes off, will revolutionise the way we connect gear together. Dave Shapton finds out more...
News this month of an intriguing new digital audio interconnection standard. Called GMICS (Global Music Instrument Communication Standard), it is designed to replace audio cabling for live instruments, including guitars! This proposal seems important enough to spend some time looking at the techniques and issues behind it in detail.
I've always been interested in new ways to connect things together, especially when digital audio is involved. At the Spring AES show in 1987 the first details of a new audio connection called MADI (Multitrack Audio Digital Interface) were revealed. Capable of carrying around 56 channels of digital audio, it had the potential to kick‑start the digital audio revolution by simplifying wiring and interconnection in an all‑digital studio.
Well, MADI is still around today (Euphonix's just‑announced S5 digital desk uses it) and has a useful role to play; but it is certainly not the patch‑free panacea I hoped it might be. There are several reasons for this. It was expensive to implement, but that's not such a problem now. No, the biggest issue with MADI is that it is a point‑to‑point protocol. It's very good at delivering dozens of channels from a digital multitrack to a digital desk, but if you want to insert or break out connections along the way, then you are out of luck. In fact, the only way I can think of to do this is to decode every channel to AES‑EBU (which is the core protocol within MADI), extract the wanted channels, and then recombine the remaining ones. This, as it sounds, would be expensive, messy, and ultimately pointless.
There are other possible approaches — although, surprisingly, there is still no universal standard for multitrack digital audio connection. Alongside the de facto MDM (modular digital multitrack) formats such as Alesis' ADAT and Tascam's TDIF, there are high‑speed networks. Where multitrack digital‑audio interconnect formats like MADI and ADAT lack versatility, networks excel. Although you can use a network for a point‑to‑point connection, every device on a typical network is individually addressable, and can send and receive its own data stream, potentially to and from every other device on the network.
Drawbacks? There are several, and they all involve timing. The trouble with networks is that they are intrinsically unpredictable. Any device on a typical network can send data at any time, and when two devices transmit at the same moment a data collision occurs. The network hardware on each device then waits a random time before sending the data again, thereby avoiding that collision — although the retransmission may well collide with something else this time. Note that the problem here is not one of data rates. Getting the right amount of data across a network is just a matter of using a fast enough network in the first place. Even with collisions and other overheads, you can be reasonably confident that a constant amount of data will get through in a given time. The real issue is that, although you can predict how much data will get through, you can't say exactly when it will arrive.
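To see why the arrival time is unpredictable, here's a minimal sketch of that collide-and-retry behaviour. It's a toy model in Python rather than anything from a real network stack, and the slot time and collision probability are figures I've invented purely for illustration:

```python
import random
import time

SLOT_TIME = 0.0000512   # the classic 10Mbit Ethernet slot time of 51.2 microseconds

def send_with_backoff(transmit, max_attempts=16):
    """Toy model of collide-and-retry: on a collision, wait a random number
    of slot times (the range grows with each attempt, as in Ethernet's
    exponential backoff) and then try again."""
    for attempt in range(1, max_attempts + 1):
        if transmit():                        # True means no collision this time
            return True
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        time.sleep(slots * SLOT_TIME)         # a random, hence unpredictable, delay
    return False                              # give up after too many collisions

# Two devices sharing a busy cable: assume a 30 percent chance of a collision.
busy_cable = lambda: random.random() > 0.3
print("delivered" if send_with_backoff(busy_cable) else "dropped")
```

The data almost always gets through, but the random waits mean nobody can say exactly when.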
This leads to a problem familiar to anyone who has tried to overdub using a computer sequencer and a soundcard: latency. To receive real‑time data over a network in the right order and with no errors, you have to buffer the information as it arrives. A buffer is like the departure gate at an airport. Passengers are told which gate to assemble at, where a final check on numbers is performed. They are told to arrive early, in the hope that they will all be there when the plane departs. Even if there are stragglers, everyone else is ready to go as soon as they turn up. This works well. But, just as everyone is irritated by having to arrive ridiculously early for a flight, latency annoys musicians and recording engineers, and in some cases, makes life impossible. Imagine having to sing in time with a backing track knowing that your voice was going to be delayed by a quarter of a beat as it was being recorded.
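To put some rough numbers on that, here is a back-of-the-envelope calculation; the buffer size, sample rate and tempo are all chosen purely for illustration:

```python
SAMPLE_RATE = 44_100      # samples per second (CD-quality, chosen for illustration)
BUFFER_SAMPLES = 4_096    # a plausible receive buffer, again purely illustrative

buffer_latency_ms = BUFFER_SAMPLES / SAMPLE_RATE * 1000
print(f"buffer latency: {buffer_latency_ms:.0f} ms")        # roughly 93 ms

beat_ms = 60 / 120 * 1000                                   # one beat at 120 bpm
print(f"a quarter of a beat: {beat_ms / 4:.0f} ms")         # 125 ms
```

A buffer of that size delays the monitored signal by nearly a quarter of a beat at 120bpm, which is exactly the sort of figure that makes overdubbing miserable.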
GMICS To The Rescue?
GMICS is a proposal for a standard that combines some of the properties of networks with the beneficial characteristics of 'conventional' digital multitrack links. Remarkably, it overcomes the biggest disadvantages of both. From the outset, it is clear that GMICS is a serious proposal. The basic specification allows 16 tracks of up to 24‑bit audio, and high sample rates (although the track count is halved if 192kHz is used — hardly a problem for live use!). What makes GMICS seem like a practical commercial proposition is that it is based around freely available computer network technology. This means that it is potentially cheap to implement and manufacture, and that research and development can concentrate on optimising its use for music, rather than inventing a totally new type of digital infrastructure.
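A quick sanity check shows why ordinary network hardware is up to the job. The sample rates below are my own assumptions (the specification only tells us that the track count halves at 192kHz), but the arithmetic makes the point:

```python
def audio_payload_mbits(channels, bits, sample_rate_hz):
    """Raw audio payload in megabits per second, ignoring framing overhead."""
    return channels * bits * sample_rate_hz / 1_000_000

# Sample rates here are assumptions for illustration, not quoted GMICS figures.
print(audio_payload_mbits(16, 24, 96_000))    # ~36.9 Mbit/s for 16 tracks at 96kHz
print(audio_payload_mbits(8, 24, 192_000))    # ~36.9 Mbit/s for 8 tracks at 192kHz
```

Either way, the payload sits comfortably inside the kind of 100Mbit/s link that commodity Fast Ethernet components already provide, which is presumably why basing a standard on off-the-shelf network parts looks commercially viable.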
But didn't I just say that networks pose problems for live music use? Yes — and to make matters worse, GMICS is primarily designed for stage work! In fact the company behind it is Gibson, known more for their guitars than their digital audio eminence.
Well, this is where it gets really clever. If you've ever tried connecting several digital devices together, you'll know that there are a few basic rules to follow. Don't mix sample rates. Don't mix formats. And don't expect it to work without a common sample‑clock. A digital audio system is like a gearbox: put the wrong‑sized cog in and something has to give. Either everything stops, or — for want of a more technical explanation — it goes bang. Actually, there is another possibility, which is that the cogs start slipping with a horrible grinding noise. All of these mechanical phenomena have their counterparts in the digital audio domain, and you wouldn't want any of them to happen during a live performance!
GMICS uses industry‑standard Category 5 ('Cat 5') computer cables with RJ45 connectors. These look a bit like telephone jacks, but are squarer and chunkier and, best of all, self‑locking. The cables have eight conductors, of which GMICS uses four to carry data; the rest are used to provide power, including phantom power. Crucially, the interface is bi‑directional, which means that a clock signal can be sent back to a device to synchronise it with the system. One device in the network is designated the System Timing Master, and acts as a hub (a typical STM device will be a digital mixer). Using this architecture, many digital sources can be plugged into a central device without synchronisation problems.
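The hub-and-spoke clocking idea is easier to see in miniature. The following is purely an illustration of the architecture just described, not the GMICS protocol itself; the class and method names are mine:

```python
class Device:
    def __init__(self, name):
        self.name = name
        self.sample_rate_hz = None          # unknown until a master supplies a clock

class SystemTimingMaster(Device):
    """Illustrative model only, not real GMICS code: one device (say, a digital
    mixer) owns the sample clock, and every device plugged into it is slaved to
    that clock over the same bi-directional cable that carries its audio."""
    def __init__(self, name, sample_rate_hz):
        super().__init__(name)
        self.sample_rate_hz = sample_rate_hz
        self.ports = []

    def connect(self, device):
        device.sample_rate_hz = self.sample_rate_hz    # clock sent back down the line
        self.ports.append(device)

mixer = SystemTimingMaster("digital desk", sample_rate_hz=96_000)
for name in ("guitar", "bass", "vocal preamp"):
    mixer.connect(Device(name))

print([(d.name, d.sample_rate_hz) for d in mixer.ports])   # everything on one clock
```

With every source taking its timing from the same master, there is simply nothing left to drift or slip.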
GMICS looks promising. It is a genuine and apparently successful attempt to combine all of the attributes necessary to make a digital audio system flexible and foolproof enough for live use. The same techniques can be used with very‑high‑speed optical networks, which will be able to carry hundreds of channels. But it also raises several questions...
Guitars and all acoustic instruments are analogue devices: typically, they do not provide a convenient stream of digits to send over a digital network. For any instrument to work with GMICS it will have to be fitted with analogue‑to‑digital converters. This is a fine theory, but given the almost mystical properties which guitarists seem to ascribe to their pickups, I wonder how many would willingly subject them to the vagaries of an analogue‑to‑digital converter. Mind you, they seem quite happy to chain together any number of digital effect pedals, each with A‑D and D‑A converters of dubious origin, and some with 8‑bit processing. This raises an interesting question: if you were to replace the Byzantine wiring that lurks on the average guitarist's pedal board with a pristine, transparent, all‑digital connection, would you be losing some intangible part of the overall sound? I guess you could always emulate it in DSP!
The other thing that concerns me is what happens if things go wrong. If part of your stage setup gives up the ghost in the middle of a set, you can fiddle with it, kick it, change cables, or get your roadie to fix it with a soldering iron (or blow‑torch). Personally I'd rather deal with that than have an error message saying 'Invalid channel arbitration sequence'. (I made that up, but I think it's a valid point.) These reservations aside, I think Gibson deserve kudos for proposing the GMICS standard, which is the closest thing I've ever seen to a digital audio counterpart to MIDI.
Low‑Level Lines
In the early days of personal computing (that's all of 20 years ago), programs were written in machine code, a language that a microprocessor could understand directly. Graphical user interfaces were just showing up on profoundly expensive computers such as the Apple Lisa, the forerunner to the Mac, and multimedia APIs (standard software interfaces that can, for example, play a WAV file with a simple command) were science fiction.
Programming directly in machine code, rather than in higher‑level computer languages such as C, remains the best way to get the maximum performance from a processor. Any software package that has a significant amount of real‑time signal‑processing capability is likely to be written using code that talks directly to the processor, rather than relying on translators (called 'compilers' or 'interpreters') to convert the thoughts of the programmer into low‑level processor commands.
I've just been playing with one of the best examples of machine‑code programming I've ever seen, and it comes from Avid, owners of Digidesign. Avid have been making non‑linear (hard disk) video editing systems for the last 10 years, and have a reputation for making professional but eye‑wateringly expensive products. If you've ever done audio editing on a computer, you'll have no trouble adjusting to video editing. You'll find that the biggest difference is the amount of data that has to be moved around — as much as 23MB per second in the case of uncompressed video. Needless to say, you need some pretty heavy‑duty hardware to cope with this quantity of data.
Until now, that is, because Avid are just about to release a professional video‑editing system that needs no additional hardware. Called Xpress DV, it needs only a FireWire (IEEE 1394) I/O card to allow you to do loss‑free editing in the 'DV' format. The video picture you see on the computer screen is decoded in real time using your computer's own processor. Now, this technique is not new: the Pinnacle DV300 is a low‑cost video card that works on the same principle and has been around for a couple of years. What's different about the Avid is the sheer speed of it. Scrubbing the cursor along the video timeline is effortless. Playback is as good as you could want. In fact, I'd say it works better than some hardware solutions. Oh, and it plays back eight tracks of audio in real time as well!
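Some rough arithmetic shows roughly where that uncompressed figure comes from, and why DV over FireWire is so much easier on the computer. The 8-bit 4:2:2 PAL frame below is my assumption; DV's nominal 25Mbit/s video rate is well documented:

```python
# Uncompressed standard-definition PAL, 8-bit 4:2:2 (two bytes per pixel on average,
# my assumption for illustration)
frame_bytes = 720 * 576 * 2
uncompressed_mb_per_sec = frame_bytes * 25 / 1_000_000          # 25 frames per second
print(f"uncompressed SD: ~{uncompressed_mb_per_sec:.0f} MB/s")  # about 21 MB/s

# DV compresses the video to a nominal 25 megabits per second
dv_mb_per_sec = 25 / 8
print(f"DV: ~{dv_mb_per_sec:.1f} MB/s")                         # about 3 MB/s
```

The first figure is in the same ballpark as the 23MB per second quoted above, once audio and overheads are added. At around 3MB per second, by contrast, a DV stream is something a late-'90s hard disk and processor can shift and decode in software, which is what makes a hardware-free editor plausible.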
A Touch Of MAJC
Mentioning machine code brings me back to the Sun MAJC chip I talked about a couple of months ago. Details are still sketchy, but I've heard reports that the chip will be capable of a staggering 6 Gigaflops — this when Apple are crowing about breaking the 1 Gigaflop barrier with the new G4 chip in the latest Macs. To be fair to Apple, their chip is shipping, whereas you can't buy a MAJC processor yet. It's also not surprising that MAJC is several times faster, because it actually has several processors on one chip. Sun's recent disclosure of the chip's architecture tells us that the chip itself can handle 'Just‑In‑Time' compilation of Java, and that several Java processes can be executed directly. You could almost say that Java is MAJC's machine code.
Each of MAJC's processors can support several Java Virtual Machines, and the performance will be such that two streams of MPEG‑2 (that's DVD) video can be decoded simultaneously. AC‑3 5.1 audio can be decoded using only 7 percent of the chip's capacity. Very impressive, especially since Sun are hoping to ship their first MAJC chips next Spring. But, to me, the most important aspect of their announcement is that — for the first time — you'll be able to build devices that can decode audio and video from any format, in real time, because you can send the means to decode the format with the media itself.
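Taking those numbers at face value (and they are only reported figures, not measurements), the AC-3 claim implies plenty of headroom:

```python
majc_gflops = 6.0        # reported peak figure for the MAJC chip
ac3_share = 0.07         # AC-3 5.1 decoding said to need 7 percent of the chip

ac3_gflops = majc_gflops * ac3_share
print(f"AC-3 decode: ~{ac3_gflops:.2f} Gigaflops")              # about 0.42 Gigaflops
print(f"left over:   ~{majc_gflops - ac3_gflops:.2f} Gigaflops")
```

That leaves well over five Gigaflops for video decoding and whatever else a 'decode anything' device might need.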