Much effort is currently being invested in establishing new digital audio interface standards for the future — but shouldn't we also be thinking about bringing MIDI into the 21st century?
When I first got into making electronic music, about five years ago, MIDI seemed to offer all the answers. With multiple channels of real‑time control data, I could build a complete yet flexible arrangement, leaving the exacting business of recording until last.
Of course, I soon discovered that it wasn't that simple. Before long I was struggling with the problems familiar to many SOS readers — patch management, dodgy timing, and the rest. Over the years I've grown to know and love digital recording, but I still long for the control and spontaneity that MIDI first promised.
The main weakness of MIDI is timing. A millisecond's delay to transmit a note‑on message is negligible, but as more notes and controller data are added, accuracy inevitably starts to degrade. While 16 channels and polyphony to match is the norm on sound modules these days, few instruments are used to their full capacity because of this bottleneck at the MIDI In port.
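To put a number on that bottleneck, here's a back-of-the-envelope sketch of MIDI transmission times, using the figures from the MIDI 1.0 specification (31,250 baud, with each byte framed by a start and a stop bit):

```python
# MIDI runs at 31,250 baud; each byte costs 10 bits on the wire
# (one start bit, eight data bits, one stop bit).
BAUD = 31250
BITS_PER_BYTE = 10

def transmit_ms(num_bytes):
    """Time in milliseconds to push num_bytes down a MIDI cable."""
    return num_bytes * BITS_PER_BYTE / BAUD * 1000

# A single note-on is three bytes: status, note number, velocity.
single_note = transmit_ms(3)        # 0.96 ms -- negligible on its own

# A ten-note chord with running status: one status byte, then
# two data bytes per note.
chord = transmit_ms(1 + 10 * 2)     # 6.72 ms -- the smear starts to show

print(f"one note-on:     {single_note:.2f} ms")
print(f"ten-note chord:  {chord:.2f} ms")
```

Under a millisecond for one note, but nearly 7ms to get a big chord out of the port — before any controller data joins the queue.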
Then there's control resolution. MIDI offers 128 possible values for any given controller, which often results in zipper noise, or wasted DSP cycles spent smoothing incoming data. To hear the first problem in action, just try assigning your mod wheel to control pitch bend. It sounds stepped because the dedicated pitch-bend message carries two data bytes rather than one, giving 16,384 possible values — 128 times the resolution available from continuous controllers. Modern components are capable of handling much finer data values, but again MIDI limits how accurately you can adjust the sound.
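The difference is easy to quantify. Taking the common ±2-semitone bend range as an example, here's how coarse each step of a 7-bit controller is compared with the 14-bit pitch wheel:

```python
# Step size of a +/-2-semitone pitch bend at 7-bit and 14-bit resolution.
BEND_RANGE_CENTS = 400          # two semitones up plus two semitones down

def step_cents(bits):
    """Pitch change, in cents, caused by one controller step."""
    steps = 2 ** bits - 1       # 127 steps at 7 bits, 16,383 at 14
    return BEND_RANGE_CENTS / steps

coarse = step_cents(7)    # mod wheel: ~3.15 cents per step -- clearly audible
fine = step_cents(14)     # pitch wheel: ~0.024 cents per step -- smooth

print(f"7-bit:  {coarse:.3f} cents/step")
print(f"14-bit: {fine:.4f} cents/step")
```

A 3-cent jump per step is well inside the range most listeners can hear on a sustained tone, which is exactly the stepping the mod-wheel experiment exposes.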
The falling price of multitrack recording has led many people to just shrug their shoulders and record digital audio from the early stages of a project, enjoying near‑perfect timing and playback. But the more of a project that is stored as audio, the less spontaneous the composition process becomes. If you decide halfway through a project that you would rather use a different sound, or alter the groove, on a track that has already been recorded, everything has to stop while you record the new version. And while MIDI should make live electronic music a joy to perform, the reality is that few musicians feel comfortable about running the entire show that way, and only the foolhardy do so without a fallback plan.
The original MIDI standard has stood the test of time astonishingly well. Few other digital protocols have lasted 17 years or proved so adaptable to different uses. Ironically, its very flexibility is now holding it back — it has worked so well that nobody wants to be the one to propose changing it. But when the basic issues of speed and resolution prevent us from getting the most out of our gear, it's time for an overhaul.
FireWire is looking like the killer new technology that could reduce the mess of cables and patchbays in most studios to a single elegant optical fibre. But in all the excitement, nobody seems to have thought to upgrade the MIDI standard to take advantage of this. Improving MIDI's timing and data resolution would be trivial over such a link, and bringing the transmission speed up to date would open up many new possibilities. Imagine synthesizers able to 'publish' the details of how many envelopes or LFOs they have, or even transmit those values as control data to modulate other gear. Wouldn't it be nice to morph between patches just by telling your sequencer to generate the control data? While we're at it, perhaps we could finally rid ourselves of the mess of bank and patch selection methods and standardise patch architectures a bit.
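That morphing idea needs nothing more exotic than interpolation. Here's a minimal sketch, assuming a hypothetical synth whose filter cutoff and resonance have been assigned to controllers 74 and 71 (the patch values and CC assignments are illustrative, not any particular instrument's):

```python
# A sketch of patch morphing by generating controller data.
# PATCH_A and PATCH_B map (hypothetical) controller numbers to values.
PATCH_A = {74: 20, 71: 100}    # dark and resonant
PATCH_B = {74: 110, 71: 30}    # bright and tame

def morph(a, b, t):
    """Linearly interpolate two patches; t runs from 0.0 (all a) to 1.0 (all b)."""
    return {cc: round(a[cc] + (b[cc] - a[cc]) * t) for cc in a}

# A sequencer could emit one such snapshot per tick over the morph time.
for step in range(9):
    print(morph(PATCH_A, PATCH_B, step / 8))
```

With only 128 values per controller, of course, such a morph would itself be stepped — which is rather the point of the whole argument.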
The alternative is a fracturing of standards. Already there are several competing plug‑in formats. Some offer tighter sequencer integration, some wider compatibility, others finer control. These splits in the market make life difficult for the consumer, and limit the potential sales for the developers. Meanwhile, the strengths of hardware gear are being obscured by the poor bandwidth and outdated conventions of MIDI. Many musicians are ditching hardware simply because it can't communicate fast enough with the computer.
In 1983 the designers of MIDI were whistling in the dark, trying to design a communications protocol that would work for any conceivable music application. They succeeded brilliantly, and sold tons of gear to boot. Now manufacturers should review everything that they have learned about MIDI, and give us a worthy successor. Otherwise, an outdated protocol is going to become a stumbling block in the studio — and the market.
If you'd like to air your views in this column, please send your ideas to: Sounding Off, Sound On Sound, Media House, Trafalgar Way, Bar Hill, Cambridge, CB3 8SQ, UK. Any comments on the contents of previous columns are also welcome, and should be sent to the Editor at the same address.
Eddy Robinson became obsessed by electronic music in 1996 and bought a keyboard one year later. Drawn to the technical end of synthesis, he abandoned a perfectly good career in computing to spend more time making weird sounds. He now has a much more interesting job in the pro audio industry.