Somewhere in the North‑East of England, electronic musician and designer Ron Berry has been applying an old‑style analogue modular synth to the creation of a very contemporary phenomenon — physical modelling synthesis. Jonathan Miller finds out how it's done.
As Martin Russ explained last month in his 'Model World' feature on the theory behind physical modelling synthesis, the principles behind this type of synthesis needn't be applied only to digital synths. One forward‑thinking individual who's been using analogue modular synths to put modelling techniques into practice for many years is veteran UK electronic musician Ron Berry (see SOS February 1996 for an interview). Ron leads a multi‑faceted life, dividing his time between various music recording activities, freelance designing for electronics companies, and converting country abodes into home recording studios.
While renowned for having always built his own electronic instrumentation and recording equipment, Ron has also been experimenting with acoustic modelling, using techniques gleaned from an article he read in a specialist computer magazine as far back as 1980. This admittedly heavy reading detailed pioneering research being carried out by Americans Kevin Karplus and Alex Strong, who took a computer register, loaded it with random numbers, then created a type of cyclical delay to reorganise the numbers. The result, apparently, was a very realistic string sound — pretty impressive stuff at a time when UK popular music circles were still awed at the simplistic analogue synth tones of Gary Numan and Orchestral Manoeuvres In The Dark.
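The Karplus‑Strong trick described above is easy to sketch digitally today. The following is a minimal illustration (not Karplus and Strong's original program): a buffer of random numbers is recirculated through an averaging delay loop, and the averaging acts as a gentle low‑pass filter, so the high harmonics die away first, just as on a real plucked string.

```python
import random

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Minimal Karplus-Strong plucked string: a register of random
    numbers recirculated through an averaging (low-pass) delay loop."""
    n = int(sample_rate / freq_hz)           # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the random numbers
    out = []
    for _ in range(int(duration_s * sample_rate)):
        first = buf.pop(0)
        # average adjacent samples: this cyclical 'reorganising' step is a
        # gentle low-pass filter, so the high harmonics decay fastest
        buf.append(damping * 0.5 * (first + buf[0]))
        out.append(first)
    return out

samples = karplus_strong(440.0, 0.5)   # half a second of an A4 'string'
```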
Looking back, Ron claimed his first thought upon assessing this breakthrough was, "Hey, you could do that by using an analogue delay line and feeding pulsed noise into it." Spurred on by his initial success in wiring up a flanger unit with a built‑in delay line to a noise envelope generator, and looping it around itself until it was almost feeding back — "...a most impressive noise!" — he promptly set about adding the necessary modules to his modular synth to enable him to put his own new‑found theories into practice: "I found more articles by people who'd been analysing musical instruments to find out how they worked. Various American, French and Swiss institutions were also trying to make acoustic models, but they were all working on massive computer mainframe installations and it took them a long time to compile and write these programs. I thought I could make better use of my time by doing it in the old‑fashioned analogue way, because I already had the synthesizers. All I needed to do was add a few modules and all of a sudden I was in the wonderful world of acoustic modelling, although this eventually took 12 months to do successfully!"
Realising he was possibly onto a good thing, Ron looked into patenting his analogue acoustic modelling techniques, only to find this prohibitively expensive. Instead, in 1988, he published his findings in a paper entitled Experiments In Computer‑Controlled Acoustic Modelling (A Step Backwards?); the abstract describes a series of experiments in the electronic modelling of the acoustics of a variety of real instruments, and a truly affordable computer system used to construct them. Ron wrote: "This work arose out of the desire to create electronic instruments very cheaply and in a reasonable amount of time, but which, nevertheless, sounded like real instruments. As a musician working with, and tiring of, traditional synthesis methods, obtaining that peculiar quality of sound that only real instruments have was the goal, within limits set by time and expense."
Several years ago, I attended a seminar held at the now defunct Projects UK arts centre and recording complex in Newcastle‑upon‑Tyne, in which Ron ably demonstrated his findings, to a small but suitably captivated audience, by way of a series of taped acoustic modelling sound examples. I recall being particularly impressed by a strange, physically impossible 'instrument' producing the interesting effect of famed astronomer Patrick Moore's voice exciting the resonant modes of a bell model! So, to paraphrase another popular television show, how does he do that?
For the majority of us, this is mind‑boggling subject matter. I was therefore delighted when Ron invited me to his spacious countryside studio for a spot of hands‑on acoustic modelling — of the more basic variety, of course! For the purpose of this introductory lesson, Ron chose to guide me through the creation of two simple acoustic models of real instruments — a trumpet, then the slightly more complex saxophone. First, though, let's explore a few basic principles. Over to Ron...
"Acoustic modelling goes back to the days when people were experimenting with echoes. With an echo, if you stand 50 feet away from a high wall and clap your hands, you'll first hear your own clap. The sound then heads for the wall at 1,100 feet per second, where it comes across a different medium — something solid instead of air — reflects 50 feet back again and you hear the echo, roughly a tenth of a second later.
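The arithmetic here is easy to check: the clap travels to the wall and back, so the delay is twice the distance divided by the speed of sound (using the article's round figure of 1,100 feet per second).

```python
SPEED_OF_SOUND_FT_S = 1100.0   # the round figure used in the text

def echo_delay(distance_ft):
    """Round-trip delay, in seconds, for a clap reflecting off a
    wall distance_ft away."""
    return 2 * distance_ft / SPEED_OF_SOUND_FT_S

print(echo_delay(50))   # about 0.09s -- roughly a tenth of a second
```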
"In the '50s and '60s electronic designers started looking at what an echo did, and produced electronic devices that allowed you to send sound into something and get it back again a tenth of a second later, or whatever. One such device was the tape echo, where sound is converted into an electrical signal, put onto tape, then taken off the tape a little later, converted back into an electrical signal, then back into sound.
"Later effects such as phasing and flanging are really about modelling air movements, so when you're discussing the modelling of musical instruments, it must be remembered that you're also modelling acoustic phenomena. That's really the methodology of acoustic modelling: you look at how something functions acoustically, and try to look for ways of simulating it electronically. If you're working with pure electronics, then you're working with voltages, trying to find ways of getting electrical signals to go around a circuit in a similar way to a sound propagating around an instrument. If you're using a computer, you're looking at electrical signals converted to numbers, and the success of the model is related to the way in which the numbers flow through the algorithm you're using."
"Obviously, there are degrees of accuracy in acoustic modelling. By way of an analogy, you could have a crude plastic model roughly in the shape of a steam engine. This model is recognisable as a steam engine, yet you could have another model that is absolutely perfect in every detail, but with no moving parts. Or you could have a full‑scale working model that pulls kids around a park. They're all steam engines, but different sorts of models.
"Acoustic modelling with electronics in a modular synthesizer limits the number of devices that are available to you — once you run out of modules, that's it. All the models that I do are really basic, but that doesn't bother me because I'm not interested in producing a model that's perfectly accurate in every detail. By using computers and working with algorithms you have access to a more elaborate set of functions and can create a more detailed model, but the principles are the same. With computing, the limit is processing power.
"When acoustically modelling real instruments, there's a difficulty in that you have to decide how the instrument is going to be played. Playing a modelled trumpet on a keyboard is a lot different to playing the real thing using your mouth, and it's the same with a saxophone. The saxophone is an extremely expressive instrument and this expression is directly related to how the performer plays it. Breath control is a very sensitive and subtle skill. Again, by playing a model of a saxophone on a keyboard, you've simply got a keyboard that sounds like a saxophone, but you haven't got the expression of the real thing. To get this expression, you really have to get the model to behave like a saxophone, with some kind of breath control and a set of valves so you can pick out all the possible notes and harmonics.
"In the case of a trumpet, the valves make it possible to reach notes that aren't in the natural horn overtone series. To model a trumpet really accurately you should theoretically have an instrument that's got push buttons, to represent the valves, and a breath controller — but then you're not that far away from a real trumpet, so why bother, since it would be just as hard to play!"
"Before moving on to the specific modelling of these instruments, we need to briefly look at the acoustic working mechanics of a pipe. After all, both the horn and saxophone are essentially musical 'pipes'. If you analyse a musical horn at its most basic level, you've got a pipe and somebody putting their lips around one end — obviously with some kind of mouthpiece to make it easier to blow — and a flare at the end. But we'll go into that later.
"Let's see how this works: the first thing you've got [see Figure 1] is the pipe, with lips at one end. When the lips are closed the pipe is blocked, but when you make a sound with the lips, this sound has to travel down the pipe at 1,100 feet per second, or one foot per millisecond. If we say the pipe's four feet long, this first sound is going to arrive at the end of the pipe after four milliseconds. Since this is a fairly high‑pressure situation, the sound emerges into the air at the end of the pipe and suddenly expands, creating a big disturbance. The sound carries on through the air and into the ear, but because of this sudden transient drop in pressure as the sound spreads out into the air, some of the pressure drop travels back down inside the pipe and hits the lips again eight milliseconds later — like the hand‑clap hitting the wall, as described earlier. However, in the case of the pipe, the sound then reverberates back out of the end again, so you eventually get a reverberation effect, with a series of echoes eight milliseconds apart but decaying with each echo as the sound loses energy.
"The next thing you have to look at is what is happening with the lips. If the person is vibrating their lips at a certain frequency to produce a note, the lips could be closed when this first signal gets back to them, as above. Alternatively, they could be open, to let some more air through, so this time the sound reflection could be different as it meets a partially blocked pipe. The reflection is governed very strongly by whatever conditions it encounters: if the lips are fully open and there is air coming out, you get the force of two air pressures against one another — they might cancel; they might not. If the lips are only partially open, because the air has dispersed and the lips are collapsing, or the air pressure is building up and forcing the lips apart, then there's going to be a non‑linear reflection. The amplitude of the reflection is being changed. It's being heavily modulated by the action of the lips. When the rate of vibration of the lips roughly matches the fundamental of the pipe, this is when the horn player produces a constant note."
"We should now have some understanding of how it's working: sound travels down a pipe and each time it comes back it meets a different condition. The lips are like a valve opening and closing very rapidly, letting air through. So if you wanted to make a model of that, first you'd have to ask yourself whether you were working with a computer program or with electronics. Electronically, I need something that I can send a signal into and get a signal out of some time later, which is a standard electronic device — a delay line.
"As previously stated, a four‑foot pipe would require an 8ms delay. Likewise, we've already established that the lips are vibrating like a valve that is, in turn, controlling a reflection. In the electronic world [as depicted in Figure 2] this is achieved with a voltage controlled amplifier (VCA). A VCA is also just like a valve, with input, output and control stages. When there's no control voltage nothing gets through, and, conversely, when the voltage is set to maximum everything gets through — what goes in comes out. If you can rapidly control that, you can control the frequency, as in the pipe. By connecting the output of the VCA to the input of the delay line, and the output of the delay line back into itself and back to the VCA, you in effect create the pipe using a delay, with the VCA acting as a valve or 'lips'.
"Next, we need something to vibrate the lips — some kind of oscillator. Through stroboscopic experiments with a glass pipe, researchers have found that lips actually vibrate in a sine wave pattern, which is easily achievable with an oscillator that is connected to the VCA control input — the sine wave minimum shutting the VCA off and the maximum opening it wide. However, the sine wave is not only controlling the reflection down the pipe, but also providing the 'air' that is coming in, so we also feed the sine wave in via a mixer, so it's controlling itself [a look at Figure 2 will help you to understand this more easily]. And that's an electronic model of our musical pipe, at its absolute basic level.
"All that's needed to complete a basic horn model is something to turn the sine wave source on and off at the start and end of the note; an envelope generator and a second VCA can do this. Also, some way of controlling the sine wave frequency and the horn length (delay time) loosely together is needed to play different notes. As these two devices [the sine wave oscillator and the delay line] are voltage controlled, connection to a voltage source such as a keyboard can do this.
"Playing with settings is the key to getting a wide variety of horn sounds, and even substituting the sine wave can produce interesting horn effects on source sounds."
"The reason why a trumpet sounds so nice is that you've got this wonderful thing going on where a sound is evolving by going up a pipe and then being modified by itself, creating overtones and distortions. These overtones are also being modified and reflected up and down the instrument, producing even more overtones, each one different to the one before. So you've got this beautiful overlapping system where, as the sound is dying away, it's producing more and more overtones each time it's reflected by the lips mechanism. This is why a trumpet sounds as it does. If you think about it, it's very hard to reproduce that sound in any way other than by making a model of it. I think that's why many electronic instruments' trumpet patches don't sound very realistic, because they're not taking into account that continually evolving harmonic mechanism.
"A number of additional modules are required to electronically model a trumpet [see Figure 3]: firstly, as I already mentioned, you can't have a trumpet unless you can play notes and stop notes, so there has to be a way of turning the sine wave on and off. To do this we need another VCA. We also need something, such as a keyboard, to play the sound. Something to give expression would also be handy, so here an extra couple of modules are used simply to produce vibrato. We could also add an extra length of 'pipe' so that the same notes can take longer to build up.
"Trumpets have flares on the end, so we also need to know how a flare works. On a computer you could add another 50 lines of code, or whatever, to simulate this, but in electronics we need extra modules — or do we? From a listener's point of view, one of the things the flare on a real trumpet does is project sound forward. If it simply came out of a straight pipe, the sound would go all over the place, but since it's a horn shape, it tends to go forward. You get an increase in the level of the sound at certain frequencies that the horn can control. When the frequency is very low, the horn really has no effect, because the horn is so small and the wavelengths are so big. One thing you can do is use a bit of EQ to make it sound more raspy.
"The delay line I use for these models includes a simple low‑pass filter, like a tone control. It's also got a feedback control, for convenience. I call these Voltage Controlled Resonators, or VCRs. They control the way in which high frequencies propagate through the circuit. The overtones are being generated by the VCA and mixer, and then filtered off, so the high‑end ones decay faster than the lower ones. This has a mellowing effect — like having an instrument with a wide‑bore pipe."
"Moving on to the saxophone, the same kind of considerations apply but the action is slightly different. Very simply, a saxophone has a reed at the end of a pipe, so first you must create a patch to represent the pipe — we can use the same one as in the trumpet. The thing that's different is that the reed is a vibrating membrane with its own mass and resonance. This time it's the reed trying to control things, in sympathy with a variable‑length tuned pipe, rather than a trumpet player controlling and vibrating his or her lips to target the required pipe harmonic. The reed is a valve, as explained in the trumpet example, so the delay and the valve work in something like the same way. The main difference is that in the saxophone model the valve is driven by the reed and the pipe working closely together into a self‑oscillating system, with the lips almost passive, except for articulating the note. In the real instrument, the reed is the valve, of course, and the energy from the air pressure flowing from the mouth sustains the note.
"To play a note, the gain through VCA2 [see Figure 4] is raised until the whole system oscillates when a key is pressed. The limiter shown in the diagram works electrically like the elastic limit of a real reed and stops the whole thing from going into overload. The first voltage‑controlled filter (VCF1) has a resonant peak that can be adjusted and tuned to make the 'reed' more live or dead. A live reed picks up overtones or overblowing more readily.
"Again, to make the saxophone model a bit more realistic, I use extra modules to add playability and bring more life into the sound. The keyboard triggers an envelope generator that provides the signal that turns VCA2 on or off. This is modified en route by another VCA that is set to add a little tremolo from a low‑frequency oscillator (LFO) to the system gain. No human can, nor would want to, play a perfectly static note, so the tremolo helps to keep the sound 'live'.
"The other thing you need to bear in mind is that there is air turbulence around the reed — how much depends on how it's played. Players use this to create a sleazy jazz‑sounding saxophone. I personally love that sound, so I've included some extra modules to try and simulate aspects of this — a noise source, with another VCA to turn the noise on and off, and another VCF set to match the kind of turbulence you might expect to hear around a reed.
"Since the saxophone is a resonating, self‑oscillating system, there needs to be something that will limit the amplitude of the vibrations. A real reed does this when it reaches the limit of its travel one way or the other. In the model, a limiter is needed to prevent system overload clipping. For convenience in the model, a simple soft‑amplitude limiter is included in the delay line module — crude, but it works. With a real trumpet it's more to do with how loud you can blow it before your lips hurt! In the trumpet model, amplitude is mostly controlled by the sine wave level, which is actually quite low."
It should be remembered that the block diagrams featured in this article are really just the simplest building blocks of acoustic modelling, the key to which is experimentation. As Ron says, "I've continued to experiment with patches, but I didn't get into acoustic modelling just to make an instrument that allowed me to sound like a saxophone or piano. What interests me is that you can take the models apart, rearrange and feed things into them. You can make models that can't really exist in reality, but are possible in the electronic world.
"When you make an acoustic model of something — say a saxophone — the fact remains that even if you have a realistic‑sounding saxophone on a keyboard, you simply end up finding out what a terrible sax player you are. Playing a bona fide saxophone with a reed in your mouth is a totally different ball game to playing one on a keyboard. You have to work really hard just to make it sound like a third‑rate sax player, so you won't find me using those kinds of timbres in my own music that much. You're more likely to find me trying out models that go through from trumpet to sax, say, or feeding a sax sound into a trumpet model. Or how about a plucked trumpet?
"It's for these reasons that I feel I'm treading a different path to people like Yamaha who naturally are making commercially orientated acoustic modelling synthesizers because they want to sell lots of keyboards."
Computer software provides another way of getting into modelling, using the same kind of building blocks as the analogue modular approach, and is capable of producing some interesting results for relatively little outlay, providing you already own the necessary computer platform. The only source of commercial acoustic modelling software I know of is IRCAM (the Institute for Research and Co‑ordination in Acoustics and Music, in Paris — check out Paul Tingen's revealing tour of the facility in SOS December 1996). Software is obtained by subscription to their Forum user group. Subscribing to the 'Analysis/ Synthesis' section for a private individual costs around 1700 Francs per year, for which you get a wealth of Apple Macintosh‑compatible software, plus updates. The modelling software section is called Modalys, and allows the creation of virtual instruments from scratch, using parameters that relate to the physical sound and sound‑production mechanisms of real instruments, but which are not constrained by the limits of the real world.
Ron: "The models developed by institutions up to now have been scientifically based and very mathematically correct — reams of formulas on waveguide theory and much more in‑depth study of reed behaviour; complex formulas I can barely understand, let alone manage on the modelling synth. The horn models I've heard are very complex, taking into account things like the position of the valves — the ones you press — along the tube, and the reed/housing shape. Some sound very good, but take an immense amount of computing power, and some I've seen you still play by typing! Perhaps the Modalys software will allow you to play around with things as easily as my modelling synth can. I have a feeling that Opcode or Digidesign, both of whom work closely with IRCAM, will produce something more commercial before too long."
Those with an overwhelming desire to put Ron's findings into practice for themselves may be interested in the fact that Gateshead‑based Digital Audio & Computer Systems (DACS) Limited — for whom Ron often freelances — are developing a commercially viable modular synthesizer with acoustic modelling capabilities, based upon Ron's unique instrument as referred to in this article. They are not envisaging mass production, but instead plan to offer custom‑built systems in a small way for whoever might want them, much along the same lines as America's Serge and Germany's Doepfer standard modular synthesizers. Models of such things as plucked strings, drums, gongs, bells, wind and brass, plus unusual hybrids, imaginary instruments and, of course, really weird effects, can be achieved with the type of system Ron has designed.
After I saw Ron Berry's techniques in action, I was inspired to try to replicate some of his experiments, in the hope of getting a few interesting new sounds for my sampler. After a few abortive attempts using a digital delay, I hit on the idea of using the effects section of my Korg Wavestation. After a bit of fiddling, I came up with the technique detailed below. Before going any further, I'd better make something clear: this technique is purely for sample fodder — you're not going to turn your Wavestation into a Prophecy, more's the pity. Compared to Ron's modelling synthesiser, it's a bit primitive, too!
Although I used a Wavestation for my experiments, I'm sure any modern Korg synth with two effect processors could replicate or even better this sound with a bit of fiddling. It's a kind of hybrid of flute, clarinet and didgeridoo, using a pair of simple waveforms as a driver, the stereo parametric equaliser as a kind of 'pre‑shaper' and the stereo delay as a resonator.
1. First you need to set up a waveform to drive the effects. I used wave 138, 'Ch', and wave 427, 'Cello'. Combine them with wave 427 at a lower level and you should hear an unimpressive quacking sound.
2. Now assign the stereo parametric EQ to FX1 and enter the following parameters:
- High freq: 20.0Hz
- High level: +3dB
- Mid freq: 100
- Mid width: 59
- Mid level: +12dB
- Low level: 0dB
3. Next, assign the stereo delay to FX2 and enter the following:
- Dry/wet mix: WET
- Delay time: see below
- L/R delay factor: 1:1
- Feedback: 96
4. Set the delay time to the shortest value (1ms), and play the keys in turn until you hear a strong resonant note — in this case, B5, the second highest on the keyboard. You should hear a nice flutish sound with a distinct chiff at the beginning. If you then add vibrato to the driver waveform, you'll notice that by the time it gets through the resonant stage, it's gained an unusual and very natural sound which can't easily be replicated by adding it in the sampler.
5. Now increment the delay time, finding the delay time that produces the right resonance for each note. Here's what I got from the top down:
- 1ms B5
- 2ms B4
- 3ms E4
- 4ms B3
- 6ms E3
- 8ms E2
- 9ms A2
By giving the L/R delay factor different settings, you can get a lot more resonant notes, and I've also noticed that a few notes have more than one good‑sounding delay time, so using the dual delay could produce more complex timbres. Changing the EQ settings alters the character of the sound considerably — perhaps there's scope for some sample crossfading there. It's also interesting to drive the resonator with a noise pulse, which yields an interesting harsh pluck sound. Try changing the polarity of the delay feedback too, as this generates different harmonics.
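Most of the delay/note pairs listed above fall neatly on the fundamental of the recirculating delay, f = 1/t (the 8ms/E2 pairing presumably corresponds to a different resonant mode). A quick check:

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def nearest_note(freq_hz):
    """Name the equal-tempered note closest to a frequency (A4 = 440Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

for delay_ms in (1, 2, 3, 4, 6, 9):
    f = 1000.0 / delay_ms        # fundamental of the recirculating delay
    print(f"{delay_ms} ms -> {f:6.1f} Hz ~ {nearest_note(f)}")
```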
If any readers come up with a good combination of waves and resonators, perhaps they could share them with the rest of us. Join the society of stone‑age sound modellers! Norman Fay