Having established the basic principle of analogue (or 'subtractive') synthesis in the first part of this series, back in June's issue -- ie. start with a sound containing more than you need (a waveform which contains lots of harmonics) and whittle it down (using a filter to remove the unwanted harmonics) -- we can now come on to ways of refining this process and automating it. If you have been trying the manual filter frequency manipulations I suggested at the end of the first piece, you will have noticed that small movements of the filter cutoff are not that noticeable, and that to get a marked effect you need to sweep the filter over a sizeable portion of its range. Although later on in this instalment we will look at ways to do this automatically, without spraining your wrist every time you move the knob quickly, it is sometimes more appropriate to accentuate a small filter movement than to make the movement itself bigger.
This is done by amplifying the frequencies around the cutoff point. This means that instead of having to detect the filter's position by noticing what is not there, we can actually hear more of the frequencies around the cutoff point because their presence is exaggerated. There are perhaps more synonyms for this feature of analogue synthesis than any other, and this can make it difficult for beginners. If the terminology for this parameter on the front panels of two synths is different, how are you supposed to know they both do the same thing? The most self-explanatory of the terms used is Emphasis, which probably explains why it is the least common. All too often, manufacturers try to mystify the processes they use, so more scientific terms, like Resonance and Q, are much more common. But whether the control is labelled Emphasis, Resonance, or Q, it does the same thing. At the point where the filter cutoff slope begins, there is a very narrow band in which the frequencies are actually boosted. The higher this control level is set, the more the frequencies at the cutoff point are amplified. When the filter is static (ie. the cutoff point is not moving), the effect can sometimes be difficult to spot, possibly because there are few frequencies in the filtered waveform around the cutoff point. Sometimes, when you turn the resonance up on a static filter you hear it quite clearly (because there are frequencies around the cutoff point and they are being boosted), other times not. But the surest way to hear the effect of resonance on a filter is to sweep it, even by a small amount. If you have access to a filter with resonance, select a sawtooth wave (or some other harmonically-rich source if you don't have analogue waveforms available) and try adjusting the resonance on a static filter setting first. If you don't immediately hear certain frequencies being picked out, just move the cutoff a little bit. Then do the same with the resonance set to zero. 
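For the mathematically curious, the effect of the resonance control can be sketched with the textbook magnitude response of an idealised two-pole low-pass filter, where the Q parameter plays the role of the resonance knob. This is a generic model, not the circuit of any particular synth, and the frequencies used are purely illustrative:

```python
import math

def lowpass_gain(f, fc, q):
    """Magnitude response of an idealised two-pole low-pass filter.
    f: input frequency (Hz), fc: cutoff frequency (Hz), q: resonance (Q)."""
    r = f / fc
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (r / q) ** 2)

# At the cutoff frequency itself the gain equals Q, so raising the
# resonance control boosts a narrow band around the cutoff point:
for q in (0.5, 1.0, 4.0):
    print(f"Q={q}: gain at cutoff = {lowpass_gain(1000, 1000, q):.2f}")

# Well below the cutoff the gain stays near 1, and well above it the
# frequencies are still rolled off, whatever the resonance setting.
print(f"gain at 100Hz:   {lowpass_gain(100, 1000, 4.0):.2f}")
print(f"gain at 10000Hz: {lowpass_gain(10000, 1000, 4.0):.2f}")
```

With Q at 4, frequencies sitting right at the cutoff point come out four times louder than those well below it, which is exactly the 'picking out' of individual harmonics described above.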
The difference will be very clear. As the filter with resonance is moved, the individual harmonic components in the source waveform(s) will be picked out one by one. This, for me, is another one of the great joys of analogue synthesis. Quite often, the sonic interest created by this slow sweep through the frequencies on a single note is worth a thousand played notes with unvarying harmonic content, especially if you sweep in a low register, where all the associated harmonics are within the audible range. The most common use of resonance is with low-pass filters, but on synths with high-pass and even band-pass filters (see June's instalment for more on these), you usually find that the resonance control is still available, and sometimes it can be very effective when used with such filters, especially for creating 'vocal'-type movement in a sound (see the 'Vowel Play' box).
Of course, resonance has many other uses. You can use it whenever you want a sound to catch the ear in a busy mix, where it has to fight its way past a lot of other attention-grabbing sounds. It is also useful to alert the ear to the presence of basslines, when you know (or suspect) that the music is going to be heard on systems that cannot accurately reproduce the bass end (AM radio, older TV sets, and so on). A bit of resonance will bring out the higher harmonics which are in the bandwidth of the playback system, and listeners' ears will extrapolate to the fundamental and 'fill in' the missing frequencies.
On some analogue synths, if you turn the resonance right up, the filter starts to howl in a way that is very similar to guitar feedback. This is known as 'going into oscillation' and happens because the resonance is up so high that a clearly distinguishable frequency is created, with the harmonic characteristics of a sine wave (ie. very little else except the fundamental pitch). Sadly, some analogue manufacturers and many of those currently producing PCM-based synths felt/feel that you need to be protected from this extreme effect, so you may find that you can't get this to happen on your synth. If you can, try using full resonance with the audio oscillators set to generate white noise (if available on your synth). This is the extreme example of subtractive synthesis I referred to in the first part of this series, where you start with all frequencies present, but hack most of them away, until you are left with just a raw oscillation of a very narrow band, amplified to screaming level. You can then use the filter frequency as a sort of very rough pitch control. While it is unlikely that you will find a use for this technique in a sensitive ballad, sometimes it is just the thing for the climax to a full-frontal sonic assault. This technique will really make ears bleed, and also offers the synthesist one of the few ways with which to fight a guitarist stuck in front of a Marshall stack with all six strings feeding back (you can hear Brian Eno making excellent use of the technique on Roxy Music's early, well, music). I've not heard self-oscillation being used in techno yet, but I'm sure it would fit right in with that 'machinery on overload' vibe.
In the course of discussing the effect of resonance, we've seen that it brings out movement in the filter cutoff. So far, we have assumed that this movement will be induced manually by the performer... and so it often is. For me, the difference between a great player and a greater synthesist is that the latter often does more with the parameter knobs during a solo than with the keyboard. Listen to Larry Fast with Peter Gabriel or the aforementioned Brian Eno on early Roxy Music albums and you won't hear a bewildering flurry of notes, but complex changes in timbre which are far more interesting than 'chops'. However, there are many filter movements which are too fast to be produced for every note played. Wouldn't it be nice if there were a way to automate these filter movements, leaving both hands free to play the keyboard? Well, the good news is that there are several. We already saw one of them in last month's instalment: the Low Frequency Oscillator, or LFO, which can be used to induce regular repeated variations in the sound. The first applications we saw were in using the LFO to control pitch (adding vibrato) or volume (for a tremolo effect). By routing the LFO to the filter cutoff frequency (more on the concept of routing in a minute), you can constantly vary the harmonic content of the sound, an effect which is particularly pleasing at very slow LFO frequencies. If you then also increase the resonance, the harmonics will be emphasised in turn as the cutoff sweeps back and forth.
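Routing an LFO to the cutoff amounts, in effect, to adding a slow sine wave to the cutoff frequency. A minimal sketch, with made-up base, depth and rate values (no real synth exposes its modulation in these terms, but the behaviour is the same):

```python
import math

def lfo_cutoff(t, base_hz=800.0, depth_hz=400.0, rate_hz=0.5):
    """Filter cutoff (Hz) at time t, swept by a slow sine-wave LFO.
    All parameter values here are illustrative, not from any real synth."""
    return base_hz + depth_hz * math.sin(2.0 * math.pi * rate_hz * t)

# One 2-second LFO cycle sweeps the cutoff from 800Hz up to 1200Hz,
# back down through 400Hz, and home again:
for t in (0.0, 0.5, 1.0, 1.5):
    print(f"t={t}s: cutoff = {lfo_cutoff(t):.0f}Hz")
```

Slowing `rate_hz` right down gives the gentle, continuous timbral drift described above; with resonance added, each harmonic is emphasised as the sweep passes over it.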
There is another way to vary the cutoff, which is based not on repeated effects, but happens automatically each time you trigger a note. This means you can set up the same shape of filter movement for each note, even when you are playing very quickly or polyphonically. This sound-shaper is not only the lynch-pin of analogue synthesis, but a mainstay of all other types of synthesis as well, and is called an Envelope. It allows us to automatically shape sound over time, beginning from the start of each new note. By taking care of the changes we require on every note we play, it leaves us free to worry about what we are playing. I've introduced the concept here by explaining how envelopes can alter filter cutoff over time, but they may also be used to control any other aspect of the sound which we want to affect each note played, such as the volume level or pitch. This is what makes the envelope such a universally useful synthesis tool, not just for analogue filtering, but for overall volume (which we need to control in any type of synthesis). The envelope is also important in other synthesis methods, for controlling frequency modulation (or FM) amount or the level of different harmonic groups in FM or additive synthesis respectively (more on FM and additive synthesis next month).
The most common type of envelope in traditional analogue synthesizers is called the ADSR. This is an abbreviation for the four stages the envelope can pass through, namely Attack, Decay, Sustain & Release. While these are not universally implemented by any means (on cheaper machines you may find only Attack, Decay and Release, and on more recent synths there may be additional parameters available), the ADSR is the most common type, and a good place to start understanding the idea behind envelopes. Three of the four standard envelope parameters (Attack, Decay and Release) refer to the times taken to move between specific levels. The fourth parameter, Sustain, is different, as this sets the level at which the envelope remains until the key is released.
Attack is the time taken for the envelope to move from the initial zero level to the maximum level. The higher this parameter is set, the longer it takes to reach that maximum level; so if the Attack Time is at zero, the full level should be achieved instantly (in fact, it does take a small amount of time to reach full level, and this time varies from synth to synth; this variation in the minimum attack time is what can make one synth sound punchier than another). The Decay parameter sets how long it takes for the envelope level to drop from the maximum to the variable Sustain level. If this Sustain level is set to maximum, the Decay parameter has no effect, and if the Sustain level is zero, the level will drop to zero at the rate set by the Decay if the key is held long enough. Setting the Sustain level to maximum means that once the attack portion of the envelope has happened, there will be no change in the sound until the key is released. The lower the sustain is set, the more the level is allowed to decay while the note is still held. Once you have let go of the key, the Release parameter governs how quickly the level drops to zero from that set by the Sustain value. If this is set to a short time, then the level will drop very quickly.
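The four stages just described can be captured in a few lines of code. This is one of many possible implementations, with the level normalised to the range 0-1 and simple linear segments (real synths typically use exponential curves), but the stages behave exactly as described above:

```python
def held_level(t, attack, decay, sustain):
    """Envelope level while the key is still held."""
    if t < attack:                        # Attack: rise from zero to full level
        return t / attack
    if t < attack + decay:                # Decay: fall towards the Sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    return sustain                        # Sustain: hold until the key is released

def adsr_level(t, gate_time, attack, decay, sustain, release):
    """ADSR envelope level (0-1) at time t; the key is released at gate_time.
    Attack, Decay and Release are times (seconds); Sustain is a level."""
    if t < gate_time:
        return held_level(t, attack, decay, sustain)
    # Release: fall from wherever the level was at key-up towards zero
    start = held_level(gate_time, attack, decay, sustain)
    return max(0.0, start * (1.0 - (t - gate_time) / release))

# A=0.1s, D=0.2s, S=0.5, R=0.3s, key held for 1 second:
for t in (0.05, 0.1, 0.2, 0.5, 1.15, 2.0):
    print(f"t={t}s: level = {adsr_level(t, 1.0, 0.1, 0.2, 0.5, 0.3):.2f}")
```

Note that with Sustain at maximum (1.0) the Decay term changes nothing, and with Sustain at zero the level falls all the way to silence while the key is still held -- just as described above.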
It is fairly easy to understand how these levels work if you imagine the envelope being assigned to control the overall volume of a sound. A slow Attack will fade the sound in instead of it appearing instantly, a fast Decay will make it die to the Sustain level more quickly, a high Sustain level will keep the sound at high volume until the key is released, and a long Release means the sound will take a while to die away once you have let go of the key. All analogue synths will have a volume envelope (as will 99% of all other synthesizers) so you can very quickly acquaint yourself with the effect of these controls on the volume by adjusting the parameters and seeing how they affect the sound. Of course, if you don't have all four parameters, just learn the effect of those you do have. Those of you using synths with more complex envelopes will have to wait until later in the series to fully understand how they work, when we will look at those synthesis styles which use more stages.
Of course, envelopes may be applied to the filter as well as the volume of a sound (this is where we came in), and this is important when creating sounds that appear 'natural' to our ears. In acoustic instruments, the harmonic content of the sounds generated often changes radically over time, as well as just the volume: a plucked string starts off very bright, but quickly dies away to just the fundamental. Even bowed or blown instruments, which can maintain a steady harmonic content over time, tend to have a harmonically brighter attack as the player accentuates the beginning of the new note. Even if you're not seeking to directly copy acoustic sounds (I've already mentioned what a non-starter this is with most analogue synths), the ear still likes to hear familiar patterns in sounds. However, when it comes to applying an envelope to the filter cutoff, things get a little bit more complicated. A volume envelope will always start from silence and return to it (otherwise the synth would be sounding even when you hadn't played anything), but this is not necessarily the case with the filter envelope. The filter cutoff may not start from completely closed, nor may it be returned to that position. In fact, most of the time the volume envelope is used to silence the sound long before the filter envelope might achieve the same result.
However, in certain cases, you may want to use the filter envelope to remove all frequencies. In this case you would use the manual filter control to close the filter completely, and then set the envelope to open it and return it to the closed position at the end of the Release phase of the envelope. Remember to make sure the release on the volume envelope goes on long enough to let you hear the effect of the filter envelope. It is also best if you set the volume attack to minimum and the volume sustain to maximum. In general, you should use the manual filter cutoff to set the start and end position of the filter. Remember that if the manual filter cutoff is set to fully open the filter, there is no way the envelope can affect the filter any further (unless you have one of the more flexible synths which allow for negative settings of the filter envelope). So make sure that the filter is at least partially closed before you start trying to hear the effect of the filter envelope. You will also need to set the amount of effect that the filter envelope has on the cutoff position (look for the parameter on your synth labelled Filter Env Amount, or perhaps just Filter Amount). If this is set to zero, you might spend all day adjusting the filter envelope parameters without hearing any difference! The Filter Amount control determines how much movement the envelope will induce in the cutoff frequency. If you set a large amount, the filter will probably be fully open at the end of the Attack phase of the envelope, and lesser amounts will cause it to open up less.
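In code terms, an envelope amount control simply scales the envelope's output before it is added to the manual cutoff setting. This hypothetical sketch (the function name, parameter names and ranges are invented for illustration, not taken from any synth) shows why a fully-open manual cutoff leaves a positive envelope nothing to do:

```python
def filter_cutoff(manual_hz, env_level, env_amount_hz, max_hz=20000.0):
    """Cutoff frequency from the manual knob plus the envelope's contribution.
    env_level is the envelope output (0-1); env_amount_hz scales its effect.
    Negative amounts, on synths which allow them, close the filter instead."""
    return min(max_hz, max(0.0, manual_hz + env_level * env_amount_hz))

print(filter_cutoff(500.0, 1.0, 2000.0))    # envelope opens the filter further
print(filter_cutoff(500.0, 0.0, 2000.0))    # zero envelope: manual setting only
print(filter_cutoff(20000.0, 1.0, 5000.0))  # already fully open: no audible change
print(filter_cutoff(4000.0, 1.0, -6000.0))  # negative amount closes it completely
```

The clamping at either end of the range is the code equivalent of the advice above: partially close the filter manually before expecting a positive envelope amount to do anything audible.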
To imitate the natural harmonic decay heard in 'plucked' acoustic sounds, you should set the attack of the filter envelope to zero, so that when you play a note, the filter will open up fully straight away. If you use a slower attack, the note will sound more like an instrument being bowed or blown softly to start with and then increasingly harder. Again, these are just examples from the acoustic world to help you understand what you are doing, not attempts to make exact copies of 'real' sounds. The great thing about analogue synthesis is that you can create lots of sounds which don't exist naturally, and if you have access to more comprehensive analogue synths, you should also experiment with envelope control of band-pass and/or high-pass filtering. Similarly, if it is possible to set a negative envelope amount to the filter on your synth, check out the effect that this gives. In this case, you should set the manual cutoff to the most open position that you want it to be, as the negative envelope will close the filter to start with, and then return it to the most open position at the end of its cycle.
It is always a good idea when experimenting like this to work with fairly long attack, decay and release times, with the sustain level at about half way. This gives the untrained ear more time to follow what is happening to the sound during each phase of the envelope. When you feel comfortable with the slow movements, reduce the times so that the cycle happens more quickly. Once you have heard a filter opening slowly and then sped it up bit by bit, you will soon recognize the characteristic sweep, however fast it is happening in a sound -- if you have trouble, you can always turn up the resonance, which will help pick out the filter movements.
Of course, envelopes can be used to control much more than volume and filter cutoff, but how much you can experiment with this will be determined by how much routing you can do on your synth. The most basic analogue synths will be hard-wired to the sort of signal path shown in Figure 3. Usually two oscillators (sometimes one, sometimes three) are mixed together and passed through a filter -- also known as a VCF or DCF (Voltage Controlled or Digitally Controlled Filter), and then the volume amplifier or VCA/DCA (Voltage Controlled or Digitally Controlled Amplifier). Normally the filter and amplifier will each be controlled by an envelope (on some more basic synths you may have to share one envelope between volume and filter) and you will often find that your envelope(s) cannot be set to control anything else. A single LFO will probably be available to control the pitch of both oscillators (vibrato), the pulse width of one or both (PWM), or the filter cutoff. If you find yourself able to do more than this, then your synth is definitely above average. Additional routing possibilities include envelope to pitch (for automatic bend effects), pulse width and LFO amount (to delay vibrato till after the note has been held for a second, and so on), and switching a third oscillator between normal audio and LFO operation. On some synths (such as the EDP Wasp and OSCar) you may even find that you can switch the envelope to repeat its cycle, allowing for the creation of custom LFO waveforms using the ADSR shape.
At the opposite end of the scale, you may have access to modular analogue synthesizers whose routing possibilities are completely up to you; with these, you use patch cords to connect the different parts of the sound-generation and -shaping architecture together in any order you like. The degree of complexity is directly proportional to the number of patching points in the system (and the number of patch cables you have -- a steadily decreasing number in my experience!). On big modular systems, not only are the routing possibilities infinite (even discounting those which do not produce an audible result), but the actual number of oscillator, filter and envelope modules is variable (assuming you have the money -- so if you want another oscillator, you go out and get another oscillator module), and you can build up ridiculously complicated routings. There comes a point where the law of diminishing returns is clearly applicable, but unless you are very experienced, long before this point you will lose all grasp of what is actually happening to the sound in your mega-patch. A good compromise between the fixed architecture of the basic analogue synth and the totally open system of gigantic modular systems is something like the Korg MS20, which has enough patching points to be flexible, but not so many as to be unmanageable or incomprehensible. This was perhaps the most successful of the 'patchable' analogue machines (even though the single-oscillator MS10 was much cheaper). As a result, there are a decent number of these machines floating around out there (whilst house-hunting in Carshalton recently, I spotted one left behind by a teenage son when deserting the parental abode) although their price on the second-hand market has risen drastically of late because of the renewed interest in all things analogue. 
However, once you have mastered the fixed routing of the simpler analogue synths, such 'patchable but simple' analogues are ideal for learning the more advanced applications of analogue synthesis -- if you can track one down.
So, when the routing of the analogue signals is left up to you, what are you going to do with your new-found freedom? Well, as we so often discover when all constraints are removed, many of the possibilities opened up actually lead nowhere at all or, to be more literal in this case, result in silence. So you should actually start by recreating the signal path shown in Figure 3; one, two or three oscillators routed into the mixer, with the result put first through a low-pass filter and then amplifier, with an ADSR envelope each controlling the filter's cutoff point and the amplifier's level respectively. This advice is not so conservative as it sounds; it's not so much 'don't try this at home, children' as 'It pays to learn the rules before you break them!'. LFOs can be routed initially to oscillator pitch and pulse width (if pulse wave is selected on one or more of the oscillators, that is), or filter cutoff and amplifier level (for wah-wah and tremolo-type effects). Then try moving one connection at a time and see the way the sound changes; start with the points to which LFOs and envelopes are routed, as these are much less likely to make the sound disappear altogether.
Don't think it's the end of the world if you don't have an analogue synth with physical patching facilities, either. Although Sequential Circuits never went as far as offering patching cables, the Poly-Mod sections on everything from their Pro One up to the Prophet T8 give you some pretty wild routing capabilities which allow you to get away from the standard analogue setup, and most modern synthesizers have pretty flexible internal routing capabilities now that such things can be done in software. So even if your PCM-based synth doesn't have the most authentic analogue oscillator sounds, it can still teach you a great deal about the way routing works. Particularly good examples of very flexible routings are Emu's rack units, from the Proteus onwards. The only real problem with software routing is that you may have to become familiar with a lot of abbreviations, as sometimes there is not enough room in digital displays to list out the parameters and their settings fully. So be prepared to decipher combinations of numbers and letters like OSC1 PWM ENV or FIL TYP: BPF in the display. Whatever access you can get to more flexible routing synths, whether via patch cables or software switching, don't be afraid to experiment with bizarre routings. The more advanced techniques discussed below both evolved from people plugging things in where they weren't supposed to go! Who knows, maybe you will be the first to discover a new routing technique which will be as full of character as these two.
The first of these, Ring Modulation, is a process for modulating one frequency with another in such a way as to produce only sum and difference frequencies, but none of the original fundamental. The original ring modulation circuit has its origins in radio communications, and was originally based around a couple of transformers and a diode bridge or ring (hence the name). Subjectively similar effects can be created by routing an oscillator operating at an audible frequency into the LFO input of another audio oscillator, which is possibly how the effect was first discovered. This would probably first have been done on a modular system, but it is also possible on the classic MiniMoog/MemoryMoog design, which allows oscillator 3 to be switched between audio and LFO function. By switching to the LFO routing, but keeping the frequency in the audio range, you can modulate the pitch of the other oscillator so fast that you produce new frequencies which are sums and differences of the two source oscillator frequencies, many of which are not in the normal harmonic series of either oscillator's fundamental frequency. This produces a range of sounds with a metallic quality, and is therefore useful for making bell sounds or more abstract timbres. Whether the sound has a slight metallic edge to it or is completely atonal depends on whether the frequencies of the two oscillators are closely related or not, as well as whether the pitch of one is being moved in real time as you play it (by an envelope or LFO, for example). As very small adjustments to a ring-modulated oscillator's frequency can make a major difference to the timbre produced, you will find the results can be unpredictable but very rewarding.
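The sum-and-difference behaviour is simply a trigonometric identity at work: multiplying two sine waves is mathematically identical to mixing a cosine at the difference frequency with an inverted cosine at the sum frequency. A quick numerical check, with two illustrative (not specially chosen) oscillator frequencies:

```python
import math

# Ring modulation of two sines leaves only sum and difference frequencies:
#   sin(2*pi*f1*t) * sin(2*pi*f2*t)
#       = 0.5*cos(2*pi*(f1-f2)*t) - 0.5*cos(2*pi*(f1+f2)*t)
f1, f2 = 440.0, 300.0   # illustrative oscillator frequencies
for t in (0.0, 0.0007, 0.0013, 0.0021):
    product = math.sin(2*math.pi*f1*t) * math.sin(2*math.pi*f2*t)
    sum_diff = (0.5*math.cos(2*math.pi*(f1 - f2)*t)
                - 0.5*math.cos(2*math.pi*(f1 + f2)*t))
    assert abs(product - sum_diff) < 1e-12  # the identity holds at every instant

# The output contains only 140Hz and 740Hz -- neither is a harmonic of
# 440Hz or of 300Hz, which is where the metallic, clangorous quality comes from.
print("ring mod of 440Hz and 300Hz -> 140Hz and 740Hz only")
```

Nudge `f2` by a few Hertz and both output components shift at once, which is why such small tuning adjustments make such a large difference to the resulting timbre.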
Another technique which produces major changes in the harmonic content of the sound, but is less radical in terms of those harmonics' mathematical relationship to the fundamental, is oscillator sync. In this configuration, one oscillator's cycle is synchronised to that of a second: the waveform of the sync'ed (or 'slave') oscillator is forced to restart its cycle each time the controlling (or 'master') oscillator crosses the zero point going from negative to positive. As a result, the fundamental frequency of the slave oscillator is locked to that of the master, but its waveform is radically changed. The controlling oscillator is not normally added into the audio mix; instead, the slave oscillator's own pitch setting can be shifted by pitch-bend, envelope, aftertouch or LFO. This makes radical changes to the harmonic content of the synchronised oscillator, but without making the fundamental pitch as weak as ring modulation does; instead, the higher harmonics around the pitch of the swept oscillator are picked out. Oscillator sync is ideal on lead synth sounds, where it can make the synth scream like a distorted lead guitar, or on bass sounds, where it makes the bassline stand out with a really hard edge. Oscillator sync is to be found on many analogue synths, from the classic Prophets and Moogs to the more recent Novation BassStation Rack and Yamaha AN1x. It is another one of my favourite features on analogue synths, giving unparalleled expression to the sound when the swept oscillator's pitch is linked to aftertouch or one of the mod wheels. However, like ring modulation, oscillator sync is not, strictly speaking, a 'subtractive' technique, in that it adds to the frequencies originally present in the oscillator waveforms (although you shouldn't let that stop you making good use of it!).
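Hard sync is easy to express in code: generate the synchronised oscillator's waveform from a phase accumulator, and reset that phase whenever the controlling oscillator completes a cycle. This sketch uses a sawtooth output and deliberately naive (non-band-limited) maths, and all the figures are purely illustrative:

```python
def hard_sync(duration, sample_rate, master_hz, slave_hz):
    """Hard-synced sawtooth: the synchronised ('slave') oscillator's phase is
    reset each time the controlling ('master') oscillator completes a cycle,
    so the output repeats at the master's frequency whatever the slave's pitch.
    Naive, non-band-limited maths -- for illustration only."""
    out = []
    master_phase = slave_phase = 0.0
    for _ in range(int(duration * sample_rate)):
        out.append(2.0 * slave_phase - 1.0)   # sawtooth read from the slave phase
        master_phase += master_hz / sample_rate
        slave_phase += slave_hz / sample_rate
        if master_phase >= 1.0:               # master finished a cycle...
            master_phase -= 1.0
            slave_phase = 0.0                 # ...so the slave restarts too
        elif slave_phase >= 1.0:
            slave_phase -= 1.0
    return out

# The output repeats at the master's rate (here every 8 samples), even though
# the slave runs at an unrelated frequency. Sweeping the slave's pitch then
# reshapes the waveform -- the classic sync sweep -- without changing the pitch:
wave = hard_sync(0.05, 1024, 128.0, 176.0)
print(wave[:8] == wave[8:16])
```

The mid-cycle restarts put sharp corners into the waveform, and it is those corners that supply the extra high harmonics responsible for the hard, screaming character of sync sounds.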
As such, these techniques make a good bridge from 'straight' subtractive techniques to other calculation-based styles of synthesis, which use multiplication and waveform manipulation to produce frequencies outside of the normal harmonic series, such as Frequency Modulation and Phase Distortion. In the next part of this series, we will look at the most successful of these 'multiplication' synthesis types, Frequency Modulation, or FM.
Apologies go to readers for the mistake which crept into the diagrams illustrating Paul Wiffen's first article in the Synth School series, and our thanks to those four observant readers who contacted us to point it out. The fundamental frequency in any waveform is of course the same as the first harmonic in the harmonic series, and should not have been illustrated as two separate components. The correct harmonic series for the sine, sawtooth, square and pulse waveforms are displayed left.
Although many people swear by the original analogue synths, some of which are now changing hands for more than their original retail prices, a new generation of synths is recreating the analogue sound via the state-of-the-art technique of physical modelling. Using raw processing power, DSP chips (first used for effects processing) are now being used to simulate the exact stages of the sound modification procedure which occur in analogue synthesizers, from oscillator waveforms to filter action to envelope shaping, all entirely in the digital domain. The principal advantages of these modern recreations are that they boast rock-solid tuning (never original analogue's strong point), hundreds of presets and user programs, and all the advantages of MIDI for sequencing and SysEx communication. Korg's Prophecy did not restrict itself to just analogue sounds, but the analogue models it did feature were extremely reminiscent of the classic monosynths of yesteryear. The first polyphonic synth to recreate analogue sounds was Clavia's Nord Lead, which allowed real-time control with dedicated analogue controls, and this machine had the market to itself for over a year (and was recently upgraded to the Nord Lead II). However, the Japanese manufacturers have responded strongly in the last few months, with Roland's JP8000 (a thorough recreation of that company's classic Jupiter 8) coming first. This was swiftly followed by Yamaha's AN1x, a 10-voice synth with particularly good sync sounds -- see the AN1x review starting on page 166 of this issue. You can also read Gordon Reid's preview of the very latest contender, the 12-voice polyphonic Korg Z1, elsewhere in this issue. Whether any or all of these machines can be seen as authentic replacements for the classic synths of yesteryear is a matter of personal opinion, and no doubt the debate on this point will rage long and hard.
What is beyond question is that as the second-hand market runs out of bargains (as owners wise up to the value of the pearls they have been sitting on), these new machines offer a very viable alternative, particularly in the modern MIDI setup.
Sometimes analogue impressions of vocal sounds can work better than sampled vocals in a track, because the frequencies affected by the filtering are not directly related to the pitch of the note you are triggering, but dependent only on the filter cutoff. The human vocal cords apply the same resonant filtering effect, and they don't vary this just because you sing a different pitch. Instead, the variation is used to create different vowel sounds, independent of the note being sung. When you play a new note with even the most accurate samples, the resonant frequencies shift in strict mathematical relationship to the transposition from the original pitch. So when you transpose a sampled voice by even a semitone, it sounds more like a different person singing the new pitch, not the same vocal cords. Whilst an analogue synthesizer will rarely be mistaken for human voices, it may well give you a more organic impression of voices used as an ambient background than a sampler whose pitch-linked variations in timbre jar on the ear. As always, I advise people to steer clear of the idea of using analogue synthesis in direct imitation of a sound. However, analogue synths can be excellent for giving the general impression or feel of conventional instruments without being slavish imitators, especially when placed further back in the mix and given their own ambient space. When trying to produce a vocal effect on an analogue synth, the best results tend to come from those which have a band-pass filter setting or a high-pass and low-pass in series (essentially the same thing). Set the resonance to just under the point where it is about to go into self-oscillation, and then move the cutoff frequency (or frequencies if you're using low-pass and high-pass filters in series) around slowly. With luck you will find a point where a distinct throaty element creeps in.
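The claim that a high-pass and low-pass in series amounts to a band-pass is easy to verify: the responses of filters in series multiply, so energy survives only between the two cutoff points. A sketch using idealised one-pole responses and made-up cutoff frequencies:

```python
import math

def lp_gain(f, fc):
    """Idealised one-pole low-pass magnitude response."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

def hp_gain(f, fc):
    """Idealised one-pole high-pass magnitude response."""
    return (f / fc) / math.sqrt(1.0 + (f / fc) ** 2)

# A 400Hz high-pass feeding a 1600Hz low-pass: the two responses multiply,
# leaving a band-pass hump between the cutoffs and roll-off either side:
for f in (100, 400, 800, 1600, 6400):
    print(f"{f}Hz: gain = {hp_gain(f, 400) * lp_gain(f, 1600):.2f}")
```

Moving the two cutoffs independently (as the OSCar's Separation parameter effectively does) shifts and widens that hump, which is what lets the resonant peaks mimic vowel formants.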
Patience is a definite virtue in the search for this elusive effect, and if the synth you're using has user memories, be ready to save as soon as you find it. If not, then be ready to record the part you want the sound for, as the sound can drift all too quickly on unstable old machines. My favourite machine for this is the Elka Synthex, which had two different widths of band-pass filter, a very stable resonant response, and a ton of user memories. My 'Choirboy' patch, a serendipitous find on that machine, has fooled many an untrained ear (I'm thinking mainly of TV and film directors with that 'untrained' reference, by the way) to the point where I could probably have got away with billing them for a session with Aled Jones or whoever the current pre-pubescent warbler was! The dual filter of the OSCar is another winner for this (moving the Separation parameter controlling the distance between the two resonant peaks can create vowel sounds which give the impression of singing in a foreign language), as are any of the early Korg synths featuring the splendidly-named 'Traveller' (they don't make parameter names like that any more, do they?), which is a disguised high-pass and low-pass filter in series.