Q. What's the difference between a talk box and a vocoder?

In addition to its built-in microphone, the Korg MS2000B's vocoder accepts external line inputs for both the carrier and modulator signals.

I've heard various 'talking instrument' effects, which some people attribute to a processor called a vocoder, while others describe it as a 'talk box'. Are these the same device? I've also seen references in some of Craig Anderton's articles to using vocoders for 'drumcoding'. How is this different from vocoding, and does it produce talking instrument sounds?

James Hoskins

SOS Contributor Craig Anderton replies: A 'talk box' is an electromechanical device that produces talking instrument sounds. It was a popular effect in the '70s and was used by Peter Frampton, Joe Walsh and Stevie Wonder, amongst others. It works by amplifying the instrument you want to make 'talk' (often a guitar), then sending the amplified signal to a horn-type driver, whose output feeds a short, flexible piece of tubing. This terminates in the performer's mouth, which is positioned close to a mic feeding a PA or other sound system. As the performer mouths words, the mouth acts as a mechanical filter for the acoustic signal coming from the tube, and the mic picks up the resulting filtered sound. Thanks to the recent upsurge of interest in vintage effects, several companies have begun producing talk boxes again, including Dunlop (with the reissued Heil Talk Box) and Danelectro, whose Free Speech talk box doesn't require an external mic, processing the signal directly instead.

The vocoder, however, is an entirely different animal. The forerunner to today's vocoder was invented in the 1930s for telecommunications applications by an engineer named Homer Dudley; modern versions create 'talking instrument' effects through purely electronic means. A vocoder has two inputs: one for an instrument (the carrier input), and one for a microphone or other signal source (the modulator input, sometimes called the analysed input). Talking into the microphone superimposes vocal effects on whatever is plugged into the instrument input.

The principle of operation is that the microphone feeds several paralleled filters, each of which covers a narrow frequency band. This is electronically similar to a graphic equaliser. We need to separate the mic input into these different filter sections because in human speech, different sounds are associated with different parts of the frequency spectrum.
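This filter-bank idea maps directly onto a few lines of DSP code. Below is a minimal sketch in Python, assuming numpy and scipy are available; the band count, band edges and filter order are arbitrary illustrative choices, not values taken from any particular hardware vocoder.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100   # assumed sample rate for all the sketches below
NUM_BANDS = 8         # illustrative; real vocoders often use 8-20 bands

def make_filter_bank(num_bands=NUM_BANDS, f_lo=100.0, f_hi=8000.0,
                     fs=SAMPLE_RATE):
    """Paralleled bandpass filters with logarithmically spaced edges,
    like the narrow-band filter sections described above."""
    edges = np.geomspace(f_lo, f_hi, num_bands + 1)
    return [butter(2, [edges[i], edges[i + 1]], btype="bandpass",
                   fs=fs, output="sos")
            for i in range(num_bands)]

def split_into_bands(signal, bank):
    """Feed one signal to every filter in the bank in parallel."""
    return np.stack([sosfilt(sos, signal) for sos in bank])
```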

For example, an 'S' sound contains lots of high frequencies. So, when you speak an 'S' into the mic, the higher-frequency filters fed by the mic will have an output, while there will be no output from the lower-frequency filters. On the other hand, plosive sounds (such as 'P' and 'B') contain lots of low-frequency energy. Speaking one of these sounds into the microphone will give an output from the low-frequency filters. Vowel sounds produce outputs at the various mid-range filters.
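To see that claim in action, the snippet below (reusing the helpers from the previous sketch) feeds the bank a second of white noise as a stand-in for an 'S'-type hiss, and a 150Hz sine as a stand-in for low plosive energy, then prints each band's relative level. The upper bands dominate for the hiss, the lowest band for the sine.

```python
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
hiss = np.random.default_rng(0).standard_normal(len(t))  # 'S'-like noise
thump = np.sin(2 * np.pi * 150 * t)                      # plosive-like low tone

bank = make_filter_bank()
for name, sig in [("hiss", hiss), ("thump", thump)]:
    bands = split_into_bands(sig, bank)
    rms = np.sqrt(np.mean(bands ** 2, axis=1))
    print(name, np.round(rms / rms.max(), 2))  # per-band level, low to high
```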

But this is only half the picture. The instrument channel, like the mic channel, also splits into several filters, and these are tuned to the same frequencies as the filters used with the mic input. However, these filters have DCAs or VCAs (digitally controlled or voltage-controlled amplifiers) at their outputs. These amplifiers respond to the control signals generated by the mic-channel filters: the more signal passing through a particular mic-channel filter, the higher the gain of the corresponding amplifier.
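Here is a sketch of that follower/VCA stage, continuing the earlier Python example: rectify a mic-band output, smooth it with a lowpass filter to derive a control envelope, then multiply the matching instrument band by it. The 50Hz smoothing cutoff is an illustrative choice.

```python
def envelope(band_signal, fs=SAMPLE_RATE, cutoff=50.0):
    """Rectify-and-smooth envelope follower: a software stand-in for
    the control signal that drives each band's DCA/VCA."""
    sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
    return np.maximum(sosfilt(sos, np.abs(band_signal)), 0.0)

def apply_vca(instrument_band, mic_band):
    """More mic signal in this band -> more gain on the matching
    instrument band, as described above."""
    return instrument_band * envelope(mic_band)
```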

Now consider what happens when you play a note into the instrument input while speaking into the mic input. If an output occurs from the mic's lowest-frequency filter, then that output controls the amplifier of the instrument's lowest filter, and allows the corresponding frequencies from the instrument input to pass. If an output occurs from the mic's highest-frequency filter, then that output controls the instrument input's highest-frequency filter, and passes any instrument signals present at that frequency.

As you speak, the various mic filters produce output signals that correspond to the energies present at different frequencies in your voice. By controlling a set of equivalent filters connected to the instrument, you superimpose a replica of the voice's energy patterns on to the sound of the instrument plugged into the instrument input. This produces accurate, intelligible vocal effects.
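Putting the pieces together, the complete loop is short. The sketch below, still under the same assumptions as the earlier snippets, runs both signals through the shared filter bank, derives an envelope from each mic (modulator) band, scales the matching instrument (carrier) band, and sums the results.

```python
def vocode(modulator, carrier, bank=None):
    """Minimal channel vocoder: the modulator's band envelopes gate the
    matching carrier bands, band by band, then the bands are summed."""
    if bank is None:
        bank = make_filter_bank()
    mod_bands = split_into_bands(modulator, bank)
    car_bands = split_into_bands(carrier, bank)
    envs = np.stack([envelope(b) for b in mod_bands])
    out = np.sum(car_bands * envs, axis=0)
    return out / (np.max(np.abs(out)) + 1e-12)  # normalise to +/-1
```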

Vocoders can be used for much more than talking instrument effects. For example, you can play drums into the microphone input instead of voice, and use this to control a keyboard (I've called this 'drumcoding' in previous articles). When you hit the snare drum, that will activate some of the mid-range vocoder filters. Hitting the bass drum will activate the lower vocoder filters, and hitting the cymbals will cause responses in the upper frequency vocoder filters. So, the keyboard will be accented by the drums in a highly rhythmic way. This also works well for accenting bass and guitar parts with drums.
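In code, drumcoding is literally the same call with a different modulator. The usage example below assumes the vocode() sketch above plus the hypothetical file names shown; soundfile is a common library for reading WAV data, and both files are assumed to be mono at the sketch's sample rate.

```python
import soundfile as sf

drums, _ = sf.read("drum_loop.wav")    # modulator: hypothetical drum loop
pad, _ = sf.read("synth_pad.wav")      # carrier: hypothetical sustained pad
n = min(len(drums), len(pad))
rhythmic = vocode(drums[:n], pad[:n])  # drum hits now accent the pad
sf.write("drumcoded.wav", rhythmic, SAMPLE_RATE)
```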

Note that for best results, the instrument (carrier) signal should contain plenty of harmonics, or the filters won't have much to work with.