We’re used to playing and sequencing synth parts in our productions, but why not sing them as well?
It might seem obvious to say it, but the human voice is often a uniquely expressive element in recordings and musical productions. Only a few instruments, mostly acoustic, get anywhere close to its capacity for inflection or (to use an equally valid term from the synth world) modulation. Pitch, timbre and intensity can all be exquisitely controlled, and that’s to say nothing of the extra layer of communication that comes from word‑generated meaning and imagery.
How can we make use of that potential in electronic‑leaning, synth‑based productions, even if (and I speak from personal experience here) we’re not necessarily good singers ourselves? That’s what this article is all about. We’ll look at some of the interesting gear out there, and a range of approaches that can open up creativity‑loosening possibilities in this area.
Getting straight to the heart of the matter, it’s possible (and can be really liberating) to play synth parts with your voice — or, for that matter, with many other melody instruments, like sax or flute. You treat the voice or instrument as a ‘front‑end’, an alternative to a MIDI keyboard, or use an existing recording of it in a DAW track.
In lots of modern DAW software, a pitch‑to‑MIDI ability is built in as an offline process, and the results are often really good. You start by recording your voice (or guitar, sax, kazoo... OK, maybe not kazoo) to an audio track. After an analysis stage, the pitch information can then be extracted to a MIDI/instrument track. The process varies from DAW to DAW: in Ableton Live, for example, the MIDI track, data and a placeholder virtual instrument are all created for you with one command; in others you might have to drag an audio region to a MIDI track and instantiate or configure an instrument of your choice.
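To make the idea concrete, here’s a minimal, hypothetical sketch (in Python, and emphatically not any DAW’s actual algorithm) of the last stage of such a process: turning a stream of per‑frame pitch estimates into discrete MIDI notes with start times and durations. The function names and the 20ms frame size are assumptions purely for illustration.

```python
import math

def freq_to_midi(freq):
    """Convert a frequency in Hz to the nearest MIDI note number
    (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq / 440.0))

def frames_to_notes(frame_freqs, frame_dur=0.02):
    """Group per-frame pitch estimates (Hz, or None for silence) into
    (midi_note, start_time, duration) tuples, merging consecutive
    frames that round to the same semitone."""
    notes = []
    current = None  # (note, start_time, frame_count)
    for i, freq in enumerate(frame_freqs):
        note = freq_to_midi(freq) if freq else None
        if current and note == current[0]:
            # Same semitone as the previous frame: extend the note
            current = (current[0], current[1], current[2] + 1)
        else:
            # Pitch changed (or silence): close off the previous note
            if current:
                notes.append((current[0], current[1], current[2] * frame_dur))
            current = (note, i * frame_dur, 1) if note is not None else None
    if current:
        notes.append((current[0], current[1], current[2] * frame_dur))
    return notes
```

Note that, just as described above, every detected change of pitch starts a brand‑new note — which is exactly why legato phrasing doesn’t survive the conversion.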
Pitch‑to‑MIDI tools... make for intriguing and potentially fruitful alternatives to MIDI keyboard controllers.
For some tasks, this could be enough for great results. A simple monosynth line, or a decaying bass sound, could work right off the bat. For more overtly shaped, expressive synth lines though, you might well want to do some additional work on the pitch‑to‑MIDI generated data.
For example, even very smooth, legato‑style singing (or playing) will tend to generate individual MIDI notes whose lengths, at most, abut each other. As a result, nuances such as legato and portamento transitions are lost, not to mention vibrato, bends, and variations in intensity.
On that first point, it becomes an issue if you’re driving a synth sound with any obvious attack, like a ‘wow’ filter sweep or short‑lived percussive element. You’ll almost certainly get a re‑trigger on every detected change of pitch, even if the sung notes were connected in legato fashion, or you just introduced some subtle bends or fall‑offs.
So a good solution for expressive results is to use a monophonic synth or other solo sound, and switch in its legato option. Then, looking at your MIDI data in a typical ‘piano roll’ editor, consider which pairs or groups of notes should be connected without a retrigger of envelope generators or a sample start, for best musical effect. Extend these notes’ ends a little past the start points of their immediate neighbours to the right: that will be enough to cause the legato transition on the synth. Adding some portamento/glide in the synth patch can create a somewhat vocal‑like ‘swoop’ too, one that you can trigger at will.
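In a piano‑roll editor you’d do this by dragging note ends, but the logic can be sketched as a hypothetical Python function (the tuple format and overlap amount are assumptions for illustration):

```python
def apply_legato_overlap(notes, overlap=0.01):
    """Extend each note's end slightly past the next note's start, so a
    monophonic synth in legato mode won't retrigger its envelopes.
    Notes are (midi_note, start, duration) tuples, sorted by start time."""
    out = []
    for i, (pitch, start, dur) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][1]
            # Only stretch notes that already (nearly) touch their
            # right-hand neighbour; leave gaps after rests alone
            if start + dur >= next_start - overlap:
                dur = (next_start + overlap) - start
        out.append((pitch, start, dur))
    return out
```

The key point is the small overlap: the synth sees the next note‑on arrive before the previous note‑off, which is what its legato mode responds to.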
As for additional expression and shaping, well, some DAWs do extract level information from your audio and pass it on in the form of varying note velocity. It’s a start, but you might still choose to write in MIDI CC or automation data for synth parameters, such as volume, filter cutoff, or vibrato depth and speed. In no time at all that could give you really dynamic results...
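A quick, hypothetical sketch of that last idea: mapping a normalised level envelope extracted from the audio onto 7‑bit MIDI CC values (CC 74 is commonly, though not universally, assigned to filter cutoff; the function name and input format are assumptions):

```python
def envelope_to_cc(levels, cc_number=74):
    """Map a normalised amplitude envelope (one 0.0-1.0 value per
    analysis frame) to 7-bit MIDI CC events, skipping repeated values
    to avoid flooding the MIDI stream. Returns (cc_number, value)
    pairs, one per frame where the value actually changes."""
    events = []
    last = None
    for level in levels:
        # Clamp and scale to the 0-127 range a MIDI CC expects
        value = max(0, min(127, round(level * 127)))
        if value != last:
            events.append((cc_number, value))
            last = value
    return events
```

Driving filter cutoff or vibrato depth from the sung dynamics like this is one way to put back some of the intensity information that plain note data throws away.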