You'd expect plug‑ins based on IRCAM's research to be unique and experimental — but are they practical tools you'll want to use every day in the studio?
The Institut de Recherche et Coordination Acoustique/Musique, next door to Paris's iconic Pompidou Centre, is one of the world's leading centres for research into all things relating to acoustics, sound and music. Its programme encompasses a huge range of activities, from avant‑garde composition to music therapy, and the Institut has had a hand in numerous developments in music technology. Some of these have fed into Free Software projects such as OpenMusic, but IRCAM Tools is a commercial partnership that sees plug‑in developers Flux package up some of IRCAM's cutting‑edge research into products designed for everyday music‑production contexts.
The IRCAM Tools bundle contains a number of separate plug‑ins, some rather cryptically named, but in essence it presents two types of technology. The Spat and Verb plug‑ins are advanced algorithmic reverbs, while Trax is a suite of three plug‑ins that use novel transformation techniques to alter the character of any source, particularly vocal tracks. All are authorised either to an iLok key or a Flux dongle, and if you don't need the entire bundle, you can purchase Spat, Verb or Trax independently.
There are times when you get to hear a 'buzz' about a product before actually trying it, and so it was for me and Verb. People whose opinion I respect have lauded it as the best algorithmic reverb they've heard, and have mentioned it in the same breath as names like Bricasti.
My efforts to try it out were initially thwarted by the fact that none of the plug‑ins would actually load in my system, but when I reinstalled the IRCAM Tools bundle, everything seemed fine. Verb presents a large, intimidatingly grey window containing a great deal of information. Once you get used to it, it actually feels pretty well laid out, though many of the controls are very small and some would benefit from stronger colour contrasts; you can switch between 'night' and 'day' views, but even the daylight colour scheme seems to have been modelled on Victorian London in February!
Given its origins, I was half expecting Verb to be based around experimental parameters with names like 'Singe' and 'Arbre', but, in fact, if you've used any reasonably sophisticated algorithmic reverb before, most of what's here will be familiar. That is, apart from one thing. Most algorithmic reverbs generate their reverberation in two stages: an 'early reflections' part, consisting of a number of distinct echoes, and a 'tail' of undifferentiated, diffuse sound. Much of the character of a room is conveyed through the early reflections, so there are usually numerous ways to manipulate their spacing, tonal quality and timing, while the tail is often less controllable. The major innovation of Verb is that it introduces a third, distinct stage, between the early reflections and the tail.
This third stage is called Cluster. Like the early reflections, it is made up of distinct echoes; however, these are fed not directly from the source but from the early reflections. The spacing of the early reflections, Cluster and the reverb tail can be adjusted in a variety of ways, and they can overlap as much as you like, though, naturally, the Cluster can't be positioned before the early reflections, nor the tail before the Cluster. There are also various global controls, such as an overall Reverb Time parameter, which in turn scales the timing of each individual stage. Early reflections and the Cluster have Distribution parameters, which determine whether the echoes are equally spaced or bunched towards one end. Each of the three sections has its own three‑band equaliser, and three further global controls set decay time offsets for the high-, mid- and low-frequency bands — in many real spaces, the reverberation time at low frequencies is longer than it is at high frequencies.
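For readers who find it easier to see such ideas in code, here is a toy sketch of the three‑stage topology described above: a handful of discrete early reflections, a denser 'Cluster' of echoes fed from those reflections rather than from the source, and an exponentially decaying diffuse tail. This is purely a conceptual illustration under invented parameter names — it is emphatically not IRCAM's algorithm, and it omits the Distribution controls and the per‑band decay offsets.

```python
import random

def toy_three_stage_ir(sr=44100, er_taps=8, cluster_taps=24,
                       er_span=0.03, cluster_span=0.08, tail_time=1.5,
                       seed=0):
    """Build a toy impulse response with three stages, loosely inspired
    by Verb's early-reflections -> Cluster -> tail structure.
    Conceptual sketch only; all names and numbers are invented."""
    rng = random.Random(seed)
    length = int(sr * (er_span + cluster_span + tail_time))
    ir = [0.0] * length

    # Stage 1: a handful of distinct early reflections.
    er = [(rng.uniform(0, er_span), rng.uniform(0.4, 0.9))
          for _ in range(er_taps)]
    for t, g in er:
        ir[int(t * sr)] += g

    # Stage 2: the 'Cluster' -- denser echoes derived from the early
    # reflections rather than directly from the source.
    for t, g in er:
        for _ in range(cluster_taps // er_taps):
            dt = rng.uniform(0, cluster_span)
            ir[int((t + dt) * sr)] += g * rng.uniform(0.1, 0.4)

    # Stage 3: a diffuse, exponentially decaying noise tail, starting
    # where the Cluster leaves off; -60dB is reached at tail_time.
    start = int((er_span + cluster_span) * sr)
    for n in range(start, length):
        decay = 0.001 ** ((n - start) / (sr * tail_time))
        ir[n] += rng.gauss(0, 0.05) * decay
    return ir
```

In a real reverb the three stages would of course overlap and be filtered per band; the point here is only the routing, with the Cluster taking its input from the early reflections.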
There's also a collection of miscellaneous controls governing such things as the way high frequencies are damped by the air in your virtual room, and the extent to which the reflections and the tail are diffused (more diffusion generally gives a smoother result but with less precise localisation within the space). An Infinite button 'freezes' the reverb so that it seems to go on forever, which can be a neat tool for sound design. At the same time, though, some parameters you'd expect to find on a conventional reverb are missing. For example, there doesn't seem to be any independent control over the levels of the three stages. Nor is it possible to change the shape of the virtual space, or choose different reverb algorithms for different types of space: one algorithm covers everything from bathrooms to ballrooms, and if you want plates or springs, you'll need to go elsewhere. That said, I never found myself pining for any of the missing controls in practice.
The emphasis, clearly, is on simulating real spaces, and this Verb does to stunning effect. You can individually solo or disable each of the three reverb stages, and muting the Cluster — which, in effect, turns Verb into a conventional reverb with early reflections feeding a tail — makes its contribution to the sound abundantly clear. The difference between this and lesser reverbs is perhaps most obvious with recreations of small rooms, which just seem to 'belong to' the source in an uncanny way, with none of the barking mid‑range or coarse reflections that are sometimes obvious elsewhere. In those situations where you just want to add a few early reflections to liven up a source, without any obvious reverb, the Cluster is an absolute godsend. At the other end of the scale, meanwhile, halls feel remarkably rich and luxurious. Above all, most of what you get out of Verb is highly usable, whether in an exposed acoustic setting or a dense rock mix.
If Verb is, at heart, a development of familiar reverb technology, Spat builds on it to create an effect that isn't at all familiar, at least to me. It takes the virtual space created by the Verb algorithm — up to three instances of which run as a part of Spat — and uses advanced psychoacoustic processing to place the listener and the sound source within that space. Internally, its algorithm models various technical acoustical parameters, but these are controlled from a well thought‑out and fairly friendly graphical user interface. Because Spat's reverb is a slightly different, cut‑down version of the full Verb, however, only a handful of Verb's presets are included, which is a shame.
Up to eight input channels can be positioned individually by dragging their icons around on a large graph, which indicates the position of the listener's head and the connected loudspeakers (surround arrays of up to eight speakers are supported). By using the modifier keys, it's possible not only to reposition the sources but to adjust the direction in which they are 'firing', both horizontally and vertically, and the tightness of the pattern in which they radiate. Each source can be made to fire into any one of the three reverb engines available. Most of the action takes place in the large graph area (which can optionally be enlarged), but there are quite a few additional controls that provide reasonably friendly ways of fine‑tuning the spatialisation. In some cases, this involves a balancing act between physical realism and perceptual realism, as the settings that most accurately recreate the way real spaces behave aren't necessarily those that provide the most convincing or useful simulation!
In any case, it's my guess that on trying Spat for the first time, most of us will be too busy dragging the mouse about with our jaws open to worry about equalising the off‑axis radiation patterns of our sources. Many psychoacoustic effects have serious drawbacks, such as working much better on headphones than on loudspeakers, but Spat is truly remarkable. The way the sonic character of the source changes as you move it nearer to you or further away, or turn it around so it's pointing away from you, is almost uncanny even in stereo, and I imagine it would be breathtaking on a properly set-up surround system.
The process is supposed to be '3D', but since it's addressing a two‑dimensional speaker array, it's hardly surprising that the vertical element of the positioning pales next to the horizontal component. There are Elevation and Pitch parameters, which are supposed to control the height and vertical angle of the source within the virtual space, but in practice, no matter what you set Elevation to, the Pitch parameter is perfectly symmetrical, so firing 'vertically up' sounds exactly the same as firing 'vertically down', and neither of them conveys any real sense that sound is coming from above or below you.
Without a three‑dimensional speaker array, it would be asking the impossible to expect a convincing portrayal of height from such a system, but considered purely as a two‑dimensional effect, Spat is hugely effective. Perhaps the only major limitation is that it's impossible to recreate the effect of having a source firing directly into a wall or floor; although the early reflections patterns change to reflect distance and left‑right positioning, there are no virtual surfaces against which to position your sources.
With its impressive surround-sound capabilities, Spat has an obvious market in sound design and music for picture, but it would be a mistake to think that it's not useful within a strictly stereo music‑production context. This might sound far‑fetched, but with a reasonably well‑recorded and dry‑sounding mono source, you can actually use Spat as a combined reverb and retrospective mic‑positioning tool! For example, suppose you have just two tracks, a mono acoustic guitar and mono vocal. Not only can you give each of them their own position within a virtual acoustic, in a fashion that's far more convincing than most reverbs permit, but if the acoustic is a shade on the bright side, you could move it a little further away and rotate it so that it isn't firing directly towards the listener — whereupon it will be convincingly toned down at the top end, just as if you'd moved the mic off-axis a little.
If you want to position more than two sources, you'll need to use Spat on a multi‑channel group or aux track, because each mono source you want to position separately needs to be placed on its own input channel. This might be a headache in some hosts, but worked perfectly in Cubase 6.
You might expect advanced processors such as Verb and Spat to make serious demands on your CPU, but in practice they seemed a little variable on this front. Most of the time there was no problem, but just occasionally, CPU load seemed to leap massively for no obvious reason. For example, I had no trouble moving Spat's virtual sources around in real time, even when I switched its automation into Write mode in order to record these movements; but playing those automated movements back sent CPU load through the roof. Shame, because Spat's realism really shines when you can feel the sources moving around you. (I encountered one or two other glitches, too, like sources sometimes disappearing from the graph, but nothing that really compromised its use.)
Of course, the ability to realistically position sound sources within a virtual acoustic space isn't necessary for or appropriate to all styles of music, and it's not obvious how much use Spat would be in a busy modern rock or pop mix, let alone a dance record. But as a means of using artificial reverb to simulate a real acoustic environment, rather than as 'ear candy', it is quite extraordinary, sounding remarkably natural with no obvious 'processed' or 'canned' quality to it.
Trax is a suite of three plug‑ins that are, loosely, designed to alter the character of source sounds in various interesting ways. The most important of the three is Trax Transformer, which, like Spat, hides a lot of very clever technical stuff behind a control set that refers to the everyday language we use to talk about voices and other sounds.
At its most basic, you tell Trax Transformer whether your source is a vocal (and, if so, whether it's male or female, bass, tenor, soprano and so on), a monophonic or polyphonic instrument, or a full music mix. You then hit Learn, and play a few seconds of the source into Trax so it can evaluate its content. After this, Trax can decompose the input, in real time, into three main elements: pitched content, transients and noise. In the case of monophonic and especially vocal source material, this makes possible quite a large number of processes designed to modify the character and timbre of the signal, before a simple mixer allows you to recombine the three elements in any proportion you like.
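The decomposition itself is the clever (and undisclosed) part; the recombination stage is conceptually just a weighted sum of the three components. A minimal sketch of that final mixer, with invented names and the fader gains expressed in decibels:

```python
def remix(pitched, transients, noise, gains_db=(0.0, 0.0, 0.0)):
    """Recombine three pre-decomposed component signals with
    per-component gain faders, in the spirit of Trax Transformer's
    Remix section.  Sketch only: the hard part -- splitting a signal
    into pitched content, transients and noise -- is not shown."""
    # Convert dB fader positions to linear gains.
    gp, gt, gn = (10 ** (g / 20.0) for g in gains_db)
    return [gp * p + gt * t + gn * n
            for p, t, n in zip(pitched, transients, noise)]
```

Pulling the transients fader down a little, as suggested above for backing vocals, would simply mean something like `gains_db=(0.0, -6.0, 0.0)`.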
Perhaps the biggest claim made for Trax Transformer is that it can realistically alter the character of a human voice. To this end, as well as specifying what sort of vocalist recorded the source, you can specify a 'target' voice of a different character — in principle, allowing you to change the gender and pitch range of the vocals — and apply further adjustments using dedicated controls for things like pitch and formant transposition, the amount of 'breathiness' in the voice, and even the singer's age. There's also a modulation section where you can apply an LFO to the pitch and formant parameters, and something called a Spectral Envelope. This contains a diagonal line that sets the mapping of input to output frequencies, superimposed on a 2D graph with the main formant areas of the human voice marked in blue. By double‑clicking to create breakpoints and moving these about, you can effect wild changes in the sound.
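The Spectral Envelope graph amounts to a piecewise‑linear mapping from input frequency to output frequency: leave the line on the diagonal and nothing changes; bend it with breakpoints and frequencies are remapped. A small sketch of that idea (function and parameter names are my own invention, not Flux's):

```python
def spectral_map(freq, breakpoints):
    """Piecewise-linear input->output frequency mapping, in the
    spirit of Trax Transformer's Spectral Envelope graph.
    'breakpoints' is a list of (input_hz, output_hz) pairs; with
    breakpoints on the diagonal, e.g. [(0, 0), (22050, 22050)],
    the mapping is the identity."""
    pts = sorted(breakpoints)
    if freq <= pts[0][0]:
        return float(freq)  # below the first breakpoint: unchanged
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= freq <= x1:
            # Linear interpolation along this segment of the line.
            return y0 + (y1 - y0) * (freq - x0) / (x1 - x0)
    return float(freq)  # above the last breakpoint: unchanged
```

Dragging a breakpoint from (1000, 1000) up to (1000, 2000), say, would shift everything in that region of the spectrum upwards — which is exactly why extreme settings produce such wild results.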
One thing that immediately impressed me is the fidelity with which a vocal track is reconstituted when you don't apply any of this processing. If, for instance, you take a female contralto vocal, set both target and source to 'female contralto' and leave the other controls flat, the results are impressively transparent — more so, I think, than in programs like Melodyne, where a certain 'processed' quality is often audible even when you're not asking it to change anything. The same is true even in Musical mode, where it's being asked to process a full mix.
When you do ask Trax Transformer to change things, subtle transformations are usually more natural than radical ones. Any sort of major adjustment to the transposition and formant parameters sounded pretty jarring to my ears, not to mention those of my long‑suffering wife ("You do know that thing you're reviewing makes everyone sound like Lou Reed on a bad day?"). With the exception of the Young <> Old dial, which has a barely perceptible effect in many cases, many of the controls are really usable only within a small part of their travel, and I wonder whether it might have been better to cut down their parameter range a little, for more accurate control within that range. If you scale down your ambitions, though, it's certainly possible to get useful and reasonably artifact‑free results, especially from the Male <> Female and Breath dials. I was less taken with the 'expression' control, which runs from robotically auto‑tuned to the opposite extreme of wildly exaggerating any pitch variation, and rarely found any use for settings other than the default centre position. This is one area where Trax Transformer's potential hasn't been tapped, as with a bit more control it would make an excellent automated pitch corrector in the vein of Auto‑Tune. However, the Remix section, where you recombine the three components of the signal, offers some interesting possibilities: for example, pulling down the Transients fader a little can be a help when you need to stop a backing vocal distracting from the lead.
This is a complex plug‑in, and I came nowhere near to exhausting its possibilities during the review period. Transparent and useful results are possible within reason, but may take work. It would certainly be possible, for example, to apply different settings to multiple takes of the same vocal part to mimic an ensemble of different singers; and even with absurdly sweeping changes, there are occasional moments of serendipity. Most of my attempts to change the singer's gender or turn a bass into a tenor were failures, but when I tried to turn a female contralto into a male bass, the result sounded surprisingly like SOS's PC expert Martin Walker. And if you abandon the quest for realism, there is a wealth of special effects to be garnered, although it's not always easy to create extreme sounds that are nonetheless intelligible.
In fact, as a generator of special effects, I got more use out of Trax Transformer on instrumental sources than on vocals. Even though you don't get quite so many controls to play with, it's a most effective means of creating bizarre textures and eerie treated noises. For example, by pushing the Breath control up to maximum, you can turn almost anything into an interesting rhythmic loop, while muting the transients adds instant spaciness to any source. The Spectral Envelope really comes into its own on instrumental sources, too, producing all manner of weird and wonderful transformations. For sound designers, I think this could become a must‑have.
Lastly, it shouldn't be forgotten that Trax Transformer is an interesting tool for processing full mixes. As a real‑time pitch‑shifter, it's very impressive, especially when the source material is instrumental, while tools such as the Spectral Envelope and Remix section open up some interesting tonal possibilities. If your CPU has the necessary grunt, you could perhaps even duplicate the track that has Trax on it, then solo the pitched content on the original and the noise and transients on the other, allowing each to be processed independently using other plug‑ins.
My own (admittedly not cutting‑edge) CPU certainly didn't have the legs to do this, and wouldn't even run Trax Transformer in its oversampling mode, which is supposed to offer better quality. Like Spat, Trax also exhibited unexpected CPU spikes occasionally, but I can't think of anything else that does exactly the same as it, or at least, not as well. If you want to make subtle changes to the character of a vocal, or perform naturalistic real‑time pitch‑shifting, you can; if you want to create bizarre robot speech or ambient textures, you can do that too. But be prepared to put in some work.
Fortunately, TraxSF and TraxCS are much simpler plug‑ins than Trax Transformer, at least from the user's point of view. Both are stereo plug-ins designed to process separate mono sources, in the same way that a vocoder uses 'carrier' and 'modulator' signals, so you'll need to set them up on a stereo group or aux channel and hard‑pan the two signals you want to route to them.
TraxCS is designed to 'morph' two source sounds into one coherent hybrid, by using a phase vocoder to blend their dynamic and frequency content. In practice, I found it quite difficult to get good results from it — partly because there are no presets and no suggestions in the documentation about how to use it, and partly because the listener's ears fight against the 'morphing' effect and pick out the original sources as separate sounds. For example, if you try to morph a vocal with an electric guitar, you can tell that the vocal is taking on some of the ringing, middly quality of the instrument, but the actual end result sounds like a singing guitarist playing in a bathroom on a 96kbps MP3 file. I had slightly better luck combining two instrumental sources; a TraxCS hybrid between a trombone and a bowed cello did at least take on some of the sonic qualities of each source, though I have to say that the resulting trombello wasn't exactly the breakthrough in expressive orchestration I'd been hoping for. Nor, if I'm honest, did my saxaccordion have me reaching for the sampler, and while I've always wanted to combine the clarinet and the Clavinet, the results didn't quite do it for me.
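To give a flavour of what 'blending frequency content' might mean in practice, here is a deliberately crude, single‑frame sketch of a spectral morph: take the spectra of two equal‑length frames, interpolate between their magnitudes, and keep the phases of the first source. TraxCS's actual phase‑vocoder processing is certainly far more sophisticated; this toy (with invented names, and a naive DFT so it needs no libraries) just illustrates the basic principle.

```python
import cmath, math

def morph_frame(frame_a, frame_b, amount=0.5):
    """Crude spectral 'morph' of two equal-length frames: blend the
    two magnitude spectra by 'amount' (0 = all A, 1 = all B), keeping
    source A's phases.  Conceptual sketch only -- a real phase
    vocoder works on overlapping windowed frames with phase
    continuity between them."""
    n = len(frame_a)

    def dft(x):  # naive O(n^2) DFT; fine for short illustrative frames
        return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]

    A, B = dft(frame_a), dft(frame_b)
    out = []
    for a, b in zip(A, B):
        mag = (1 - amount) * abs(a) + amount * abs(b)
        out.append(mag * cmath.exp(1j * cmath.phase(a)))
    # Inverse DFT, keeping the real part.
    return [sum(out[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

With `amount=0` the frame passes through unchanged; push it towards 1 and the first source's spectral balance is progressively replaced by the second's — while the ear, as noted above, often stubbornly continues to hear two separate sounds.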
The 'SF' in TraxSF is short for Source Filter, and the idea here is that one source is used to derive an 'excitation' signal, which is then processed by a filter derived from the other source, paralleling the way the human voice is coloured by the vocal tract, or a blown reed by the body of a wind instrument. (At least, I think that's the idea, though it would be helpful if the all‑important Mix parameter was actually documented.)
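Since the documentation is thin, here is my own guess at the principle, in toy form: whiten the excitation source by dividing out its smoothed spectral envelope, then impose the other source's envelope in its place. Everything here — the names, the moving‑average envelope, the single‑frame naive DFT — is an invented sketch of the general source‑filter idea, not Flux's implementation.

```python
import cmath, math

def source_filter_frame(excitation_src, filter_src, smooth=2):
    """Toy single-frame source-filter cross-synthesis: flatten the
    excitation source's own spectral envelope, then apply the filter
    source's envelope -- loosely the vocal-tract analogy TraxSF is
    built on.  Conceptual sketch with invented parameters."""
    n = len(excitation_src)

    def dft(x):  # naive DFT, dependency-free for short frames
        return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]

    def envelope(spec):
        # Circular moving average of the magnitude spectrum: a very
        # rough stand-in for a proper spectral-envelope estimate.
        mags = [abs(c) for c in spec]
        w = 2 * smooth + 1
        return [sum(mags[(k + d) % n] for d in range(-smooth, smooth + 1)) / w
                for k in range(n)]

    E, F = dft(excitation_src), dft(filter_src)
    env_e, env_f = envelope(E), envelope(F)
    # Whiten the excitation, then colour it with the filter envelope.
    out = [e / max(ee, 1e-9) * ef for e, ee, ef in zip(E, env_e, env_f)]
    return [sum(out[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Note one property this shares with the plug‑in as I heard it: because only the spectral envelope is swapped, the 'vocoded' source keeps its own pitch, unlike a classic vocoder or talkbox.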
With vocoding and talkbox effects in mind as a starting point, I tried feeding a vocal to one side and a harmonic‑rich synth sound to the other; mucking about with the controls certainly changed the timbre of the vocal, usually to a thin rasping buzz, but I couldn't honestly say that it ever took on the sonic character of the synth sound. I wasn't sure I liked the effect, but it's certainly very different from a vocoder or talkbox, not least because the source that's being 'vocoded' retains its pitch. I tried with instruments too, but never really achieved something I would want to use in a mix. Once again, there are no presets, nor any guidance about how to get best results, and in the end I ran out of enthusiasm for experimenting before I hit on anything inspirational enough to make me continue.
Assessing IRCAM Tools as a single product is difficult, simply because its component plug‑ins have very little in common, so it's fortunate they're available separately. I certainly have no hesitation in recommending Verb, which is probably the best algorithmic reverb I've ever heard. Although it doesn't offer the same diversity of reverb emulations as other high‑end plug‑ins such as the Lexicon PCM Native Reverb bundle, for convincing recreations of real acoustic spaces it's right at the head of the field. Spat, meanwhile, takes those recreations and makes the whole experience of placing sources within them a thousand times more involving. There's no doubting that it's expensive, but I don't know of anything else that can do what it does — you have to hear it to believe it.
Whereas the appeal of Spat is instant (and lasting!), it takes time to figure out how to get the best from the Trax plug‑ins, and I'm not convinced I managed to do so during the review period. I think Flux have done a pretty good job of packaging up what is obviously a hugely complex piece of software into a comprehensible and intuitive user interface, but it's still sometimes easier to make weird sounds than great ones! Nevertheless, Trax Transformer in particular does some things that no other plug‑in can, and will be a fertile source of inspiration for sound designers and mixers.
There are only a handful of plug‑in reverbs that are in the same league as Verb — perhaps the most obvious would be Lexicon's PCM Native Reverb bundle — and I've never come across a plug‑in that can do what Spat can. There are, likewise, few direct alternatives to the Trax plug‑ins, though for vocal processing, Transformer has some similarities with the likes of Antares' Avox and TC‑Helicon's Voice Modeler.
Like many innovative effects, Spat and Trax Transformer are perhaps easier to understand when you hear them than when you read about them! To that end, I've created a few audio examples to accompany this review, which you can hear at /sos/aug11/articles/ircamtoolsaudio.htm.
- Verb is a stunningly good algorithmic reverb.
- The Spat effect takes the simulation of artificial spaces to a new level.
- Trax Transformer is capable both of subtly changing the character of the human voice and creating weird and wonderful effects. It's also a very good real‑time pitch‑shifter.
- Results from the TraxCS and TraxSF plug‑ins are hit‑and‑miss.
- More presets and better documentation would be welcome.
- CPU‑intensive, and sometimes prone to CPU spikes on the review computer.
Flux have done an impressive job of packaging IRCAM's technologies in a studio‑friendly format, and in doing so, they've set a new benchmark for plug‑in reverbs.
- Dell XPS laptop with 2GHz CPU and 4GB RAM, running Windows 7 Home Premium.
- Tested with Steinberg Cubase 6.