The evolution of convolution takes several steps forward with Inspired Acoustics’ awe‑inspiring immersive reverb.
It’s amazing how a technology that seems mature can turn out to be anything but. Not so long ago, it seemed, we were all pretty happy as long as our plug‑ins could do a passable impression of a Lexicon, or implement convolution without melting our CPUs. Yet over the last few years, plug‑in reverb has evolved beyond recognition. Only last month, for example, Hugh Robjohns reviewed Nugen Audio’s Paragon, the first IR‑based reverb to use re‑synthesis technology. And this month it falls to me to tell you about another breakthrough.
The name Inspired Acoustics may be familiar to SOS readers from their sampled pipe organs, available for the Hauptwerk, GigaStudio and Kontakt platforms. Their latest product is so ambitious that it’s actually unfair to use the word ‘breakthrough’ in the singular, because Inspirata embodies a number of new developments. Like Paragon, or HOFA’s IQ‑Reverb 2, it represents an attempt to make convolution reverb more controllable and more versatile, but the similarities end there.
Conventional convolution reverbs stand or fall on the strength of the supplied IR collection. At first glance, Inspirata’s factory library might seem small, containing as it does only 37 ‘sampled’ spaces (albeit with the promise of more to come). But if you wanted a clue that this is not a conventional convolution reverb, you’d find it in the size of the download: this apparently modest collection comprises well over 100GB of data!
Why is so much data required? Recording an impulse response in a space is like recording any other source. A starting‑pistol shot, sine sweep or balloon pop is emitted from a particular location within the room, and captured by a mic array at another location. So it’s an exaggeration to say that a single impulse response ‘samples the room’. It samples the behaviour of sound emitted from one point within a space, in one direction, as heard from another point, at one point in time.
Loaded into a convolution plug‑in, that single IR can be a better or worse approximation of the actual experience of hearing music within the space. In principle, it should do a reasonably effective job of recreating how a small source such as a solo piano sounds at a particular point within the hall. What it can’t do, however, is simulate the effect of hearing multiple sources at different positions within the hall, such as the members of a symphony orchestra or choir. Nor can it recreate natural modulation, rotation or movement on the part of either source or listener, or variations in directivity of the source, or the way the acoustic changes in different parts of the hall.
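To make the underlying mechanics concrete: at its simplest, convolution reverb just convolves the dry signal with the recorded impulse response, usually via FFT multiplication for speed. The sketch below (plain NumPy, not Inspirata’s or any other product’s actual algorithm; the function name, toy signals and `wet_mix` parameter are illustrative assumptions) shows the basic single‑IR process the text describes, including why one IR captures only one source/listener pair:

```python
import numpy as np

def convolve_ir(dry: np.ndarray, ir: np.ndarray, wet_mix: float = 0.3) -> np.ndarray:
    """Apply one room impulse response to a dry signal by FFT convolution,
    then blend the reverberant ('wet') result back with the dry signal.
    Note: one IR encodes one source position, one mic position, one point in time."""
    n = len(dry) + len(ir) - 1                       # full convolution length
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    wet /= np.max(np.abs(wet)) or 1.0                # normalise wet path to avoid clipping
    out = np.zeros(n)
    out[: len(dry)] += (1.0 - wet_mix) * dry         # dry path
    out += wet_mix * wet                             # reverb tail extends past the dry signal
    return out

# Toy example: a single-sample 'click' source through a decaying-noise 'room'.
rng = np.random.default_rng(0)
dry = np.zeros(100)
dry[0] = 1.0
ir = rng.standard_normal(400) * np.exp(-np.arange(400) / 80.0)
out = convolve_ir(dry, ir)
```

Simulating a whole orchestra this way would mean a separate IR (and a separate convolution) for every source position, which hints at why a library that attempts it runs to 100GB rather than a few hundred megabytes.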
This, in turn, affects the way that mix engineers have to think about reverb. When we’re actually recording in a reverberant hall, we attempt to set the right balance of direct and reflected sound, and to capture an appropriate ambience without...