Have you ever wondered exactly what goes on when classic hardware is recreated in plug-in form? Here's the full story from some of the industry's biggest names.
Plug‑ins that emulate classic hardware have brought countless studio legends within the reach of everyone. But how do plug-in developers go about capturing the magic of the originals, and what compromises are involved in the process? We talked to the designers whose job it is to turn valves and transformers into cold, hard DSP code...
Whether the aim is to make an exact recreation or to take the essence of a classic piece of gear and build on it, there's far more to the process of designing a plug-in to emulate a hardware processor than meets the eye...
There are two basic ways for a plug‑in designer to develop a processing module to mimic the sonic performance of a specific analogue or digital processor. One is to pass a variety of static and changing signals through the device, measure the input‑to‑output characteristics for all front‑panel settings, and then develop DSP (Digital Signal Processing) code that accurately emulates those characteristics. The other is to examine the circuit diagram and model the various component blocks (using one of several commercially available programs), to generate a transfer function from input to output(s). That mathematical function can then be used to generate the DSP routines that emulate the device in question. Most developers combine both techniques, along with a lot of code refinements based on prior modelling experience, and some intensive listening sessions.
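The first, 'black box' route can be illustrated in a few lines of code. The sketch below (in Python, with the hardware stood in for by a simple one-pole low-pass filter; all names and values are illustrative, not any developer's actual tooling) drives a 'device' with steady sine tones and compares input and output RMS levels to map its frequency response:

```python
import math

def device(samples, fs=48000.0, cutoff=1000.0):
    """Stand-in for the hardware under test: a one-pole low-pass.
    In a real session this would be the unit's captured output."""
    a = math.exp(-2.0 * math.pi * cutoff / fs)
    out, y = [], 0.0
    for x in samples:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def measure_gain(freq, fs=48000.0, n=48000):
    """Drive the device with a steady sine and compare RMS out/in."""
    x = [math.sin(2.0 * math.pi * freq * i / fs) for i in range(n)]
    y = device(x, fs)

    def rms(s):
        tail = s[n // 2:]  # skip the first half so the filter has settled
        return math.sqrt(sum(v * v for v in tail) / len(tail))

    return rms(y) / rms(x)

# Probe a handful of frequencies to map the response
for f in (100, 1000, 10000):
    print(f"{f:>5} Hz: {20 * math.log10(measure_gain(f)):+.1f} dB")
```

A real measurement rig would sweep many more frequencies and drive levels, and capture distortion and time-varying behaviour too, but the principle of probing input-to-output characteristics is the same.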
"Most classic equipment can be physically modelled in the digital world,” says Dave Berners, Chief Scientist at Universal Audio, a firm who've been modelling analogue equipment for more than 10 years, including their own classics, the 1176LN and LA series of analogue compressor/limiters. "In general,” he continues, "analogue equipment that exhibits high‑bandwidth, non‑linear behaviour presents the biggest challenges in creating accurate models. But it's often the sound of these non‑linearities that makes the original analogue equipment so desired. Put simply, the more non‑linear the behaviour, the more complex the physical model that's required, and the more processing power needed.”
One class of effect that remains too complex to be explicitly modelled is real acoustic spaces. "Acoustic spaces are more easily [modelled] using statistical analysis or direct measurement via sampling,” states Berners, who teaches courses in DSP and physical modelling at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA).
"Most models assume a great deal of 'linear' behaviour, which is easy to capture; the trick is in non‑linear models,” confirms Edmund Pirali from Intelligent Devices, who last year launched a plug‑in that emulates the highly regarded Marshall Time Modulator, an analogue effects processor designed in the mid-'70s by the late Stephen St Croix. "You can try to really model what is going on, but that is just too complex and computationally intensive. So the common approach is to either interpolate between varying linear models, or develop methods of 'faking' the non‑linear behaviour. Since all analogue devices were made from components that have mostly defined behaviour, digital models of the components are not out of the question. The issue is catching the drifting, shifting, unpredictable behaviour, especially as capacitors and parts age in older devices. We modelled the behaviour of the Time Modulator's various sub‑sections. Most of our time was spent in developing correct models for various behaviours of the components used in the original unit, and then coming up with relatively efficient algorithms to model complex non‑linear behaviours.”
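As a minimal illustration of 'faking' non-linear behaviour (a generic technique, not Intelligent Devices' actual algorithm), a memoryless tanh waveshaper is one of the cheapest ways to approximate analogue-style soft clipping:

```python
import math

def soft_clip(x, drive=2.0):
    """Memoryless tanh waveshaper, normalised so that x = 1.0 maps to 1.0.
    A common, cheap stand-in for analogue-style saturation; real devices
    also exhibit memory effects (drift, ageing) that this cannot capture."""
    return math.tanh(drive * x) / math.tanh(drive)

# Output grows sublinearly: large signals are squashed, generating
# odd harmonics much as an overdriven analogue stage would.
for x in (0.05, 0.25, 0.5, 1.0):
    print(f"in {x:+.2f} -> out {soft_clip(x):+.3f}")
```

The shape is odd-symmetric, so it produces only odd harmonics; modelling the even-harmonic content of real circuits needs an asymmetric curve or a bias term.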
Udi Henis from Waves, who offer a number of highly regarded classic processor plug‑ins, including an Artist Signature Collection developed in collaboration with award‑winning producers, considers that successful modelling depends on a number of factors. "For example,” he offers, "devices that have stepped controls can be more consistent when it comes to recalling certain settings, whereas with continuous controls it's more difficult to achieve exact recalls, with the more reliable settings being at the extremes, and intermediate settings [needing to be made] as accurate as possible to achieve a comparable feel of operation.”
"As long as it's a part of universal physics, an effects unit can be modelled,” asserts Niklas Odelholm from Softube, a firm who offer a number of emulated effects and guitar amplifiers, as well as developing plug‑ins for Abbey Road and other companies. "The guys at CERN [the European Organisation for Nuclear Research] are doing their best to model extremely stochastic behaviours of fundamental particles, and modelling audio gear is not much different. A plug‑in modifies signals in the same way as the original unit, and this is done by working the maths. There are limitations to what is feasible to model in terms of model complexity, which manifests itself as work hours and/or CPU load. For example, to accurately model how sound waves are being distributed, delayed and changed through a room is extremely power‑consuming; it will probably take quite some time before we get accurate models for that. First you master the equations, then you find a way to express these in 'C' code, and then you are — almost — ready to go.”
"Anything created in hardware can be recreated in software,” declares Colin McDowell, CEO and founder of McDSP, whose plug‑ins emulate vintage equalisers, compressors, tape machines and amplifiers. "Writing software is a much more fluid engineering method and, by its very nature, more flexible than a fixed hardware design. Furthermore, the notion that 'classic' analogue gear is capable of doing something that the average computer cannot do is outdated. Limitations, if any, only exist in the imagination, experience and creativity of the engineer(s) making the audio plug‑in.”
"Practically, we are limited by computer CPU power and by our understanding, or lack of understanding, of the underlying devices,” says Ken Bogdanowicz from SoundToys, who offer a range of emulated effects units. "If a processor is linear and static, we can completely model the device by measuring its transfer function using sine sweeps or wideband test signals. If the device is time‑varying or non-linear, or both, we need to understand its workings. Sometimes this can be done with a schematic and in‑circuit measurements. Other times this is much harder, like modelling analogue tape, or some old hardware digital reverb units, where the underlying algorithm is unknown.”
Sometimes there are limitations caused by not having enough information, including variations in input or output impedance. "Plug‑ins generally act like they all have perfect line buffers between them,” says David Tremblay, Avid's audio DSP software engineer. "In the real world those devices can interact, resulting in changes in frequency response or distortion characteristics. Theoretically, this can be modelled, but usually the algorithm doesn't know what it is being plugged into. This is one of the reasons we added 'True‑Z' impedance matching to our Eleven Rack.”
"We could have modelled the guitar/amp impedance interaction,” adds Avid's Chris Townsend, "but we'd have had to know the impedance of the guitar [at all frequencies], which is not even remotely practical.”
"Of course,” Tremblay states, "the other obvious limitation is time, and how much of it is required to extensively emulate these devices or processes. Theoretically, you can spend years covering every possible use or behaviour but often that's not sustainable from a business perspective.”
"We rarely do real modelling of existing analogue equipment,” says PSP Audioware's master developer Mateusz Wozniak. "In most cases we don't want to imitate specific hardware, but rather port the best analogue sound features to the digital domain. If we find the right solution and it sounds good, it is good.”
"We listen to the target and become familiar with it,” says McDSP's McDowell. "The modelling process is entirely a 'black box' approach: we send a variety of signals into the device, measure how they are affected at the output, and then begin the work of creating a process in software that does the same thing. Some input signals are very simple, and others are complex, but the idea behind each is to pull out a characteristic of the device we're modelling. Sure, there are the textbook ways of modelling — getting a step response, or a frequency sweep — but many of those digitally created signals are not the kinds of input the modelled device is meant to handle.
"It's crucial to know what parts of a measurement are the 'real' responses, and what parts are in fact artifacts from the test signal — or even better, know how to create test signals that are within the operating range of the target device. Homing in on what really makes a device sound good, removing its bad parts (if it has any) and understanding how our test signals are being affected by the device is a good way to make software that people like to use as much as the hardware!
"Other companies prefer the circuit part‑by‑part approach. The fun in this is you get to literally rebuild the device, and tune the parameters until the output of the modelled version and original are nearly equivalent. Which approach is better? Circuit‑by‑circuit lets you create something that is not as optimised as a black-box approach, [but] then there's that whole intellectual property [aspect]. With a black-box model, you've recreated something based on no more knowledge than what goes in and out of the box. If you 'borrow' someone's schematic and rebuild it [in a plug‑in] what have you done? The latter just doesn't seem as creative, and is more like a knock‑off than a new engineering design. If your company owns those original schematics, that would be different; it's porting a product to a new platform, rather than 'borrowing'.”
Designer and producer George Massenburg of GML isn't convinced that classic hardware can be — or even should be — modelled. "My design for the MDW digital parametric equaliser [plug‑in] was not only a new design, but also a significant departure from previous work in the field,” he claims. GML's soon‑to‑be‑released dynamics processor is less a 'model' or emulation than "an enhancement and extension of the underlying mathematical building blocks of the analogue original,” he says. "Some modelled EQ plug-ins work OK” — particularly the offerings from Universal Audio, Massenburg concedes — "but the best work is less 'modelled' or 'emulated', and more just plain well-engineered.
"How well any digital processor performs depends on several factors, including the available processing power,” he continues. "A DSP engineer with very good ears generally does better than the guy staring at MATLAB emulations. Modelled equalisers might work but, to my ears, it is next to impossible to successfully and persuasively model an analogue compressor-limiter.” [MATLAB is a proprietary numerical-computing environment widely used for prototyping and testing DSP algorithms before they are rewritten in a conventional programming language.]
The Transfer Function
At the heart of the modelling process is the development of DSP code that mimics the device's response to time‑varying signals as the user adjusts parameters via some form of graphical user interface — in other words, the on‑screen knobs and switches. If a circuit diagram can be located for the device that is to be modelled, the first step is to use experience and/or a commercially available program — and usually a combination of both — to develop a multi‑dimensional differential equation that relates output levels to variations in input levels and control parameters. That mathematical expression is the basic 'transfer function' for the device, which can then be modelled in DSP code.
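As a worked example of turning a transfer function into DSP code, consider the simplest possible case: a passive RC low-pass, whose analogue transfer function is H(s) = 1/(1 + sRC). Applying the bilinear transform (a standard discretisation method; the component values below are illustrative) yields a one-pole digital filter:

```python
def rc_lowpass_coeffs(r, c, fs):
    """Bilinear-transform discretisation of H(s) = 1 / (1 + sRC),
    the transfer function of a passive RC low-pass filter."""
    k = 2.0 * fs * r * c
    b0 = b1 = 1.0 / (1.0 + k)
    a1 = (1.0 - k) / (1.0 + k)
    return b0, b1, a1

def process(samples, r=1.6e3, c=100e-9, fs=48000.0):
    """Run the difference equation y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1].
    Defaults give RC = 160 microseconds, i.e. a cutoff near 1 kHz."""
    b0, b1, a1 = rc_lowpass_coeffs(r, c, fs)
    x_prev = y_prev = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x_prev - a1 * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out
```

A real vintage EQ involves many such stages, plus gain-dependent interactions and non-linearities, but each linear block reduces to coefficients and a difference equation in essentially this way.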
SPICE (Simulation Program with Integrated Circuit Emphasis) is a popular open‑source program used to check the integrity of circuit designs and predict their behaviour. After details of the components in a circuit have been entered, the simulation software generates various I/O plots that can be used to derive mathematical transfer functions. Other programs offer variations on that basic theme, but do the same job. The modelled circuits can contain most types of active components, independent voltage and current sources, dependent sources, and lossless and lossy transmission lines. A separate compiler is required to create executable code that runs natively on a PC or DSP system.
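At its core, a SPICE-style simulator repeatedly solves the nodal equations G·v = i for the circuit under analysis. The toy sketch below (pure Python, a two-node resistor chain with the source folded into its Norton equivalent; nothing like a production simulator) shows that central linear solve:

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting: the core linear solve a
    SPICE-style simulator performs at every DC operating point."""
    n = len(b)
    a = [row[:] for row in a]  # work on copies
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

# Two-node chain: 3V source -> R1 -> node1 -> R2 -> node2 -> R3 -> ground,
# all resistors 1 kOhm; the source enters as a Norton current at node 1.
G1 = G2 = G3 = 1.0 / 1000.0
g_matrix = [[G1 + G2, -G2],
            [-G2, G2 + G3]]
i_vector = [3.0 * G1, 0.0]

v = solve_linear(g_matrix, i_vector)
print(f"node voltages: {v[0]:.3f} V, {v[1]:.3f} V")
```

Non-linear components complicate matters: the simulator must linearise them around an operating point and iterate this solve until it converges, which is where the heavy computational cost comes from.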
To initiate the modelling process, many developers start with a careful examination of the original circuit topology. "We start by reviewing original schematics, which we often get directly from the original hardware designers, to predict sources of non-linearity,” UA's Berners confirms. "If the hardware contains non‑linearities, we must develop a set of test signals that will expose them. As a result, each device we model will have its own set of test signals; we wouldn't use the same signals to characterise an 1176 and an LA2A, for example, even though they are both compressors. Sometimes we disassemble a device in order to make proper measurements: we disconnect compressor side-chains, remove bias circuits from tape‑loop delays, unsolder saturating inductors, and basically do whatever it takes.
"Generally speaking, our analogue emulations employ physical circuit modelling rather than signal modelling. Signal modelling is fine for linear systems, but inadequate for non‑linear systems. In fact, for equipment with unknown non‑linearities, like a vintage analogue compressor, a full characterisation cannot be made with a finite number of test signals.
"Even if the original hardware is highly linear,” continues Berners, "an accurate model must account for any component tolerances and parasitic effects. For example, an inductor, even if highly linear, will have resistance and capacitance associated with it.”
"Essentially, all models are more or less complex differential equations that attempt to make the output of a [software] model equal the output of the 'role model',” says Waves' Henis. "In the earlier stages of modelling, the algorithmic work is usually done in flexible but not necessarily efficient mathematical environments, such as MATLAB. This can help achieve proof of concept for a modelling method, technology or engine, before coding a more specific and efficient — but much less flexible — process function. Through our modelling experience we ended up using multiple techniques for modelling the same type of processors. For example, our SSL, API and PuigTec equalisers use different filtering engines to achieve the best modelling of their reference role model.
"Additionally, the actual component values may not match up with the provided schematics. By making multiple input‑output measurements, we can use best‑fit algorithms to calculate actual component values, including parasitics. This method is more accurate than removing components and measuring them individually; some electrical properties can only be measured with the component mounted in the circuit.
"Getting the right 'role model' or reference device, and verifying that it's in the desired condition, is critical. Many times we have to consult with experts that have real mileage using the original devices to determine that we have a good device. Sometimes we travel great distances to find the right unit.”
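The best-fit idea Henis describes can be sketched very simply: given gain measurements of an unknown first-order low-pass, search for the corner frequency that minimises the squared error. (Here the 'measurements' are synthesised with roughly one percent noise; a real session would use bench data and fit many component values at once.)

```python
import math

def rc_gain(f, fc):
    """Magnitude response of a first-order low-pass with corner frequency fc."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

# Synthetic 'bench measurements' standing in for real hardware data.
FC_TRUE = 1200.0
freqs = [100.0 * 2 ** (i / 2) for i in range(14)]           # 100 Hz .. ~9 kHz
measured = [rc_gain(f, FC_TRUE) * (1 + 0.01 * math.sin(i))  # ~1% 'noise'
            for i, f in enumerate(freqs)]

def fit_corner(freqs, measured, lo=100.0, hi=10000.0, steps=2000):
    """Brute-force least-squares fit of the corner frequency over a log grid."""
    best_fc, best_err = lo, float("inf")
    for s in range(steps + 1):
        fc = lo * (hi / lo) ** (s / steps)  # log-spaced candidate
        err = sum((rc_gain(f, fc) - m) ** 2 for f, m in zip(freqs, measured))
        if err < best_err:
            best_fc, best_err = fc, err
    return best_fc

print(f"fitted corner: {fit_corner(freqs, measured):.0f} Hz")
```

Production tools would use a proper multi-dimensional optimiser rather than a grid search, but the principle of inferring in-circuit component values from input-output measurements is the same.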
Wave Arts' president Bill Gardner says that such circuit simulation is "computationally expensive; the expense grows as the cube of the number of circuit nodes. For example, our Tube Saturator plug‑in has six circuit nodes in the non‑linear portion, containing two 12AX7 pre‑amp stages. This requires about 33 percent of a 2.8GHz P4, processing mono 44.1kHz samples; doubling the size of the circuit requires eight times the CPU. A complete tube amp, like a Fender Champ with 24 circuit nodes, would require 64 times the CPU — or about 20 CPU cores! However, CPU power will continue to increase. In the not‑so‑distant future we'll be able to model any vintage analogue gear by simply entering its circuit schematic and running it in real time. And, in many ways, these simulations will be better than the original circuit: exact component values, perfectly matched components, no noise...
"Analogue circuits, even non‑linear ones, are bound by physical laws, and hence their behaviour can be modelled to arbitrary precision using circuit‑simulation techniques. The schematic completely defines the circuit behaviour. It's no longer necessary to apply ad hoc DSP modelling techniques and hope you get something that sounds right: non‑linear components (diodes, transistors, tubes...) can be modelled on a workbench, typically by applying voltages and measuring currents, yielding a one‑ or multi‑dimensional table that describes the current‑voltage characteristic of the device. Linear passive components (resistors, capacitors, inductors, transformers...) have known, ideal mathematical behaviours. An op-amp is an example of an active component that has a very well-defined mathematical behaviour. However, inductors and transformers can have complicated magnetic effects, such as saturation and hysteresis, that may require a more detailed model. If a circuit contains only linear elements or known non‑linear ones, it's only necessary to input its schematic.”
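Gardner's cube-law figures are easy to verify: if solving the non-linear portion of a circuit costs on the order of n³ for n nodes (an n-by-n system must be solved every sample), then doubling six nodes to 12 multiplies the cost by eight, and going to 24 nodes multiplies it by 64, exactly as he states.

```python
def relative_cost(nodes, ref_nodes=6):
    """Circuit-simulation cost relative to a reference circuit, assuming
    the cube-law scaling Gardner describes (an n x n solve per sample)."""
    return (nodes / ref_nodes) ** 3

# Baseline: Tube Saturator's 6 non-linear nodes (~33% of one 2.8GHz P4)
for n in (6, 12, 24):
    print(f"{n:>2} nodes -> {relative_cost(n):>4.0f}x the baseline cost")
```

At 64 times a 33-percent load, a 24-node Fender Champ simulation would indeed need on the order of 20 such CPU cores.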
"It's very useful to have a schematic,” Waves' Henis agrees. "This can help figure out some technicalities and even create a general model that will require further tuning and tweaking. Sometimes we know that we will get the role-model hardware only for a limited amount of time, and we usually target that for the latter stages of the modelling process; meanwhile, we use recordings and measurements as references, and sometimes a stand‑in that will substitute for the rare and unique role model.”
Thomas Valter of TC Electronic says the company use proprietary tools to generate an initial model and to carry out listening tests.
"The main part of algorithm development at TC is done without a specific target — such as a plug‑in,” he says. "Algorithms often start out living inside a computer that simply streams audio through the algorithm, so we can listen, and make adjustments to achieve maximum sound quality. Quite late in the process we do the plug‑in implementation; we almost always start out with a generic [user interface] — only faders and buttons.
"Towards the end of the process, we add the full GUI,” and port the DSP algorithm to a specific DAW plug‑in. "That's the easy part: the hard part is to get the core algorithm to behave like we want.”
How Well Does The Plug-in Model Mimic The Hardware?
With the processor modelled in DSP code and ported to the appropriate plug‑in format, there follows the final stage of evaluation — finding out exactly how well the plug‑in mimics the original hardware. So how do the designers approach this?
"We go through many iterations of measuring, listening, identifying strengths and weaknesses,” explains Waves' Henis. "Sometimes algorithmic additions or changes will take place until the model and original are virtually indistinguishable. The basic qualities we evaluate first are those of frequency response, linearity and THD [Total Harmonic Distortion]. Phenomena considered to be side‑effects, such as noise and hum, have a critical effect on how a signal sounds, [so] we do our best to make these match.”
TC Electronic use a combination of critical listening and measurement: "Our 'golden ears' are able to hear the most detailed parts of the algorithm,” Valter says, "and to communicate with the programmers about the parts that need attention. We do measure whether a plug‑in is able to modify signals in the same way as the original, but at the same time insist that professional sound engineers listen to the result — and like it.”
"Each [modelled] device is unique and poses its own problems,” states Steinberg's Andreas Mazurkiewicz. "Listening is important; viewing too. Sometimes I write special tools to analyse a tough problem, when a signal generator and a spectrum analyser are not enough.”
"Modelled plug‑ins are evaluated against originals,” McDSP's McDowell says, "as well as how they hold up on their own. Even more often, we hold listening tests [with] the owners of those originals, since experienced ears can sometimes pick out differences that our observations could not. Recently, we've even taken the approach of intentionally creating an altered design: our 6030 Ultimate Compressor has some emulated [dynamics], but for each model we've tweaked it to be different from the original — perhaps some more sensitivity in the mid‑range, less pumping in the low end, and so on.”
"A typical error might be that the distortion just doesn't sound right, or the attack in a compressor is a bit off,” says Softube's Odelholm. "We try to find the components or circuits that are causing the difference. For instance, there was a small difference in the way our 1176 model distorted for fast release times. We were able to track that down to a 2mV bias difference in the hardware's detector circuit. We inserted those 2mV into the virtual model and got it right. For the RS127 Green from the Abbey Road Brilliance Pack, we had a difference between the measurements and simulations, but only at a certain setting. We measured all the components again, and checked that the hardware matched the schematic. Everything looked OK, but it still sounded different. Finally, we realised that it was due to a resistor that came loose when the dial was set at that specific position; when we measured that resistor we had to set the dial to another position. The way we found that out was by solving an equation system — we just had to insert that loose resistor in the schematic to make the model work just like the hardware! When there is an error in a model, it usually leads to lots of errors; a difference in frequency response at the input transformer will lead to different distortion in the next stage, which will make the release times longer, and so on. Once you find that error, everything else falls into place. That's the strength with component‑modelling, as compared with signal‑modelling approaches.”
"We're simultaneously listening for positives and negatives,” explains Alex Westner, Cakewalk's director of product management. "It's important to measure and analyse frequency response, dynamic response and temporal responses of the model. Subjectively, we're paying attention to the desired analogue characteristics, such as warmth, saturation, desirable distortion and 'character', but we're also listening for 'pollution', [such as] aliasing and quantisation noise. Sometimes listeners can't put their finger on what they like or don't like, so we'll tweak our model towards favouring their responses, rather than staying true to what it is we were initially trying to model.”
"We listen first, and measure second,” says SoundToys' Ken Bogdanowicz. "We listen to a wide variety of sources — tones, drums, vocals, guitar and full mixes — at a wide range of levels and settings. If it doesn't sound right, we'll turn to our measurement tools, to try to zero in on why it's not sounding right.”
"Our listening team works with [UA] engineers throughout the process to expose potential problems,” Universal Audio's Dave Berners tells us. "One advantage of using physical modelling is that the familiarity gained by circuit analysis gives our engineers a good intuition about the equipment's behaviour, so problems found during listening sessions can be identified more easily. Of course, we also work with the original hardware equipment designers like Neve, Manley Labs, Empirical Labs, Roland and EMT, to ensure that the final results meet both UA's standards and their own.”
Emulation Accuracy: The Debate Rages On...
Whether the aim is to make an exact recreation or to take the essence of a classic piece of gear and build on it, there's far more to the process of designing a plug-in to emulate a hardware processor than meets the eye. An intimate knowledge of the way the original system functions is essential, as is a full understanding of how to model that time‑dependent behaviour across the full range of input signals and front‑panel settings. And once a series of multi‑variable differential equations has been developed to represent the processor's essential behaviour mathematically, the translation into processing code still takes time and a great deal of experience. The development team will need to listen to the result across all user settings, and ensure that the in‑progress code mimics the targeted device in every important aspect. Only then will it be released to end users.
Debate over the precise degree of accuracy of these emulations is unlikely to go away, but while the coveted originals are often rare and expensive, modelled plug‑ins provide a means for every DAW user to access their distinctive character. Plus, of course, we can use as many instances of them as our computers can handle!
Mel Lambert has been intimately involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is now principal of Media & Marketing, an LA‑based consulting service for the pro‑audio industry, and can be reached at email@example.com.
Native CPU Or DSP?
Once a basic algorithm has been developed and evaluated, the code will be ported or cross‑compiled to an appropriate format for the host CPU or a DSP card. "Writing highly efficient code can be cumbersome for both host- and DSP‑based systems,” says UA's David Berners. "Some DSP systems run in fixed-point number systems which, in certain cases, can be inconvenient; our DSPs, by contrast, run in floating-point, which avoids that problem.” Since the firm's UAD cards use only one type of [SHARC] DSP chip, "we only have to optimise our code for one system. As a result, we can write our code by hand, which is very tightly optimised, versus code generated by a compiler.”
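Berners' point about fixed-point number systems being "inconvenient" can be illustrated with coefficient quantisation: in 16-bit Q15 format (a common fixed-point convention; the coefficient value below is illustrative), values snap to multiples of 1/32768, which is coarse relative to the tiny distances from unity that low-cutoff filter coefficients occupy.

```python
def to_q15(x):
    """Quantise a value in [-1, 1) to 16-bit Q15 fixed point,
    saturating at the format's limits."""
    return max(-32768, min(32767, round(x * 32768))) / 32768.0

# A feedback coefficient for a very low-cutoff one-pole filter sits
# extremely close to 1.0; Q15 can only place it on a 1/32768 grid.
a1 = 0.99957
print(f"ideal {a1}, Q15 {to_q15(a1)}, error {abs(to_q15(a1) - a1):.2e}")
```

Floating point keeps roughly constant relative precision instead, which is why, as Berners notes, it sidesteps this class of problem; fixed-point designs must compensate with tricks such as error feedback or double-precision accumulators.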
"When we implement algorithms for native processing,” says Waves' Udi Henis, "or processing carried out on CPU-type processors, and hardware-based TDM systems, each will be optimised for efficiency. Native processing power has become so huge that computer performance has left dedicated mathematical processors way behind. Implementing plug-ins to run only on dedicated hardware cripples the possible scope offered by native processing. The best model allows both; our TDM licence lets the process run on [Pro Tools DSP cards] and on a host CPU.”
"It's generally more difficult to develop algorithms for processor cards or embedded devices,” Avid's Chris Townsend says. "There's generally less memory, no operating system and less robust development tools. On the other hand, DSP hardware can be specifically designed for the application and may be highly optimised. For Pro Tools TDM plug-ins, the audio processing algorithms are the only code that runs on the card. For embedded products, such as Eleven Rack, there's a user interface, preset handling and control processing, in addition to the DSP algorithm processing.”