I have noticed that different mixing consoles and multitrackers have different kinds of faders — long- and short-throw, motorised, touch-sensitive, conductive plastic, and so on. Clearly, not all faders are created equal, but what are the essential differences?
Technical Editor Hugh Robjohns replies: On the first sound mixing consoles, up until around the 1950s, the faders were actually large rotary knobs, because that was all that the engineering of the day could manage. Nevertheless, rotary controls are extremely ergonomic to use — a simple twist of the wrist provides very precise and repeatable settings of gain. The downside, of course, is that only two rotary faders can be operated independently at the same time by one person because most people have only two hands.
The rotary fader changed the level of the audio signal by altering the electrical resistance through which it had to pass, with that resistance corresponding to the fader position in a logarithmic way. The changing resistance was usually achieved using a chain of carefully selected resistors mounted between metal studs, contacted via a moving wiper terminal driven directly by the rotary control. This arrangement typically provided 0.25dB to 0.75dB between individual stud positions, so that as the control was rotated the gain jumped in small steps, each below the size of abrupt level change that most people can detect.
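To see why equal-dB steps call for unequal resistors, here is a small sketch of a stud fader modelled as a simple potential divider. The function name, the 10kΩ total and the 0.5dB step are all illustrative assumptions, not figures from any real fader:

```python
def stud_ladder(total_r=10_000.0, step_db=0.5, studs=20):
    """Resistor values to fit between the studs of a stud fader,
    modelled as a potential divider, so that each stud position
    attenuates the signal by step_db more than the one before."""
    # Resistance remaining below each stud for the target attenuation
    lower = [total_r * 10 ** (-(k * step_db) / 20) for k in range(studs + 1)]
    # The resistor wired between each pair of adjacent studs
    return [lower[k] - lower[k + 1] for k in range(studs)]

segments = stud_ladder()
# Equal dB steps need a geometric taper: each resistor is smaller
# than the previous one by a constant ratio.
```

Because the attenuation is logarithmic in position, the resistor values fall in a geometric progression rather than being equal, which is why the studs carried "carefully selected" resistors rather than a run of identical ones.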
The next stage in fader development was the quadrant fader, popular through the 1960s and early 1970s. Superficially, this arrangement was much closer to the linear fader concept we have today, except that the knob on top of the fader arm travelled along a curved or arced surface rather than a flat one. You can see two sets of four quadrant faders in the central section of the EMI REDD 17 desk pictured here. One big advantage of the quadrant fader was that its mechanism is quite slim, so the faders could be mounted side by side with a knob more or less under each finger of each hand. This allowed the operator to maintain instant control of many more sources at once, and to assess their relative balance visually. Again, a travelling wiper controlled by the fader knob traversed separate stud contacts arranged in an arc, with resistors wired between the studs to create the required changing resistance.
The more familiar linear or slider-type fader we all take for granted today was developed in the 1970s, with the control knob running on parallel rails to provide a true, flat fader. By this time the stud terminal had been replaced in professional circles by a smooth resistive track made from a hard-wearing conductive plastic material, providing far better consistency and a longer operating life than the simpler carbon-deposit track used in cheaper rotary controls and faders. However, both of these mechanisms provided a gradual and continuous change of resistance, rather than the step increments of the stud-type faders.
The mechanism of a slider fader is relatively complex, and economies can be made by using shorter track lengths, hence a lot of budget equipment tends to employ 'short-throw' faders of 60mm or so, rather than the professional standard length of 104mm. Obviously, the longer the fader travel, the greater the precision with which it can be adjusted.
With the introduction of multitrack recording, mixing became increasingly complex and mix automation systems started to emerge in the late 1970s and 80s. These often employed voltage-controlled amplifiers to govern the signal levels of each channel, rather than passing the audio through a fader's track — the faders simply generated the required control voltages. However, the performance of early VCAs wasn't very good, and so an alternative arrangement was to fit motors to the faders so that the channel levels could be controlled directly by the fader track in the usual way.
Besides the benefits in audio quality, this approach also enabled the engineer to see what the mix automation was doing visually from the faders themselves, rather than just on a computer screen. Conductive knobs were also introduced so that the fader motor control system would know when a fader was being manipulated by hand, and so drop the appropriate channels into automation-write mode while simultaneously disabling the motor drive control so that the fader motors wouldn't 'fight' the manual operation.
When digital mixing consoles were developed, the audio manipulation was performed in a DSP somewhere, so audio no longer passed through the faders. Some systems use essentially analogue faders to generate control voltages — much like the early VCA automation systems — but the control voltages are then translated into a digital number corresponding to the fader position with a simple A-D converter. This fader position number is used as the multiplying factor to control the gain multiplications going on inside the DSP. Some more sophisticated systems employ 'digital faders', many of them using contact-less opto-electronics. A special 'barcode' is etched into the wall of the fader, and an optical reader is fixed below the fader knob so that as the fader is moved, the reader scans the barcode to generate a digital number corresponding to its position, which, in turn, controls the DSP.
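In the digital domain, that final gain stage is just a multiplication. The sketch below is a hypothetical illustration (the function names are invented, and it assumes a simple linear-in-dB law with unity gain at the top of travel and a 100dB range) of how a fader's position word might become the coefficient the DSP multiplies each sample by:

```python
def position_to_gain(code, bits=8, range_db=100.0):
    """Turn a fader's digital position word into the linear coefficient
    used for the DSP's gain multiplication. A sketch only: assumes a
    linear-in-dB law with unity gain at the top of the travel."""
    max_code = (1 << bits) - 1
    if code == 0:
        return 0.0                                   # fully down: silence
    db = -(max_code - code) * range_db / max_code    # 0 dB at the top
    return 10 ** (db / 20)                           # dB -> linear multiplier

def apply_fader(samples, code):
    """Scale a block of samples by the fader's current position word."""
    g = position_to_gain(code)
    return [s * g for s in samples]
```

Real consoles apply a proper fader law rather than this straight-line mapping, but the principle — position word in, multiplying coefficient out — is the same.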
Being digital, the faders output a data word, and the length of this word (the number of bits it comprises) determines the resolution with which the fader's physical position can be stated. Essentially, the longer the data word, the greater the number of steps into which the fader's travel can be divided. More subdivisions, in turn, mean more precision in the digital interpretation of the fader knob's movement. Audio faders are typically engineered with eight-bit resolution, providing 256 levels, but some offer 10-bit resolution, which translates as 1024 different levels. In crude terms, as an audio fader needs to cover a practical range of, say, 100dB, an eight-bit fader will provide an audio resolution of roughly 0.4dB per increment. In other words, the smallest change of level that can be obtained by moving the fader a tiny amount would be about 0.4dB. A 10-bit fader would give 0.1dB per increment, and both step sizes are below the typical level change that people can hear. In practice, there is also a degree of interpolation and smoothing performed by the DSP, so the actual level adjustment tends to be even smoother, and 'stepping' is rarely, if ever, audible in modern, well-designed systems.
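That back-of-envelope arithmetic can be checked in a couple of lines, assuming (as the rough estimate above does) that the travel maps linearly onto a 100dB range:

```python
def db_per_step(bits, range_db=100.0):
    """Smallest level change a fader word of the given width can express,
    assuming the travel maps linearly onto range_db."""
    return range_db / ((1 << bits) - 1)

# An eight-bit word gives roughly 0.4dB per increment;
# ten bits give about 0.1dB.
```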
One other thing worth mentioning at this point is that a fader's resolution — whether it's a digital or analogue fader — changes with fader position. The fader law is logarithmic, so a small physical change of position around the unity-gain mark (usually about 75 percent of the way to the top) changes the signal level by a fraction of a decibel, whereas the same physical movement towards the bottom of the fader might change the level by several decibels. This is why it is important to mix with the faders close to the unity-gain mark, since that is where the best resolution and control are to be found.
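The shape of such a law can be sketched with a simple curve. This cubic taper is purely illustrative (it is not any console's real law, and it does not place unity at exactly 75 percent of travel), but it shows how the same physical movement produces very different level changes at different positions:

```python
def fader_db(pos):
    """Illustrative cubic fader law: +10 dB at the top of travel
    (pos = 1.0), about -100 dB at the bottom (pos = 0.0).
    Not a real console's taper -- a sketch of the shape only."""
    return 10.0 - 110.0 * (1.0 - pos) ** 3

# The same 2% of physical travel changes the level by very
# different amounts depending on where the knob sits:
near_top = fader_db(0.92) - fader_db(0.90)      # a fraction of a dB
near_bottom = fader_db(0.12) - fader_db(0.10)   # several dB
```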
Going back to the touch-sensitive fader, which was first developed for fader automation systems: this has also become popular in digital consoles that use assignable controls. Touching a fader allocates the assignable controls to the corresponding channel, obviating the need to press a channel-select button and, in theory at least, making the desk quicker and more intuitive to operate. However, if you are in the habit of keeping a hand on one fader while adjusting another, this touch-sensitive approach can be more trouble than it is worth. Fortunately, most consoles allow the touch-sensitive fader function to be disabled in the console's configuration parameters.