When is the professional standard signal level no longer standard? When you want to connect to digital equipment! Hugh Robjohns explains the mysteries of exchanging signals between analogue and digital domains.
A common inquiry from Sound On Sound readers concerns optimising the exchange of signals between an analogue console and an A-D converter or soundcard. The classic problem is that recorded digital signal levels are pitifully low, while ludicrously high replay levels from the digital machine stress or overload the console returns.
If you have found yourself in this situation, take comfort from the fact that it afflicts us all. In fact, I recently had to realign the Apogee converters in my own location-recording setup to overcome this very problem. In this article, I'll start off by looking at the existing standards for analogue signal levels and how they came about, before examining the ways of addressing the problems that result.
The Great Analogue Standard
There is a proverbial saying in the audio industry along the lines of "We like to have standards... that's why there are so many of them!" As far as analogue signal levels are concerned, though, there are only two to worry about: +4dBu and -10dBV, respectively the professional and semi-professional standards. But what do these levels actually represent?
The reference point in any decibel scale is always 0dB (see the box on decibels) and a suffix letter is used to denote the chosen standard. The earliest standard was set in telephone engineering, when input and output impedances on all equipment were standardised at 600Ω. It became the norm to use 1 milliwatt as the reference power level for speech signals over telephone lines. This, when translated to an RMS (averaged) voltage, measures 0.775V across 600Ω (Power = Voltage² / Resistance). The letter 'm' was used to denote the 1 milliwatt reference, and hence the reference point was referred to as 0dBm. Professional audio eventually settled on +4dBm as the standard calibration level, and this is the level indicated by the zero mark on a VU meter: 0VU = +4dBm = 1.228V RMS. The audio industry no longer uses 0dBm as its reference standard, but 0dBu. This uses exactly the same reference voltage of 0.775V, but is no longer tied to any particular impedance.
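As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python (a sketch for illustration only; the variable names are mine):

```python
import math

# 0dBm reference: 1 milliwatt dissipated in 600 ohms
# P = V^2 / R, so V = sqrt(P * R)
v_ref = math.sqrt(0.001 * 600)      # ~0.775V RMS

# +4dBm calibration level: 4dB above the 0.775V reference
v_cal = v_ref * 10 ** (4 / 20)      # ~1.228V RMS, ie. 0VU on the meter

print(round(v_ref, 3), round(v_cal, 3))   # 0.775 1.228
```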
What does all this mean in practice? Well, if you have a mixing console with a test oscillator function producing a tone showing 0VU on the output meters, the main outputs should measure +4dBu, or 1.228VRMS. The vast majority of professional equipment expects this as the nominal signal level, which means we can align input and output levels to exhibit unity gain throughout a signal chain. In other words, you can pass signals between equipment and know that you won't overload anything or disappear into the noise floor.
For the sake of completeness, the semi-pro level standard of -10dBV was adopted for unbalanced signal interfaces using much simpler (or cheaper) circuitry. The reference point here is a 1 Volt RMS signal instead of 0.775V (hence the V in dBV). The standard -10dBV level equates to 316mVRMS which is about a quarter of the voltage of the professional +4dBu reference level, or almost 12dB lower.
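The gap between the two standards is easy to verify with the same arithmetic (again, a Python sketch; the figures match those quoted above):

```python
import math

v_semi = 1.0 * 10 ** (-10 / 20)     # -10dBV: ~0.316V RMS (316mV)
v_pro = 0.775 * 10 ** (4 / 20)      # +4dBu: ~1.228V RMS

# Difference between the two reference levels in dB
gap_db = 20 * math.log10(v_pro / v_semi)
print(round(v_semi, 3), round(gap_db, 1))   # 0.316 11.8 -- 'almost 12dB'
```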
One of the 'features' many engineers like about analogue systems is that if you drive them hard, the quality of the sound changes in an interesting way: in general, analogue systems overload progressively, the distortion artefacts building in proportion to the signal level. However, this is really only a special effect and, normally, we try to avoid overload distortion. To that end, system designers create a safety buffer called 'headroom' which allows signal peaks higher than the nominal level to be accommodated without distortion.
The ear perceives level changes in a logarithmic, rather than linear, fashion. Consequently, it makes a lot more sense to measure audio signals using a logarithmic scale, rather than as straight signal voltages. In the case of professional audio signals, decibels and signal voltages are related by the formula:
Signal level in dBu = 20 x log (signal voltage / 0.775)
Semi-pro levels can be calculated with the following formula:
Signal level in dBV = 20 x log (signal voltage)
These might look like complicated equations, but any half-decent calculator will be able to cope, and using decibels makes life a lot simpler. If you need convincing, consider a -50dBu microphone signal (0.00245VRMS) which is made 10dB louder (0.00775VRMS). Raising a +4dBu line level by the same 10dB produces a similar perceived increase in level, but the voltages change from 1.228VRMS to 3.884VRMS. I know which I find easier to relate to!
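Both formulas translate directly into code. This Python sketch reproduces the worked example above (the function names are my own, for illustration):

```python
import math

def volts_to_dbu(v):
    # Signal level in dBu = 20 x log (signal voltage / 0.775)
    return 20 * math.log10(v / 0.775)

def dbu_to_volts(dbu):
    # The same formula rearranged to recover the voltage
    return 0.775 * 10 ** (dbu / 20)

print(round(dbu_to_volts(-50), 5))   # 0.00245 -- the mic-level example
print(round(dbu_to_volts(-40), 5))   # 0.00775 -- 10dB louder
print(round(dbu_to_volts(14), 3))    # 3.884 -- +4dBu raised by 10dB
```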
In a typical mix, your music peaks might illuminate the yellow LEDs on your mixer's bar-graph meters at +10VU (+14dBu), but the VU is an averaging meter and thus barely responds to fast transients at all, so the true peaks will be very much higher than the meters suggest. However, the overload characteristics of most analogue systems mean that any distortion artefacts will probably go unnoticed.
Even if your console meters are 'peak-reading' types they will almost certainly have a short but finite 'integration time', which means they will still under-read on brief transients, typically by around 6dB. For music hitting the +10 LED on peak-reading meters, the true signal peaks will be reaching the +20dBu mark (+14dBu plus 6dB overshoot), which is only 2dB below clipping in a typical system (or maybe even 2dB above in a poor one!).
Enter The Digital Converter
In the world of digital audio, overloads are not musically interesting: they are horrid, unmusical and unpleasant things that really must be avoided. Since digital systems cannot record audio of greater amplitude than the maximum quantising level, engineers decided to define the digital signal reference point as this maximum. The top of the digital meter scale is thus 0dBFS, the 'FS' standing for 'full scale'.
As on analogue systems, it makes sense to build in some form of operational headroom to cater for the odd loud peak. This, however, is where all the confusion and problems occur. Since analogue equipment typically provides 18dB or more of headroom, it seems sensible to configure digital systems in the same way. After a little trial and error, the Americans adopted a standard of setting the nominal analogue level (+4dBu) to equate with -16dBFS in the digital system, thereby accommodating peaks of up to +20dBu (ie. 0dBu equals -20dBFS). In Europe we have standardised on 0dBu equating to -18dBFS, thereby tolerating peaks of up to +18dBu.
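Because each standard simply offsets the analogue dBu figure by a fixed headroom allowance, the mapping is a one-line calculation. A Python sketch (the `headroom` parameter is my own name for the 0dBu-to-0dBFS offset):

```python
def dbu_to_dbfs(dbu, headroom=18):
    # European alignment: 0dBu = -18dBFS; use headroom=20 for the US standard
    return dbu - headroom

print(dbu_to_dbfs(4, headroom=20))    # -16: +4dBu under the US alignment
print(dbu_to_dbfs(18, headroom=18))   # 0: an +18dBu peak just reaches full scale
```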
This artificially created headroom provides a reasonable degree of protection against transient overloads, but will generally mean that the average level of material recorded into a digital system will be down around -12dBFS. This is not a problem as far as the quality of the recording is concerned, particularly if you are working with a 20- or 24-bit format, since the noise floor will still be at least 84dB below the nominal programme level, a figure which is far better than that achieved by any analogue recording system. In effect, operating in this way simply configures the digital system to have similar characteristics and performance to an analogue one.
Building in this kind of allowance for headroom is essential when recording unpredictable material which may well contain unexpected transient peaks of substantial level. However, it is totally unnecessary when working with controlled, post-produced material which has benefited from compression or limiting to tame transient peaks. With peak levels ironed out, the requirement for large amounts of headroom is removed and, following the long tradition of 'louder is better', it makes sense to adjust the overall level of the music to peak as closely as possible to the maximum level 0dBFS.
Typically, as part of the mastering process, music will be 'normalised' to bring the peak levels up to the maximum possible level. Indeed, this is a mandatory process since the 'Red Book' specification for audio CDs insists that material should peak above -4dBFS. It is common practice for pop music to be mastered such that the levels reach full scale (0dBFS) frequently throughout most tracks.
The problem for the project studio is that the vast majority of A-D converters, both stand-alone units and those integrated into computer soundcards, are aligned to optimise headroom, according to the international standards. As we have seen, a +4dBu analogue input will typically produce a -16dBFS digital signal, which means a well-balanced mix with sensibly controlled peak levels may be consistently reaching the +12dBu (+8VU) mark on the console, but will only achieve peak digital levels of about -8dBFS. Directly recording this to CD or DAT, without normalisation, will result in a very quiet recording compared to commercial CD releases.
Trying to overcome this problem by cranking up the output level from the console rarely helps, because of insufficient headroom in the analogue electronics: most budget consoles start to sound quite strained with the meters hitting the end of the scale all the time! However, one handy workaround, if your A-D converter is suitably equipped, is to connect the +4dBu output of the console to -10dBV inputs on the A-D converter. The 12dB difference in sensitivity will allow full-scale digital signals to be generated with console outputs of just +8dBu (+4VU).
Time To Realign
If the majority of the digital recordings you make are well-controlled mixes which need minimal headroom, the best solution is to recalibrate the A-D converter's input sensitivity. You can do this properly only if you have an A-D converter which will allow you to adjust its input level (few soundcards will; if your A-D cannot be adjusted, see the box on Line Amps for an alternative solution), an accurate calibration tone source, and a digital meter, although it should be possible to achieve excellent results using the meters already provided on the console and converter (or perhaps on a digital recorder with a 'margin indicator'). Connect the tone source to the mixer, the mixer output to the A-D input, and the A-D output to the digital meter or recorder.
You then need to identify the input-level adjustment controls on the A-D converter: make sure you know which tweaker corresponds to which channel, and don't muddle up the D-A output adjusters with those for the A-D inputs! Using a test oscillator or a synth set to generate a constant sine-wave tone of about 1kHz (two octaves above middle C), adjust the console levels to produce an output at your reference level, say 0VU on the meters, which should be +4dBu at the output. It is important that exactly the same level is present on both left and right outputs, so be very careful about the alignment of dual output faders, centre-click pan pots, and even the console meters! Any alignment errors will mean that all your digital recordings will be offset, so investing in or borrowing a good audio meter is a sensible precaution. I use the Terrasonde Audio Toolbox, which is an ideal tool for this kind of job.
At this point, your A-D converter's meters will probably be reading something close to -16dBFS. If you have a recorder with a margin indicator, or some other high-quality metering system, you should be able to confirm the same level on both channels to within 0.1dB or so. You now have to decide on the degree of realignment you wish to adopt. If you find your digital recordings are consistently under-recorded by, say, 8dB, then you can increase the sensitivity of the A-D converter by that amount. In this example you would tweak the input level controls such that your +4dBu calibration tone reads -8dBFS. Don't go mad with your headroom reduction, though: I would advise leaving a couple of extra dBs of headroom, just in case. After all, a digital 'over' is a pretty horrid thing, whereas peaking 2dB below zero is perfectly acceptable and comparable to many commercial CDs.
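The arithmetic of that decision can be sketched in Python (the function name and the 2dB default safety margin are my own illustration, not a standard):

```python
def realigned_tone_reading(current_dbfs=-16, shortfall_db=8, safety_db=2):
    # Raise the converter sensitivity by the measured shortfall, but hold
    # back a couple of dB of extra headroom against the odd digital 'over'
    return current_dbfs + shortfall_db - safety_db

# For recordings consistently 8dB under-recorded on a standard-aligned converter:
print(realigned_tone_reading())   # -10: where the +4dBu tone should now read
```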
I have realigned my own location-recording system (a Mackie 1402VLZ mixer and Apogee PSX100 converter) so that a 0VU (+4dBu) signal from the console equates with -12dBFS on the Apogee. This calibration allows a reasonable amount of headroom for the unexpected and means that the yellow +10VU LEDs on the Mackie meters can light occasionally without overloading the converter, while the digital recording typically peaks at around -4dBFS, which I feel comfortable with. It also means the clients can take home a CD-R of the session which sounds similar in level to a commercial disc. During editing and mastering the peak level can be optimised for the final production CDs if required.
Cooling The Replay
The other side of this analogue-digital conversion problem concerns replay levels. A digital signal peaking at 0dBFS, whether from a commercial CD or a normalised track of your own, will generate analogue signal levels of around +20dBu, if the D-A converter is aligned to the conventional +4dBu = -16dBFS standard. That is a stunningly loud signal by anyone's standards, and more than a lot of equipment can tolerate! For example, the maximum level accepted by the tape returns on my little Mackie desk is only +16dBu, although the channel line inputs claim to handle +22dBu.
Line Amplifier Solution
If the calibration of your A-D or D-A converter is not adjustable, one alternative solution is to invest in an adjustable line amplifier; many of the professional balanced to semi-pro unbalanced converter units would perform this function well. This is not a particularly cheap option, though, as a decent unit will set you back over £100. However, once installed between the console output and the digital converter, the alignment between the analogue console level and the converted digital levels can be set to suit your requirements, boosting the peak mixer output to reach full scale on the A-D input, and reducing the D-A output to something the console finds easier to handle.
To adjust the outputs of a D-A converter you will require a tweakable D-A unit, a test CD of some kind with 30 seconds or more of calibration tone at a known level, and some form of accurate analogue metering. Connect the digital output of the CD player to the D-A and the output of the D-A to the console or other metering device. (You could calibrate the D-A output level by using the analogue inputs of a CD-R or DAT recorder and observing audio levels on its bar-graph metering or margin display. However, unless the recorder has calibrated input levels you will only be able to adjust relative levels, before and after tweaking, rather than setting a precise reference level.)
But what should the D-A outputs be set to? Well, if you anticipate working with commercially recorded music or other material with abundant full-scale peaks, a commonly used alignment sets a digital test signal of -8dBFS to align with +4dBu or 0VU. If your test disc does not have a tone at -8dBFS, other digital levels can be translated pro rata: -12dBFS = 0dBu = -4VU, 0dBFS = +12dBu = +8VU, and so on.
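The pro-rata translation works because all three scales differ only by fixed offsets, so the whole table falls out of one anchor point. A short Python sketch (function names are mine):

```python
def dbfs_to_dbu(dbfs, ref_dbfs=-8, ref_dbu=4):
    # Alignment anchor: -8dBFS = +4dBu (= 0VU)
    return dbfs - ref_dbfs + ref_dbu

def dbu_to_vu(dbu):
    # The VU scale's zero mark sits at +4dBu
    return dbu - 4

for dbfs in (-12, -8, 0):
    dbu = dbfs_to_dbu(dbfs)
    print(f"{dbfs}dBFS = {dbu}dBu = {dbu_to_vu(dbu)}VU")
```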
Alternatively, you might prefer to establish a unity gain path through the A-D and D-A combination so that a given level of analogue signal input to the A-D comes back at exactly the same level from the D-A. This is my preferred option, and my equipment is aligned so that a +4dBu input corresponds to -12dBFS in the digital path, which in turn generates +4dBu at the analogue output. Full-scale digital output signals require and generate +16dBu, which is perfectly manageable, and typical peaks generally sit around the +12dBu, -4dBFS, or +8VU marks.
A Level Playing Field
The established alignment levels of digital systems, with 0dBu equating to -20 or -18dBFS, were conceived with the best of intentions: basically, to endow digital audio equipment with headroom comparable to analogue systems. However, the ubiquity of consistent near-peak levels on commercial digital formats has caused considerable confusion and inconvenience for many users, not just in project and home studios, but in professional circles too. For anyone recording onto CD-R, 20dB of headroom is an unnecessary luxury; indeed, it is positively disadvantageous!
It surely doesn't require a brain the size of a planet to realise that digital systems employed in live recording require a totally different alignment with analogue equipment than those used in post-production applications. So why have so few manufacturers addressed this issue with switchable gain structures or adjustable sensitivities? Most professional converters are adjustable, as are many of the better semi-pro units, but as far as I am aware no soundcards with onboard converters are adjustable at all.
All that would be required is an 'operating level' switch to reset the alignment between analogue and digital signal levels from the standard 0dBu = -20 or -18dBFS, to something more like 0dBu = -12 or -10dBFS. Let's hope some manufacturers read this and take note!