
Q & A April 2002

Your Technical Questions Answered
By Various

Phase shown on waveforms.

Q. Why is phase important?

I'm an experienced musician who's just beginning to understand recording techniques and acoustic treatment. On page 160 of your December 2001 issue there was a box called "Absolute Phase Is Important", and this has prompted a few questions.

How does one reverse the phase of a microphone as suggested in the article? Is this done by resoldering connections, or are there any quality mics you can suggest that come with a phase-reverse switch? Phase testers were also mentioned. What are they? How do they work? Where can they be purchased?

Many of us use a phaser effect, and I assume that phasers basically alter the phase of a sound source over a given time period. Perhaps you could elaborate on how they work?

Finally, I use a PC to store all my recorded audio information as WAV files. Is it possible to uniformly alter the phase of a piece of digitally recorded audio so it stays static at the same place? I have been trying in Sound Forge, but haven't found a way to successfully do it yet.

Jonathan Sammeroff

Paul White and Sam Inglis respond: As you probably know, sound consists of pressure waves in the atmosphere. The function of a microphone is to translate these pressure waves into changes in the voltage of an electrical signal. Absolute phase usually refers to instrument miking where a positive increase in air pressure translates to a positive increase in voltage at the microphone output. If the mic is wired out of phase, or some other phase inversion is introduced, the output voltage will go negative as the air pressure becomes positive, and in the context of some percussive sounds, such as kick drums, there can be an audible difference. Also, if you have two very similar signals (such as are obtained by close-miking the same source with two different microphones) which happen to be out of phase, a lot of cancellation will occur, and this is usually undesirable. The classic case is when you close-mic both the top and bottom of a snare drum: here, you will get two very similar signals, but one will effectively be phase-reversed with respect to the other, so it's standard practice to reverse the phase on the bottom microphone.
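
The arithmetic behind polarity inversion and cancellation can be sketched in a few lines of Python (an illustration only, not from the original article; the tone generator and sample rate are arbitrary):

```python
import math

def sine(freq, length, rate=1000):
    """A test tone as a list of sample values in the range -1 to 1."""
    return [math.sin(2.0 * math.pi * freq * n / rate) for n in range(length)]

def invert(signal):
    """Polarity ('phase') inversion: every positive sample becomes negative."""
    return [-s for s in signal]

def mix(a, b):
    """Sum two signals sample by sample, as a mixer bus does."""
    return [x + y for x, y in zip(a, b)]

tone = sine(50, 100)
cancelled = mix(tone, invert(tone))  # identical signals, one inverted
# 'cancelled' is silence: complete cancellation
```

This is the worst case described above: two identical close-miked signals, one of them phase-reversed, summing to nothing.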

Any balanced mic can be reversed in phase by making up a cable with the hot and cold conductors (the two inner wires) swapped over at one end of the lead. Most mixers, and some mic preamps, also have mic phase-invert buttons that will switch the phase without requiring any special cables. Most cable testers will check that your leads are wired correctly (without crossed-over hot and cold wires that would cause a phase reversal), but devices that can check acoustic phase from microphone to loudspeaker tend to be more complex and rather more expensive. Check with Canford Audio, as they carry this type of test equipment.

A phaser effect combines a signal with a delayed version of itself, using a low-frequency oscillator to modulate the delay time. As the length of the delay time is varied, cancellation occurs at different frequencies, and the result is a type of notch filter where the notch frequency is constantly moving, introducing a sense of movement into the sound.
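That description can be put directly into code. The Python sketch below follows the delay-plus-LFO recipe above (integer sample delays only, and the rate and depth values are arbitrary; real phasers typically use all-pass filter stages for smoother notches):

```python
import math

def phaser(signal, rate=8000, max_delay=20, lfo_freq=0.5):
    """Mix each sample with a delayed copy of itself, with a low-frequency
    oscillator (LFO) sweeping the delay time, so the comb-filter notches
    move continuously up and down the spectrum."""
    out = []
    for n, x in enumerate(signal):
        # LFO sweeps the delay between 0 and max_delay samples.
        lfo = 0.5 * (1.0 + math.sin(2.0 * math.pi * lfo_freq * n / rate))
        d = int(lfo * max_delay)
        delayed = signal[n - d] if n >= d else 0.0
        out.append(0.5 * (x + delayed))  # equal blend of dry and delayed
    return out

# One second of a 440Hz tone at a (hypothetical) 8kHz sample rate.
tone = [math.sin(2.0 * math.pi * 440.0 * n / 8000) for n in range(8000)]
swept = phaser(tone)
```

As the LFO moves the delay, different frequencies in the dry/delayed blend cancel in turn, which is the moving-notch effect described above.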

You can't uniformly alter the phase of a whole piece of music, as phase relates to frequency, so unless your music comprises a single tone, adding delay (which is how some phaser effects work) will cause some frequencies to add and others to cancel. However, I suspect this is the effect you want, in which case you can get it by copying the audio to be treated, then moving it slightly ahead or behind the original audio, usually by just a few milliseconds. The two parts summed together will exhibit the static phase effect you describe.
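The copy-and-offset trick above amounts to a fixed comb filter, which a short Python sketch can demonstrate (hypothetical sample rate and offset, chosen so the cancellation is easy to verify):

```python
import math

def offset_and_sum(signal, offset):
    """Sum the signal with a copy slid 'offset' samples later: a fixed
    comb filter that cancels any frequency whose half-period equals the offset."""
    return [x + (signal[n - offset] if n >= offset else 0.0)
            for n, x in enumerate(signal)]

# A 50Hz sine at a (hypothetical) 1kHz sample rate has a 20ms period.
# A 10-sample (10ms) offset delays it by exactly half a period, so it cancels.
tone = [math.sin(2.0 * math.pi * 50.0 * n / 1000) for n in range(200)]
combed = offset_and_sum(tone, 10)
```

A frequency whose half-period matches the offset disappears, while one whose full period matches it is reinforced, giving the static phase colour described above.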

Q. What orchestral sample libraries are available for HALion?

I recently bought Steinberg's software sampler HALion to replace Cubase's good (but limited) Universal Sound Module for orchestral work. However, I'm having difficulty sourcing suitable samples for this, and wondered if you could recommend some alternatives in the price region of £100 to £300, and how I might be able to hear them before I buy?

Bill Taylor

Orchestra Section Strings sample library.

Assistant Editor Mark Wherry replies: HALion has the ability to import a wide variety of sample formats, including Akai and Emu CD-ROMs, and SoundFonts — so any of the orchestral libraries in these formats should work without a problem. GigaStudio is a highly regarded platform for sample-based orchestral work and has many fine libraries available, though most are priced at the higher end of the market. Giga libraries can be imported into HALion (from version 1.1), although, as Giga import isn't 100 percent accurate right now, you might be best sticking to more conventional libraries that place less demand on your computer.

Q & A Advanced Orchestra CD artwork.

As for library recommendations, for orchestral work on a budget you could do worse than Emu's two downloadable volumes of Orchestral SoundFont banks, available for $39.95 each. These provide a good starting point and are rumoured to be based on the same sound library as Emu's Virtuoso 2000 module. There are also many good Orchestral Implants SoundFont libraries available from Sonic Implants, and these are also reasonably priced.

If you want something more professional, both Peter Siedlaczek and Miroslav Vitous offer junior versions of their larger (and more expensive) orchestral libraries as Akai CD-ROMs. At £99, Peter Siedlaczek's Advanced Orchestra Compact might be a bargain, though many people regard it as being a little too compact. But at the upper limit of your budget, Miroslav Vitous' Mini Library offers a good selection of high-quality bread-and-butter sounds for £299, which is around a tenth of the price of the full library. Both are available from Time & Space.

Hearing libraries before you buy can be tricky, but all of those mentioned here have MP3 demo songs on their respective web sites.

Q. Where can I get Windows-based ASIO drivers for an Audiomedia III card?

Digidesign Audiomedia III card.

I recently bought a Digidesign Audiomedia III card second-hand from your magazine's Readers' Ads, and it didn't come with any ASIO drivers for my Windows-based PC. The card was highly recommended by a friend of mine (combined with Pro Tools), and the sound quality is very good. However, I've got a problem with latency and can't find any ASIO drivers to bring this down to acceptable levels. With Cubase 3.7 I can only use the default settings of 750ms (ASIO DirectX/ASIO Multimedia), which is far too high for any serious recording. How can I reduce the latency (apparently the AMIII is capable of latencies of less than 5ms), and what ASIO driver should I install to use the card's full potential?

Another problem I'm experiencing is that when recording electric guitar through the soundcard into Pro Tools, I get nasty digital clicks at the beginning of every recording. Is there any way I can eliminate these? I'm not very experienced in setting up studios, and everything I read regarding Pro Tools and the Audiomedia III card seems to be written for the Mac platform, and not for PCs.

Bernd Krueper

PC Music specialist Martin Walker replies: The Audiomedia III is now quite elderly as soundcards go, having been introduced by Digidesign in 1996, and features 18-bit converters, although internally it has a 24-bit data path. I've only mentioned it once in the pages of SOS, in my first ever (May 1997) PC Notes column, where I published details of a way to cure inexplicable clicks by disabling PCI Burst Mode in your motherboard BIOS, should this setting be available. At the time, Digidesign were finalising a chip upgrade addressing the problem, so hopefully you have one of the later cards with this modification. Digidesign mention various other known incompatibilities on their web site, including AMD processors, VIA chipsets, and various Hewlett Packard PCs, which isn't encouraging.

I eventually found the latest Wave (MME) drivers on Digidesign's web site, including version 1.7 for Windows 98/ME, dated January 2001, which supports 16- and 24-bit recording and playback at sample rates of up to 48kHz, in addition to other drivers for Windows NT, 2000, and even the announcement of an XP beta test program to support the AMIII cards. Various cures for crackling during playback were implemented in driver development, so make sure you have the latest versions. These will still give high latency, although you may be able to tweak the ASIO Multimedia settings inside Cubase 3.7 to bring the default 750ms down a little.

However, there was absolutely no mention of ASIO drivers, and Digidesign UK subsequently confirmed that none were ever written by them, or are now likely to be. Because the AMIII was released pre-ASIO, Digidesign developed the DAE (Digidesign Audio Engine) and relied on the sequencer developers to add support for it. Apparently, Steinberg did originally write an ASIO driver that supported this, and Emagic supported the DAE in Logic Audio up to version 3.5 on the PC, but since the DAE apparently wasn't updated by Digidesign to support Windows 98, support was dropped in Logic version 4.

So, sadly, although the card might be capable of latencies down to 5ms, you won't find any modern audio application that can use anything other than the high-latency MME drivers. This is a cautionary tale for any musician buying a soundcard, and particularly a second-hand one, so make your decision based on what drivers you can confirm are available, to save yourself regrets later on.

Q. Are there really reverb and synth plug-ins supplied with Mac OS X?

I'm running Mac OS 10.1.2 and use SparkME, but there's no sign of the reverb and synth plug-ins anywhere. What's going on?

Arum Devereux

Assistant Editor Mark Wherry replies: The short answer is yes, there's a reverb and a synthesizer supplied with Mac OS X. The slightly longer answer is that developers have to provide support in their applications to take advantage of these features. And, since the MIDI and audio APIs (Application Programming Interfaces), collectively known as the Core Audio services, are some of the newest elements of Mac OS X, it's going to take a while for developers to fully support them.

The Core Audio services provide a plug-in architecture known as Audio Units, which isn't a million miles away from DirectX plug-ins on Windows. Audio Units can be used for a variety of applications, including software effects and instruments, and indeed, the reverb and DLS/SoundFont player instrument Apple supply with Mac OS X are both Audio Units.

The advantage of Audio Units, like DirectX plug-ins, is that any musical application running on Mac OS X can use the same pool of global plug-ins, provided it was developed to support Audio Units. This saves developers from having to write separate versions of their plug-ins for multiple architectures such as VST, MAS, RTAS, and so on.

Q. How can I isolate the vocals from a stereo mix?

Do you know of any software or hardware that can remove a vocal from a track but allow you to save the vocal? There are numerous software packages that remove vocals from a track, but those are the parts I want.

Simon Astbury

Senior Assistant Editor Matt Bell replies: This question and variants on it come up time and time again here at SOS, and also on music technology discussion forums all over the Internet, presumably because budding remixers are forever coming to the conclusion that it would be great if there were a way of treating the finished stereo mixes of songs on CD and coming up with the isolated constituents of the original multitrack, thus making remixing a doddle. The situation is further complicated by the ready availability of various hardware and software 'vocal removers' or 'vocal cancellers', which leads people to assume that if you can remove the vocal from a track, there must be some easy way of doing the opposite, ie. removing the backing track and keeping the vocals.

Sadly, the truth is that there's no easy way to do this. To understand why not, it's helpful to learn how vocal cancellation — itself a very hit-and-miss technology — works. Believe it or not (given that so much of this month's Q&A is already given over to the topic), it's all to do with signal phase!

A stereo signal consists of two channels, left and right, and most finished stereo mixes contain various signals, mixed so they are present in different proportions in both channels. A percussion part panned hard left in the final mix, for example, will be present 100 percent in the left channel and not at all in the right. A guitar overdub panned right (but not hard right) will be present in both channels, but at a higher level in the right channel than it is in the left. And a lead vocal, which most producers these days pan dead centre, will be equally present in both channels. When we listen to the left and right signals together from CD, the spread of signal proportions in both channels produces a result which sounds to us as though the different instruments are playing from different places in the stereo sound stage.
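
The way a single part ends up in different proportions in the two channels can be sketched with a simple linear pan law (an illustration only; real consoles typically use constant-power pan laws, and the function name here is hypothetical):

```python
def pan(sample, position):
    """Split a mono sample between left and right channels. 'position' runs
    from -1.0 (hard left) through 0.0 (dead centre) to +1.0 (hard right)."""
    left_gain = (1.0 - position) / 2.0
    right_gain = (1.0 + position) / 2.0
    return sample * left_gain, sample * right_gain

# Hard-left percussion: all of it in the left channel, none in the right.
# Centre vocal: equal amounts in both channels.
# Partly right guitar: present in both, but louder on the right.
```

Summing many parts panned this way into two channels is exactly what produces the stereo sound stage described above.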

If you place one of the channels in a stereo mix out of phase (ie. reverse the polarity of the signal) and add it to the other channel, anything present equally in both channels (ie. panned centrally) will cancel out — a technique sometimes known as phase cancellation. You can try this for yourself if you have a mixer anywhere which offers a phase-reversal function on each channel (many large analogue mixers have this facility, as do some modern software sequencers and most recent stand-alone digital multitrackers such as Roland's popular VS-series, although the software phase switch on the Roland VS1680 and 1880 doesn't exactly advertise its presence — see pic, right). Simply pan both the left and right signals to dead centre (thus adding them on top of one another), and reverse the phase of one of them — it doesn't matter which. The resulting mono signal will lack all the items that were panned centrally in the original mix. Sometimes, the results can be dramatic. Old recordings from the early days of stereo sometimes featured the rhythm section panned dead centre and overdubs (vocals, say, or guitar or keyboard) panned off-centre. In these cases the vocals or guitar will remain following phase cancellation, and the drums and bass will disappear completely, allowing you to appreciate details you never knew were there in the parts that remain. In recent recordings, the tendency has usually been for lead vocals to be panned centrally, so with these recordings, it's the lead that will cancel from the mix, leaving (in theory) the backing. This is how most vocal-cancellation techniques work.
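
The cancellation trick itself is a one-liner. In this Python sketch (hypothetical sample values, not from any real recording), the mix has a vocal panned dead centre and a guitar panned hard left; subtracting one channel from the other removes the vocal and leaves the guitar:

```python
def cancel_centre(left, right):
    """Add the left channel to a polarity-inverted right channel: whatever
    is equal in both channels (panned dead centre) cancels completely."""
    return [l - r for l, r in zip(left, right)]

# Toy stereo mix: vocal dead centre, guitar hard left.
vocal = [0.5, -0.2, 0.7, -0.4]
guitar = [0.3, 0.1, -0.6, 0.2]
left_ch = [v + g for v, g in zip(vocal, guitar)]
right_ch = vocal[:]  # the guitar is absent from the right channel

mono = cancel_centre(left_ch, right_ch)  # the centre-panned vocal vanishes
```

Note that the result is a single mono signal containing everything that was not panned centrally, which is why the technique removes the vocal but cannot isolate it.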

So, doesn't this mean that the success (or failure) of vocal cancelling depends on whether or not the original vocal was panned centrally? Well, yes — which is why vocal cancelling is such a hit-and-miss technique! What's more, although most vocals are panned centrally in today's stereo productions, backing vocals are often panned off-centre, and will therefore not cancel with the lead vocal. Furthermore, nearly all lead vocals in modern productions have some effects applied to them. If these are stereo effects and therefore present unequally in both channels (as is the case in a stereo reverb), the dry signal may cancel, but the processed signal will not, leaving a 'reverb shadow' of the lead vocal in the phase-cancelled signal. No matter how much you pay for vocal-cancelling software or hardware, there's nothing that can be done if the original vocal was not mixed in such a way as to allow complete cancellation.

In addition, although you can cancel anything panned centrally in this way, you can't isolate what you've cancelled to the exclusion of everything else. Many people, when learning of phase-cancelling techniques, assume that if you can cancel, say, a vocal from a mix, then if you take the resulting vocal-less signal and reverse the phase of that and add it back to the original stereo mix, the backing will cancel and leave you with the vocal. This is hardly ever workable in practice, however, because a phase-cancelled signal is always mono, and if the original backing mix is in stereo (as it nearly always is), you can never get the phase-cancelled mono backing on top of the stereo mix in the right proportions to completely cancel it out.

Another suggestion that is often made when encountering phase-cancelling techniques is that of dividing a stereo mix into its component sum and difference signals, which you can do with a Mid and Side matrix. However, isolating the 'Mid' component of any given stereo mix won't give you just the centrally panned material to the exclusion of everything else — it's simply the mono signal obtained by panning the left and right signals to centre and reducing the overall level by 3dB. So, if an original mix consists of a centre-panned lead vocal and an off-centre guitar overdub, the Mid signal constituent of the mix is not the isolated lead vocal, but a mono signal with the vocal at one level, and the guitar at a slightly lower level. You may be able to emphasise the vocal at the expense of the guitar with EQ, but you'll never remove the guitar altogether. In a busy mix with several instruments playing at once, deriving the Mid component of a stereo mix won't get you very much nearer to an isolated vocal than you are with the source stereo mix!
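
A small Python sketch makes the point (hypothetical sample values; the mix places a vocal dead centre and a guitar partly right):

```python
def mid_side(left, right):
    """Mid/Side matrix: Mid = (L + R) / 2, Side = (L - R) / 2. Mid is the
    whole mix folded to mono, not the isolated centre-panned content."""
    mid = [0.5 * (l + r) for l, r in zip(left, right)]
    side = [0.5 * (l - r) for l, r in zip(left, right)]
    return mid, side

# Centre-panned vocal plus an off-centre guitar (louder on the right).
vocal = [0.4, -0.3, 0.6]
guitar = [0.2, 0.5, -0.1]
left_ch = [v + 0.3 * g for v, g in zip(vocal, guitar)]
right_ch = [v + 0.7 * g for v, g in zip(vocal, guitar)]

mid, side = mid_side(left_ch, right_ch)
# Mid still contains the guitar, at half its original level: vocal + 0.5*guitar
```

As the arithmetic shows, the off-centre guitar survives in the Mid signal at a reduced level, which is exactly why the Mid component is not the isolated vocal.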

Despite this, it's worth pointing out that phase-cancellation techniques can be fascinating for listening to the component parts of mixes, and useful for analysing tracks you admire or are trying to learn to play. If you pan left and right channels to centre, reverse the phase of one of the channels and play around with the level of the phase-reversed channel, different parts of the mix will drop out as differently panned instruments cancel at different settings. Sometimes the relative volume of one component can shift very slightly, but enough to lend a whole new sound to a mix, enabling you to hear parts that have never seemed distinct before. An example might be if a song contains a blistering, overdriven mono guitar sound panned off-centre, which normally swamps much of the rest of the track when you play it back in ordinary stereo. With the faders set unequally, and one channel phase-reversed such that the guitar cancels out, you will hear most of the other constituents of the mix, but minus the guitar, which could make the track sound very different!

However, as a technique for isolating parts from a stereo mix, phase cancellation remains very imprecise, its success or failure dependent entirely on how the original track was mixed. This doesn't mean that it's not worth a try, but it also means that the only sure-fire way to obtain the isolated vocals from a track is to obtain a copy of the original multitrack from the artist or record company — which is, of course, what professional remixers do. Sadly, this is not an option for most of us!

Published April 2002