It may seem like a stupid question, but what do the specifications 'active' and 'passive' denote for monitors? Does 'powered' mean the same as 'active'?
Technical Editor Hugh Robjohns replies: There are essentially three kinds of speaker powering arrangements — passive, powered and active — although the last two terms are often used interchangeably. While the fact that powered and active monitors plug into the mains and passive monitors do not might seem to be the defining factor, the most important difference indicated by these terms relates to the crossover, which splits the signal into the appropriate frequency ranges before they're sent to the individual drivers.
In passive designs, the monitor contains a set of passive components to split the input signal up into the various frequency bands required for each driver. The high-level input signal required to drive the speaker comes from an external power amplifier. The advantage of this approach is the relatively low cost of production and the ability to mix and match speakers and amplifiers to vary performance characteristics slightly — something which is very popular in hi-fi circles.
In the case of powered monitors, the loudspeaker is exactly the same as above, but rather than using an external amplifier, there is a power amplifier on or in the cabinet. The power amp drives the individual drive units via a passive crossover, as before. The advantage of this design is that you have an all-in-one unit, and the speaker cable is now very short, minimising the losses usually suffered with long speaker leads. The disadvantage is extra weight in the speaker cabinet and the loss of flexibility in not being able to mix and match the amp and speaker.
In active monitors, there are multiple power amplifiers built into the loudspeaker cabinet, one for each driver in the cabinet, and these are connected directly to each driver. The frequency band splitting is performed on the line input signal directly prior to the amplifiers. The advantage of this technique is that the crossover can be more sophisticated and precise than is possible with a passive design. It is also easier to match the amplifier power outputs to each driver, and to include optimised protection circuitry. The disadvantage is that a good active design is expensive, and a bad one is a complete waste of space!
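The band-splitting described above can be sketched in code. This is purely an illustrative sketch of the crossover concept, not any particular monitor's design: real active crossovers use much steeper filters (fourth-order Linkwitz-Riley slopes are common), whereas this uses a single first-order split whose two bands sum back to the original signal.

```python
import math

def split_bands(samples, crossover_hz, sample_rate):
    """Split a mono signal into complementary low and high bands.

    A minimal first-order crossover sketch: the low band would feed the
    woofer's amplifier, the high band the tweeter's.
    """
    # One-pole low-pass coefficient (standard RC approximation).
    a = 1.0 - math.exp(-2.0 * math.pi * crossover_hz / sample_rate)
    lows, highs = [], []
    lp = 0.0
    for x in samples:
        lp += a * (x - lp)        # low band
        lows.append(lp)
        highs.append(x - lp)      # high band is simply the complement
    return lows, highs
```

Because the high band is defined as the complement of the low band, the two bands always sum back to the input exactly, which is the defining property of a well-behaved crossover.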
How do you get that double-tracked vocal sound you hear on rap records? Are there any plug-ins I could use?
SOS Forum Post
Reviews Editor Mike Senior replies: Quite simply, you record the same thing twice! Some people compress the lead vocal but not the double-track, which means that accented words sound more prominently double-tracked. Another popular technique is to ride the level of the double-track in the mix, and manually bring it up to accent certain words and phrases. If you want to make a real feature of double-tracking (as opposed to simply thickening the lead vocal), using two vocalists whose voices are quite different in pitch and timbre — male and female, say, or squeaky-and-high and muddy-and-low — is a highly effective technique.
I've not come across any effects processor which can adequately recreate a real double-tracked sound — even the physical-modelling TC Voice One doesn't produce the goods for spoken/rapped vocals as far as I'm concerned. So rather than hunting around for a processor to do the job, it's probably a better use of your time to just knuckle down and re-record the line. If you've recorded multiple takes anyway (for comping purposes) you can usually just use an alternate comp of the lead part, rather than getting the talent in again. Once you get a double-track in there, it smooths out the character of the sound, so the double-track doesn't really have to be quite as word-perfect as the lead — I've found that the 'second-best' take is usually fine as a double-track.
I want to remix some old mono tracks in stereo. Can you offer any advice or suggest any tricks to achieve this?
Technical Editor Hugh Robjohns replies: The first thing to accept is that you cannot create a true stereo (or surround) mix from mono material; you can only give an impression of greater width. In other words, there is nothing you can do to separate instruments and pan them to specific points in the stereo image, as you could if the material had originally been mixed for stereo.
One of the best ways to create fake stereo from mono is to make an M&S (Middle and Sides) stereo mix from the mono source. (The subject of M&S techniques was discussed most recently in the Q&A section of SOS August 2003.) You'll need to treat the mono source as the 'M' element of an M&S stereo matrix, and decode accordingly, having created a fake 'S' component.
This fake 'S' signal is simply the original mono signal, high-pass filtered (to avoid the bass frequencies being offset to one side of the stereo image) and delayed by any amount between about 7 and 100ms, according to taste. The longer the delay, the greater the perceived room size — but I would only recommend delays over about 20ms for orchestral or choral music.
Here's how to do it practically: take the mono signal and route it to both outputs on the mixer equally, or, in other words, pan it to the centre. Take an aux output of the mono signal and route it to a digital delay. Ideally, high-pass filter the signal before the delay. A 12dB-per-octave high-pass filter set at about 150Hz should do the job, but this figure isn't critical and will affect the subjective stereo effect, so experiment. Alternatively, high-pass filter the output from the delay.
You now need to derive two outputs from this delayed and filtered signal, which may be possible directly from the delay processor, if it's of the mono in, stereo out variety, for example, with the same delay dialled into both channels. If not, use a splitter cable or parallel strip in a patch bay to produce two outputs.
Route this pair of filtered and delayed signals back to the mixer, ideally into a stereo channel, or, if not, into two mono channels panned hard left and right. Invert the phase of one of the channels. If using adjacent mono channels, gang the faders together and match the input gains so that the gain is the same on both channels.
Now, with the original mono signal faded up, you should hear the central mono output, and if you gradually fade up the fake 'S' channels, you will perceive an increase in stereo width. The length of delay, the turnover frequency of the high-pass filter and the relative level of mono 'M' and fake 'S' channels will determine the perceived stereo width.
If you overdo the amount of 'S' relative to 'M', then you will generate an ultra-wide stereo effect, and if monitored through a Dolby Pro Logic decoder, this will cause a lot of the signal to appear in the rear speakers.
The advantage of this fake stereo technique is that if you subsequently hit the mono button, the fake 'S' signal cancels itself out and disappears completely, to leave the original mono signal unaffected.
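The whole mixer procedure above can be expressed as a few lines of DSP. This is a sketch under simplifying assumptions (a plain one-pole high-pass rather than the suggested 12dB-per-octave filter, and a sample-accurate delay), but it shows the M&S decode — L = M + S, R = M - S — and why a mono sum cancels the fake 'S' completely:

```python
import math

def fake_stereo(mono, delay_ms=20, hp_hz=150, sample_rate=48000, s_gain=0.5):
    """Derive a fake stereo pair from a mono signal, M&S-style.

    The fake 'S' is the mono signal delayed and high-pass filtered;
    s_gain sets the perceived width (too much and Pro Logic decoders
    will steer signal to the rears, as the text warns).
    """
    d = int(sample_rate * delay_ms / 1000)
    delayed = ([0.0] * d + mono[:len(mono) - d]) if d else list(mono)
    # Simple one-pole high-pass at ~150Hz (the exact figure isn't critical).
    k = 1.0 - math.exp(-2.0 * math.pi * hp_hz / sample_rate)
    s, lp = [], 0.0
    for x in delayed:
        lp += k * (x - lp)            # running low-pass estimate
        s.append(s_gain * (x - lp))   # high-passed, delayed 'S'
    # M&S decode: identical 'S' added in antiphase to the two channels.
    left = [m + si for m, si in zip(mono, s)]
    right = [m - si for m, si in zip(mono, s)]
    return left, right
```

Summing the two outputs gives (M + S) + (M - S) = 2M, which is the mono-compatibility property described above: hit the mono button and the fake 'S' vanishes.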
In my hardware-based setup, with my TC Electronic Triple*C compressor, is it possible to do the kind of limiting on a full mix where you end up with a waveform that is levelled off at the top and bottom, 'brick wall'-style? Also, when recording the co-axial digital output from the Triple*C onto my hi-fi CD recorder, what should the Triple*C's dither setting be if my source is a 24-bit Tascam 788?
SOS Forum Post
Reviews Editor Mike Senior replies: If you're after a waveform which is levelled off at the top and the bottom, then simply clip the output of the processor by cranking up the make-up gain control. To make this slightly less unpleasant on the ear, make sure that the Soft Clip option is on. However, you've got to ask yourself why you're wanting to do this. Although short-term clipping usually doesn't degrade pop music too much, it's really easy to go overboard and do serious damage to your audio if you're not careful. I'd advise doing an un-clipped version as well as the clipped version for safety's sake. You've got to ask yourself just how well your monitoring system compares to the one in a dedicated mastering studio — you should always let your ears be the judge, but remember that your monitors, combined with the room they are in, may not be giving you sufficient information to make an informed decision.
If you're after maximum loudness, then clipping isn't going to get you all the way there in any case. Use the Triple*C's multi-band compressor as well — set an infinity ratio, switch on lookahead, and make the attack time as fast as possible. Adjust the threshold and release time to taste. Make sure that you're aware of what the thresholds of the individual compression bands are doing as well (they're set in the Edit menu), as you might want to limit the different bands with different thresholds. Switch on Soft Clip and set the low level, high level, and make-up gain controls for the desired amount of clipping. Once again, make sure to record an unprocessed version for posterity as well, because you may well overdo things first time, or in case you get access to a dedicated loudness maximiser such as the Waves L2 in the future.
The Triple*C's dithering should be set to 16-bit, because you should set it according to the destination bit-depth, not the source bit-depth. The CD recorder will be 16-bit, so set the dithering to the 16-bit level.
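To see why dither belongs at the destination bit-depth, here is an illustrative sketch of 24-bit-to-16-bit reduction with triangular (TPDF) dither added at the 16-bit LSB before rounding. The function and parameter names are my own, not anything from the Triple*C's menus:

```python
import random

def dither_to_16bit(samples_float, seed=0):
    """Quantise floats in [-1.0, 1.0] to 16-bit integers with TPDF dither.

    The dither is scaled to the 16-bit LSB because that is the step size
    of the destination format, regardless of the source's bit depth.
    """
    rng = random.Random(seed)
    out = []
    for x in samples_float:
        # TPDF dither: sum of two uniforms, spanning +/-1 LSB around zero.
        dither = rng.random() + rng.random() - 1.0
        q = round(x * 32767 + dither)
        out.append(max(-32768, min(32767, q)))  # clamp to 16-bit range
    return out
```

The same code dithering to the *source* depth (24-bit) would leave the subsequent truncation to 16 bits undithered, which is exactly the mistake the advice above guards against.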
Can you please tell me how to set up the Korg Kaoss Pad KP2 as an ordinary pitch-bender?
SOS Forum Post
Reviews Editor Mike Senior replies: Hold down the Tap/BPM and Rec/Stop buttons at the same time, and after a second or so you'll enter the MIDI editing mode — various buttons will light up and the MIDI channel will be shown in the display. You can at this point change the MIDI channel as necessary using the Program/BPM dial. If you're only wanting to transmit MIDI pitch-bend messages from the Y axis of the pad, then make sure that only the Program Memory 5 button is lit. If you want something transmitted from the X axis as well, then the Program Memory 4 button should also be lit. Pressing any button briefly will toggle its lit/unlit status.
Now to get Y axis movements sending MIDI Pitch-bend messages. Still in MIDI Edit mode, hold down Program Memory 5 until the currently assigned Y-axis controller (by default MIDI Continuous Controller number 13) is displayed in place of the MIDI channel number. Use the Program/BPM dial to bring up 'Pb' on the display. If you're also wanting to set the X axis, press and hold Program Memory 4 until its controller number (by default MIDI Continuous Controller number 12) is shown, and adjust as necessary. Finally, to exit MIDI Edit mode, hold Rec/Stop until you're back in the normal operating state.
A quick bit of general advice too — the unit will automatically leave MIDI Edit mode if you leave it alone for more than 10 seconds, so don't hang around too long when making settings, or you'll be dumped back into the normal operational mode. I find that it's worth toggling a random one of the Program Memory keys on and off occasionally, as the activity keeps the unit in MIDI Edit mode and gives me time to think and consult the manual!
There is another thing to think about when setting the KP2 to transmit pitch-bend information: a normal pitch-bend wheel is sprung so that it resets the pitch whenever you let go of it. Unfortunately, the Hold button doesn't affect MIDI transmission in the same way as it does the response of the internal effects, so the degree of pitch-bend will always stay where it is when you remove your finger from the pad. (Apparently the KP1 doesn't suffer from this problem.) This isn't necessarily a problem, however, because you can effectively do a rough-and-ready pitch-bend 'spring-back' manually, especially if you're able to use both hands: one to pitch-bend and the other to tap the centre of the pad, resetting pitch-bend to zero. If you only have one hand free, you could keep one finger in the centre of the pad while pitch-bending with other fingers. However, the finger that you leave in the centre of the pad will decrease the range over which the rest of the pad operates, so you won't get the same maximum bend range.
If you really need to be able to zero the pitch-bend exactly without sacrificing pitch-bend range, I'd suggest putting a controller button in-line to do this (I'd use one of the ones on my Peavey PC1600X for this) and setting it to generate a 'centred' pitch-bend message. But, to be honest, if you're using the KP2 for subtle pitch changes, it should be adequately accurate to zero the pitch-bend manually. If you're doing mad sweeps the whole time, then it may not even matter if you're not able to zero it perfectly.
However, if you simply have to have mad pitch sweeps along with perfect pitch-bend zeroing, then consider restricting yourself to pitch-bends in only one direction, with the zero point at the top or bottom edge of the pad, so that you can accurately reset the controller manually (finding the middle of the pad accurately is tricky, but finding the edge is easy). To do this for upwards-only bending, set your synth to play an octave higher than you want it (assuming that the bend range will be an octave). This will give you two octaves' shift above whatever note you're playing, with the low edge of the pad representing the former zero-bend position. Reverse the idea for downwards-only shift. If you really want to shift both ways, then you could assign a normal MIDI Continuous Controller (CC) message to the other axis and then use that to control the other pitch-bend direction, assuming that the synth you're triggering allows ordinary controllers also to modulate the pitch — my Korg Prophecy does. You won't get the same controller resolution out of a MIDI CC, so large shifts may sound stepped, but this will at least give you both directions of bend from the pad, and with exact pitch-bend reset.
Having said all of this, there is one other workaround to this problem, which provides all the functionality of a 'sprung' pitch-bend wheel, but it requires that you use a synth with fairly flexible modulation routing. Two of the KP2's transmission types do actually exhibit a 'sprung' action: Modulation Depth One (Y=5-1) and Modulation Depth Two (Y=5-9), activated in MIDI edit mode by Program Memory buttons one and two respectively. Both of these will automatically send their minimum values when you let go of the pad, as if you had moved your finger to the centre of the pad. If you switch both of these types of transmission on in the MIDI edit mode, then the top half of the Y axis will transmit MIDI Continuous Controller number one, and the bottom half will transmit MIDI Continuous Controller number two. The problem is that you can't change the controller assignments for this transmission type, so you'll need to assign the two controllers to upwards and downwards pitch modulation respectively to make it all work. The same caveat concerning controller resolution applies as before, but you do get a true pitch-bend wheel-style action. If your synth won't allow this modulation routing, you may be able to use your sequencer or MIDI controller to convert the MIDI CC messages to MIDI Pitch-bend or Aftertouch messages to achieve the same result.
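The sequencer-side conversion suggested at the end could look something like the sketch below: the KP2's fixed CC1 (upper half of the Y axis) and CC2 (lower half) messages remapped to 14-bit MIDI pitch-bend values (0 to 16383, centre 8192). The function name and scaling are illustrative assumptions, not part of the KP2's own behaviour:

```python
def cc_to_pitch_bend(cc_number, cc_value):
    """Map the KP2's mod-depth CCs to a 14-bit pitch-bend value.

    CC1 (top half of the pad) bends upwards, CC2 (bottom half) bends
    downwards; a value of 0 — what the pad sends on release — means no
    bend, giving the 'sprung' wheel action described in the text.
    """
    if cc_number == 1:                        # bend up from centre
        return 8192 + round(cc_value / 127 * 8191)
    if cc_number == 2:                        # bend down from centre
        return 8192 - round(cc_value / 127 * 8192)
    raise ValueError("expected CC 1 or 2 from the KP2's mod-depth mode")
```

Note the stepping caveat from the text: a 7-bit CC only ever produces 128 of the 16384 possible bend values, so large sweeps may sound granular.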
The dome of the tweeter on one of my M Audio Studiophile BX5 monitors has somehow got pushed inwards. Does this cause permanent damage? If it's advisable, how should I go about pulling it back out? Someone told me to use some sticky tape, but I'm concerned that this could do further damage.
Technical Editor Hugh Robjohns replies: I'm afraid to say that if a soft-dome tweeter has been pushed in, it will already be creased, and that will result in increased distortion compared with an original unit. This structural damage cannot be repaired, although you can improve the looks of the thing by drawing the tweeter dome back out.
Whether you can hear this damage or not depends on several factors, not the least being the quality of the original tweeter. For some, repairing the looks of the tweeter will be sufficient, but the only real way to get the original performance back is to replace the tweeter, and ideally the one in the other speaker too, so that they both will be the same age and have the same specifications. It's best to approach the speaker manufacturer directly for replacements, quoting the serial numbers of your speakers, so that they can check their production records and supply new tweeters of the correct sensitivity. This is often something that varies from batch to batch and so the speaker crossovers have to be tweaked slightly during manufacture to maintain correct overall performance.
If you can't replace the tweeter, then you might be able to restore its shape and some of its original performance. Bear in mind, though, that the coating on the tweeter dome is delicate, and trying to pull it out by sticking tape or Blu-Tack to it is a daft idea in my opinion. Such techniques will either pull off parts of the coating (making the situation even worse) or leave residues behind (ditto).
A better way of restoring the shape of the dome is to use a vacuum cleaner, because it is a non-contact approach. This is what we did to provide a temporary fix for a damaged tweeter in a Studio SOS visit in the October 2002 issue [www.soundonsound.com/sos/Oct02/articles/studiosos1002.asp]. By carefully reducing the air pressure in front of the tweeter using the hose from the vacuum, the pressure of air behind the tweeter forces its shape back out in a fairly gentle way. Obviously, it involves a strong and steady hand to avoid the vacuum cleaner nozzle coming into contact with the tweeter in any way, but, provided that you are careful, this technique can be effective.
I'm having trouble getting my Roland JV1080 to work well with Emagic Logic Audio. I'm using a MOTU Fastlane MIDI interface with a Roland A30 master keyboard, a Yamaha RM1X module and the JV1080. My main problem is that when I change MIDI channel in Logic the sound disappears, or I can't get the sounds that I want to stay put. I am currently recording the JV's audio output into Logic and building it up from there, instead of making use of MIDI. How can I get Logic and the JV to synchronise?
SOS Forum Post
Editor In Chief Paul White replies: I use a Roland JV2080 with Logic, and it's very similar in operation to the JV1080. The first thing to do is create a sequencing Performance in the JV with one Part per MIDI channel. Also ensure that the JV isn't set up so that sending a MIDI Program Change message calls up a new Performance — check your JV manual for details of this. It doesn't matter what sounds you allocate in this Performance, as you will be able to change these from within Logic. Always use this Performance when sequencing.
Next, in Logic, create an Environment Multi Instrument object for the JV, filling in all the port connection details and switching on all 16 MIDI channels. In the patch window, which is opened by clicking on a square on the Multi Instrument, select the correct MIDI Bank Change standard for the JV from the upper right hand menu. You should now be good to go, though you may wish to type or paste in your patch names. Logic's support files include Multi Instruments pre-programmed with patch lists for most of the JV-series banks and cards, and you can use the Copy/Paste All Names function in the patch list window to copy these to your new default Song. Now you can call up all patches by name in the Arrange window.
JV synth users will want to check out our Roland XV & JV Power User Tips series, which continues this month on page 170.
I have noticed that different mixing consoles and multitrackers have different kinds of faders — long- and short-throw, motorised, touch-sensitive, conductive plastic, and so on. Clearly, not all faders are created equal, but what are the essential differences?
Technical Editor Hugh Robjohns replies: On the first sound mixing consoles, up until around the 1950s, faders were actually large rotary knobs, because that was all that the engineering of the day could manage. In fact, rotary controls are very ergonomic to use — a simple twist of the wrist provides very precise and repeatable settings of gain — but you can only operate two at once, because most people have only two hands. The level of the audio signal was changed by altering the electrical resistance through which it had to pass corresponding to the fader position, and this changing resistance was usually achieved by using a chain of carefully selected resistors mounted between studs which were contacted via a moving wiper terminal connected to the rotary control. This arrangement typically provided 0.75dB between stud positions, so that as the control was rotated the gain jumped in 0.75dB steps. This is just below the amount of abrupt level change that most people can detect.
The next stage was the quadrant fader popular through the 1960s and early 1970s. Superficially, this arrangement was much closer to the concept of a fader which we have today, except that the knob on top of the fader arm travels along a curved surface rather than a flat one. You can see two pairs of four quadrant faders in the central section of the EMI REDD 17 desk pictured on the next page. The advantage of this new approach was that the mechanism was quite slim, so that these quadrant faders could be mounted side by side with a fader knob more or less under each finger of each hand. This allowed the operator to maintain instant control of a lot more sources at once. Again, a travelling wiper traversed separate stud contacts with resistors wired between them to create the required changing resistance.
The more familiar slider-type fader we all take for granted today was developed in the 1970s, with the control knob running on parallel rails to provide a true, flat fader. By this time the stud terminal had been replaced in professional circles by a conductive plastic track, which provided far better consistency and a longer life than the cheaper and simpler carbon-deposit track used in cheaper rotary controls and faders. However, both of these mechanisms provided a gradual and continuous change of resistance, rather than the step increments of the stud-type faders.
The mechanism of a slider fader is relatively complex, and economies can be made by using shorter track lengths, hence a lot of budget equipment tends to employ 'short-throw' faders of 60mm or so, rather than the professional standard length of 104mm. Obviously, the longer the fader travel, the greater the precision with which it can be adjusted.
With the introduction of multitrack recording, mixing became increasingly complex and mix automation systems started to emerge in the late 1970s and 80s. Initially, these employed voltage-controlled amplifiers to govern the signal levels of each channel, rather than passing the audio through a fader's track — the fader simply generated the initial control voltage. However, the performance of early VCAs wasn't very good, and motors were eventually added to the faders so that the channel levels could once again be controlled directly by the fader track. Besides the benefits in audio quality, this approach also enabled the engineer to see what the mix automation was doing on the desk itself, rather than just on a computer screen. Conductive knobs were also introduced so that the fader motor control system would know when a fader was being manipulated by hand, and so drop the appropriate channels into automation-write mode while simultaneously disabling the motor drive control so that the fader motors wouldn't 'fight' the manual operation.
When digital mixing consoles were developed, the audio manipulation was performed in a DSP somewhere, so audio no longer passed through the faders. Some systems use essentially analogue faders to generate control voltages — much like the early VCA automation systems — but the control voltages are then translated into a digital number corresponding to the fader position with a simple A-D converter. This fader position number is used as the multiplying factor to control the gain multiplications going on inside the DSP. Some more sophisticated systems employ 'digital faders', many of them using contact-less opto-electronics. A special 'barcode' is etched into the wall of the fader, and an optical reader is fixed below the fader knob so that as the fader is moved, the reader scans the barcode to generate a digital number corresponding to its position, which, in turn, controls the DSP.
Being digital, the faders output a data word, and the length of this word (the number of bits it is composed of) determines the resolution with which the fader's physical position can be stated. Essentially, the longer the data word, the greater the number of steps into which the length of the fader's travel can be divided. More subdivisions, in turn, mean more precision in the digital interpretation of the movement of the fader knob. Audio faders are typically engineered with eight-bit resolution, providing 256 levels, but some offer 10-bit resolution, which translates as 1024 different levels. In crude terms, as an audio fader needs to cover a practical range of, say, 100dB, then an eight-bit fader will provide an audio resolution of roughly 0.4dB per increment. In other words, the smallest change of level that can be obtained by moving the fader a tiny amount would be about 0.4dB. A 10-bit fader would give 0.1dB resolution per increment, but these are both well below the typical level change that people can hear. In practice, there is also a degree of interpolation and smoothing performed by the DSP, so the actual level adjustment tends to be even smoother, and 'stepping' is rarely, if ever, audible in modern, well-designed systems.
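The resolution arithmetic above is simple enough to work through directly (the 100dB figure is the example range from the text, not a fixed standard):

```python
def fader_resolution_db(bits, range_db=100.0):
    """dB per increment for a fader data word of the given bit length."""
    return range_db / (2 ** bits)

# 8-bit:  100 / 256  ~= 0.39dB per increment
# 10-bit: 100 / 1024 ~= 0.10dB per increment
```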
One other thing worth mentioning at this point is that the fader's resolution — whether it's a digital or analogue fader — changes with fader position. The fader law is logarithmic so that a small physical change of position while around the unity gain mark on the fader (about 75 percent of the way to the top, usually) changes the signal level by a fraction of a dB, whereas the same physical movement towards the bottom of the fader might change the signal level by several dBs. This is why it is important to mix with the faders close to the unity gain mark, since that is where the best resolution and control are to be found.
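The position-dependent resolution of a log-law fader can be illustrated with a small sketch. The breakpoint values below are chosen purely for illustration — real fader laws vary by manufacturer — but they put unity gain at about 75 percent of travel, as the text describes:

```python
# (position, dB) breakpoints for an illustrative audio-taper fader law.
FADER_LAW = [(0.00, -100.0), (0.25, -60.0), (0.50, -30.0),
             (0.75, 0.0), (1.00, 10.0)]

def fader_db(position):
    """Linearly interpolate the dB gain for a fader position in [0, 1]."""
    for (p0, g0), (p1, g1) in zip(FADER_LAW, FADER_LAW[1:]):
        if p0 <= position <= p1:
            return g0 + (g1 - g0) * (position - p0) / (p1 - p0)
    raise ValueError("position must lie in [0, 1]")
```

With this law, a 5 percent movement near unity changes the level by a couple of dB, while the same movement at the bottom of the travel changes it by several times as much — which is why mixing close to the unity mark gives the finest control.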
Going back to the idea of the touch-sensitive fader, which was first developed for fader automation systems, this has also become popular in digital consoles which use assignable controls. By touching a fader, the assignable controls can be allocated to the corresponding channel, obviating the need to press a channel select button and, in theory at least, making the desk more intuitive and quicker to operate. However, if you are in the habit of keeping a hand on one fader while trying to adjust another, this touch-sensitive approach can be a lot more trouble than it is worth. Fortunately, most consoles allow the touch-sensitive fader function to be disabled in the console configuration parameters.