Q & A

Your technical questions and queries answered

Q

How do I connect a SCSI drive to my VS multitracker?

I have a Roland VS880 V-Xpanded (Version 2.06) and I'm trying to hook up a Syjet 1.5GB drive to back up data. Despite a number of tries I haven't been able to get it to work. Can you tell me what I might be doing wrong?

Reviews Editor Mike Senior replies: It's impossible to diagnose the exact nature of your problem without further information, but here's a list of possible causes to investigate. Firstly, are you sure that the Syjet actually works properly? Syquest drives have a bit of a reputation for being unreliable, so it might just be that the drive is faulty. Try hooking it up to a computer to check that it's still working. Secondly, how have you connected up the drive? Check that you're actually using a proper SCSI cable, as there are other cable types with 25-pin 'D'-Sub connectors which aren't suitable for interfacing SCSI devices.

Next, what are the SCSI IDs of the VS and the Syjet? You may have a conflict. You'll probably be able to set the Syjet's SCSI ID via a switch on the drive casing, while the VS880's SCSI ID can be set from the Disk PRM submenu under the System menu; the option you need is called SCSI Self. The important thing is to make sure the Syjet and the VS are set to different SCSI ID numbers. If there are any other devices on the SCSI chain, try removing them while you sort out the Syjet.

If there are two SCSI sockets on the Syjet, make sure termination is switched on. If there is no termination switch, buy a SCSI terminator (from a computer supplies shop) to suit the SCSI plug type the Syjet uses and fit it to the spare port. You may need to experiment with which way round you use the two Syjet ports as well. Finally, you may need to format the drive specifically for use by the VS880, as I'm not sure that drives formatted in normal computer formats are acceptable.

Q

What's the difference between Humanise and Groove Quantise?

My sequencer offers both Humanise and Groove Quantise functions. Are they different things? Do I need to use both or just one?

Eddie Barton

SOS contributor Len Sasso replies: Generally you would use one or the other as they are somewhat at cross purposes. Humanising usually refers to making slight, random adjustments to the timing of notes. Often there are options to also randomise Velocity and note length. The idea is that you want the adjustments to be barely noticeable — just enough to remove the rigid quantised feel of step-entered sequences.

If you have sequencing software that offers both step-entry of notes and humanising, try step entering simple drum and bass parts. Listen to the results then try humanising one of the parts, and finally, try humanising both. See which you prefer — rigidly quantised parts fit some musical forms better while humanised parts fit others.
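
If you're curious what such a function does under the hood, here's a minimal sketch in Python. The note format (a list of dicts with 'start' and 'velocity' fields) is purely my own assumption for illustration; real sequencers hide all of this behind a dialogue box.

import random

def humanise(notes, timing_jitter=10, velocity_jitter=8):
    # 'notes' is assumed to be a list of dicts with 'start' (in ticks) and
    # 'velocity' keys; both jitter amounts are deliberately small so the
    # result is barely audible.
    out = []
    for note in notes:
        new = dict(note)
        new['start'] = max(0, note['start'] + random.randint(-timing_jitter, timing_jitter))
        new['velocity'] = min(127, max(1, note['velocity'] + random.randint(-velocity_jitter, velocity_jitter)))
        out.append(new)
    return out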

Groove quantising, on the other hand, is the process of matching the timing of one sequence to that of another. The idea here is that you pick a part (typically bass or basic drums) that has the feel or 'groove' you want, then align other parts to the same groove. The groove might be played in or might come from commercial groove templates, such as the DNA groove templates available from Numerical Sound (www.numericalsound.com).

Groove quantising is just another form of quantising, in which the quantise 'grid' is supplied by a human player rather than being composed of rigid note divisions. As such, it is already humanised and randomising it would tend to destroy the effect. But most sequencing software will allow you to control the degree of quantisation (rather than snapping everything exactly to the quantise grid) as well as to quantise Velocity and note length. That provides a further degree of humanisation in that all parts are not exactly the same. If your software doesn't offer those features, humanising after groove quantising might be useful.
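
Following the same assumed note format as the earlier sketch, a bare-bones groove quantise with a 'strength' control might look something like this (again, only an illustrative sketch, not any particular sequencer's algorithm):

def groove_quantise(notes, groove, strength=0.75):
    # 'groove' is a list of grid times in ticks, extracted from a played part
    # or a slice map; strength=1.0 snaps exactly to the groove, while smaller
    # values move each note only part of the way and keep some original feel.
    out = []
    for note in notes:
        target = min(groove, key=lambda g: abs(g - note['start']))
        new = dict(note)
        new['start'] = int(round(note['start'] + strength * (target - note['start'])))
        out.append(new)
    return out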

If you have sequencing software that offers groove quantising and supports REX files or some other beat-slicing format, you have a ready source of grooves to try. Simply take the MIDI files that reflect the timing of the slices and use those for your groove-quantise grids.

Q

How can I get rid of a standing wave?

I want to sort out the standing wave in my control room, but I'm not quite sure what to do. The room is four metres square, with walls sloping away from each other on two opposite sides. My setup is along one of these walls, just off centre, with the monitors sitting about five feet apart. I've isolated my speakers as much as possible, so they are away from my PC monitors and standing on little metal feet. I do need to tip them forward slightly so that I'm in the best monitoring position, but even when I move into that position there's little difference. The wave's fundamental is at 44Hz, but where I sit to mix I hear it most at 88Hz. Frequencies above this aren't really affected. Besides double tacking the room, which isn't really feasible at the moment, what do you suggest I do to solve this problem?

SOS Forum Post

Editor In Chief Paul White replies: As I understand your description, your room is roughly square-shaped, which is clearly bad news for standing waves, and I'd guess that the height of your ceiling isn't far off half the wall length. Sloping walls have very little effect on low-frequency problems but dense foam corner bass traps should help even out the bass response while three-inch foam panels either side of where you sit will cut down flutter echoes.
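
As a rough sanity check, the fundamental axial mode between a pair of parallel surfaces four metres apart works out at f = c / 2L, or roughly 344 / (2 x 4) ≈ 43Hz, which tallies with the 44Hz you mention; the 88Hz you hear at the mixing position is simply the next mode in the same series, and your seat may well sit near a null of the fundamental but close to a peak of the second mode.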

The other strategy for small rooms is to choose monitors that don't have too great a bass extension — monitors that roll off at around 60 to 70Hz will generally give better results than those that go right down to 40Hz. You may also find that moving the monitors by as little as six inches will affect the way the bass end behaves, so it's worth experimenting with this while playing back a sequence of equal-intensity bass lines. If you can, it would also help to put some foam on the front and back walls, but only use one to two square metres in total on each wall as too much foam will soak up all the high end and allow the low- and mid-frequencies to dominate.

Q

What's the difference between pan and balance?

Most of the audio recording and playback software I use has a pan control for stereo audio channels. Shouldn't that be called 'balance' and isn't it different from panning?

SOS Forum Post

SOS contributor Len Sasso replies: Yes, technically mono channels should have pan controls and stereo channels should have balance controls. But most audio software will play both mono and stereo audio files on any track and even allow you to mix the two on the same track. The software is smart enough to perform the proper function based on the data being played. Panning distributes a mono signal between the left and right output channels. Balance simultaneously alters the levels (in opposite directions) of the two channels of a stereo signal, but the left and right channels go exclusively to the left and right outputs, respectively.

To the extent that the two channels of a stereo signal share the same information, balance has a similar effect to panning — the shared information appears to shift from one channel to the other. To the extent that they have different information, balance acts to suppress the information on one side and enhance it on the other. As an unlikely example, if you have a guitar panned hard left, a singer in the middle, and a bass panned hard right in a stereo file, the balance control will pan the singer while controlling the levels of the guitar and bass.
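
If it helps to see the arithmetic, here's a minimal Python sketch of the two operations on individual samples. The equal-power pan law is just one common choice, assumed here purely for illustration.

import math

def pan(mono_sample, position):
    # position runs from -1 (hard left) to +1 (hard right);
    # an equal-power law is assumed, so the centre position is -3dB in each channel.
    angle = (position + 1) * math.pi / 4
    return mono_sample * math.cos(angle), mono_sample * math.sin(angle)

def balance(left_sample, right_sample, position):
    # Left stays in the left output and right stays in the right;
    # only their levels move, in opposite directions.
    left_gain = min(1.0, 1.0 - position)
    right_gain = min(1.0, 1.0 + position)
    return left_sample * left_gain, right_sample * right_gain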

Q

Can I record control voltages on my sampler?

If analogue modular synths utilise control voltages, and a digital sampler converts incoming voltage levels into digital data and then back again, could you record a control voltage signal into a sampler, then play it back from the sampler into a modular's CV input, and get the same result as the pure CV signal?

James Kennedy

Editor In Chief Paul White replies: In theory, yes, but the capacitor coupling used in audio equipment, including samplers, precludes the passing of DC voltages, so while modulation waveforms may work, slow envelopes and pitch information would almost certainly not. Most audio gear rolls off any frequencies below 20 or 30Hz, so even slow LFOs would suffer. Also, you'd need to calibrate the output gain control on your sampler very carefully so that you get back exactly the same level of control waveform as the one you put in.

Q

Which PC and monitors should I buy?

I am interested in buying a music PC and have read the SOS reviews of both the Carillon and Digital Village PCs. Although impressed by both, I'm still rather undecided, so could you please tell me which one would be more suitable for my needs? I have been writing my own rock, pop and dance music using the Yamaha AW4416, but would like the flexibility a music PC would bring to my home studio. As I live in Northern Ireland and only have limited computer experience, I'm also concerned about technical support if any problems should arise. I also want to buy a good monitoring system, such as the KRK V4s or the Mackie HR624s, on a budget of eight hundred pounds. Would these be easy to set up with my new computer?

Ciaran Doherty

Editor In Chief Paul White replies: Both PCs should do the job so I'd suggest buying whichever you feel is quietest, especially if you plan to record in the same room as your PC, as I expect you would with the AW4416. Read over the reviews again and see which one seems to best suit your needs. Providing you buy the computer with the software and soundcard of your choice already installed and configured, and you don't go adding any software other than music plug-ins, you shouldn't run into any technical difficulties that can't be fixed with a phone call to tech support. Having said that, it's always helpful to get to know a local PC user just in case.

As to monitors, the Mackie HR624s would work well with either system, as would the KRKs, but if you are planning to use any active monitors without a hardware mixer, a monitor control box such as the inexpensive Samson C Control would be useful to control the monitoring level, and would also provide you with a headphone feed for overdubbing. Personally, I'm a fan of the Mackie monitors, but don't let that put you off trying out the KRKs too, as these are also very good. Also, check out any Dynaudio models in your price range as well as some of the less costly active monitors from other manufacturers, such as the Alesis M1 MkII Actives.

Q

What's wrong with my AKG valve mic?

My AKG SolidTube has been playing up for a couple of years and shows the following symptoms. When I power up, I get no signal, or occasionally a really quiet, really distorted one. When I shout into it (loudly), it suddenly 'unblocks' itself and I get the full signal. From that point onwards, the mic seems to work fine, although during one project it didn't seem to have completely unblocked itself and I had to go back and shout down it again. I've just taken it to my local shop, and their engineer reckoned there was a dry joint on the valve base, which he re-soldered; on testing, he said the problem had gone. I've just tried it back at home and sadly the problem is still there. I realise it's difficult to comment without seeing the mic, but do you have any idea what the problem is? Once it gets going it works fine, and it seems odd that it should behave this way.

Simon Lees

Editor In Chief Paul White replies: Unless this is a hardware fault (in which case the mic needs to go back to Arbiter, the distributor, to be fixed), all I can think of is that you have condensation on the diaphragm. Capacitor mics can be very susceptible to picking up condensation, especially when they are in a cold or humid environment, so storing them in a warm dry place helps. Using a pop shield should also help keep the moisture in the singer's breath from reaching the capsule. To test the hypothesis that condensation is to blame, next time the fault occurs, place the mic somewhere warm for half an hour and see if the problem goes away.

Q

Can I use multiple ASIO drivers in Nuendo?

Is it possible to use multiple ASIO drivers at the same time in Nuendo?

SOS Forum Post

Reviews Editor Mark Wherry replies: In a word, no — not even the new architecture in Nuendo 2 supports this. However, some manufacturers' ASIO drivers do allow for multiple hardware devices, so a single ASIO driver can service two RME cards (of the same type), a variety of Creamware boards, several MOTU devices (of the same type), and so on. But you can't mix and match different devices from different manufacturers in a single ASIO driver.

Q

What compressor settings should I use for the spoken word?

When recording speech, the right compressor settings are just as important as good diction, posture and a neat and tidy haircut.

I need to set up a compressor for a spoken word recording. Can you offer any advice?

SOS Forum Post

Technical Editor Hugh Robjohns replies: You'll need to select a compression ratio that isn't too severe and set the threshold so that the compressor applies the appropriate amount of dynamic reduction to control the signal without sounding obvious. Squashing voices too heavily is extremely unpleasant and immediately noticeable.

Attack time is never too critical on voices, but release time is. If the release time is set too slow, a loud word will punch a hole in the following speech. Set it too fast and you'll hear pumping.

I'm not going to give any figures for these settings here. Instead, experiment and use your ears. You will quickly understand what all the controls do and how best to use them. If it sounds right, it is right! You might also want to look at an article I wrote a few years ago on recording speech, which is available at www.sound-on-sound.com/sos/1997_articles/jan97/spokenword.html.

Q

What's the best way to mike up a choir?

I need to reinforce the sound of a performance by a 40-voice children's choir, parts of which will be accompanied by electric guitar, drums and a funk/blues brass section (played by kids, hence incapable of playing softly). I've done similar things before, where I put a couple of basic dynamic mics in front of the choir, but I found I couldn't really get the choir loud enough without getting feedback.

Please could you recommend some fairly cheap mics that would do the job and tell me the best way to set them up. The choir stand in four rows of 10 across, with the brass usually in a line behind them, though I suppose it would make more sense to put the brass to one side to avoid picking them up with the mics.

SOS Forum Post

Technical Editor Hugh Robjohns replies: Rather than blowing your budget on mics that will only be used once or twice a year, why not hire some more appropriate kit for the job? That also means the money comes from a different budget than capital purchases! You've already mentioned the most important thing — you need to move the brass away from behind the kids. Send them over to the other side of the stage! Apart from anything else, there could be a noise safety issue with brass players blasting away into the delicate ears of the youth choir. By moving the brass to the side, their contributions will be coming into the side of the mics directed at the choir, where you'll encounter some rejection, rather than straight down their main axis.

Putting the kids on risers is also a good idea, as is making them look upwards, and the brass section downwards! You could, for example, set the choir in the middle of the stage, with the brass to one side and the drums and bass to the other. You can then use a bunch of cardioid mics close to and in the choir, let's say two or three mics across the front, and maybe another two in front of the third and fourth rows. Alternatively, you could achieve a similar coverage with just a couple of short rifle (or shotgun) mics, which have a far narrower pickup pattern, so could be mounted further back.

It might help if you try to think of mic polar patterns in the light of everyday three-dimensional objects. A cardioid mic's pickup pattern is a little like an apple, with the stalk pointing to the back of the mic. The more gain you wind into the mic amplifier, the bigger the apple gets. So imagine that you've put a single mic up in front of your choir, and that you've given it enough gain to make the apple grow until it encompasses all the singers. By the time the frontal area of the mic has covered the choir, the sides are extending a long way out too, hence possibly picking up a lot of spill from other sound sources in those directions. If instead you use two or three mics, and/or move the mic or mics closer to the choir, you can see (in your mind's eye, that is) that you won't need as much gain in each mic before their apples have collectively encompassed the choir. There will therefore be less of a problem with spill, although you will now have the problem of the same sounds arriving at several spaced microphones, potentially causing phasing problems. The way to avoid that is to make sure that the distance between mics is at least three times the distance between each mic and its source.
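
For example, if each of your choir mics ends up about half a metre from the nearest singers, that rule of thumb suggests keeping adjacent mics at least a metre and a half apart.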

Rifle mics have a polar pattern that looks a mess (see diagram, left), with lots of narrow side lobes that look a bit like a squashed spider! However, for practical purposes you can generally think of the pattern as being something more like a lightbulb. Although the frontal pickup is relatively narrow compared with a cardioid mic's more rounded pattern, it has much greater side rejection, so you can use it from further away without capturing so much spill. Because it's further back, the mic can also 'see' more of the choir. Two or three mics should be enough for this kind of job. At low frequencies, below about 500Hz, the rifle mic's response pattern becomes very similar to a hypercardioid, which is why I'd recommend filtering off the LF when using this kind of mic in a PA application.

Q

Why are the waveforms of the miked and direct signals different?

Ralf Lehmann's waveforms exhibit discrepancies in phase and polarity.

I've recorded some acoustic guitar using both a mic and the guitar's internal pickup simultaneously, but the waveforms of the mic and pickup tracks do not look similar at all (see screenshot, right). Can you offer any explanation for this?

There's obviously some kind of delay between what seem to be the same peaks in the two tracks (around 220 samples). Is this just because electrical signals travel through cables faster than sound travels through air? Do I have to align the tracks and match up the peaks?

Ralf Lehmann

Reviews Editor Mike Senior replies: Firstly, yes, sound does travel slower through air than along cables. If the mic is a fair way away from the DI'd sound source, discrepancies in timing can be introduced. When mixing DI and miked signals, some engineers would argue that you should match the timing of the two signals to get the best results, but I'd say that this is simply a question of taste. Looking at your screengrab, it looks like you also have a polarity reversal between the mic and DI signals. Again, some would argue that you should therefore flip one of them to match the other, but I'd say that you should just trust your ears — if you match up the timing first, then matching the polarity will matter more, I'd have thought.
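
To put your 220-sample figure in context, if I assume a 44.1kHz sample rate, 220 samples is roughly 5ms; sound travels at around 340 metres per second, so 5ms corresponds to about 1.7 metres, which is entirely plausible if the mic was placed a little way back from the guitar.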

The basic principle is that a phase difference (time delay) or polarity inversion (vertically flipped waveform) between two similar signals has the potential to cause peaks and troughs in the combined frequency spectrum. Whether you happen to like a specific set of frequency fluctuations is up to you. The ability to play with phase for effect is one of the most creative and under-rated aspects of 'real' recording, so do make some time to experiment here. If you're trying to minimise frequency-cancellation effects, then it would be best to match the phase and polarity as closely as you can. Remember, however, that your guitarist may be moving around in relation to the mic, so it probably won't be possible to match the phase of the two signals exactly, as the degree of phase difference won't be exactly the same throughout the track. That said, it should be good enough to avoid any undesirable bass ripples which are the most noticeable side-effects of phase mismatch.

Q

When should I use an effect as an insert, and when as a send?

Are there general rules about when it's better to use an effect as an insert and when it's better to use it as a send effect?

SOS Forum Post

SOS contributor Len Sasso replies: It's probably fair to say that any type of effect you can think of can be, and has been, used both ways. But there are good reasons to choose one method or the other in specific contexts. Effects such as reverb, when used to create an overall ambience — say that of a hall or shower stall — are typically placed as send effects, so that they can act on a mix of several audio channels. Effects such as EQ and compression, which are often used to enhance a specific track, are generally used as insert effects. On the other hand, it's not uncommon to insert a reverb effect to create an ambience for a specific instrument, or to place EQ or compression before or after a send effect.
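
If it helps to see the routing difference spelled out, here's a deliberately simplified Python sketch, with 'effect' standing in for any imaginary processor; it is only an illustration of the signal paths, not any particular mixer's implementation.

def insert_channel(dry, effect):
    # Insert: the effect sits in the channel path, so its output
    # replaces the dry signal entirely.
    return effect(dry)

def channel_with_send(dry, effect, send_level, return_level):
    # Send/return: a copy of the channel feeds the effect, and the
    # fully wet return is mixed back in alongside the untouched dry signal.
    wet = effect(dry * send_level)
    return dry + wet * return_level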

One thing to keep in mind is that send effects don't necessarily need to be applied to a mix of channels. You might dedicate a pre-fader send to a single channel for an effect whose level you want to automate separately from the unprocessed signal. Although you could accomplish something similar with the send-level or wet/dry-mix controls, because you're controlling the level of the effect at different points in the signal path, one method might be preferable to the other. For example, varying the input to a delay line produces a very different effect to varying its output.

Another case where you might want to use a send effect is to control the wet/dry mix for effects without mix controls. Typically EQ and dynamic effects don't have a mix control because you only want the processed signal. Should you want the special effect of a wet/dry mix, you can achieve it by placing them as send effects.

Finally, keep in mind that you're not limited to inserting effects into individual channels. For example, in finalising applications, compression, EQ, and limiting are used as insert effects on the master output. As with all signal processing, use what works.

Q

Can I get 3D spatial effects from a plug-in?

Is there a plug-in that reproduces the psychoacoustic spatial effect of the Roland RSS boxes, giving the impression of surround sound on two speakers and headphones?

SOS Forum Post

Reviews Editor Mike Senior replies: The first thing to get clear is that you can't have 3D spatial effects from a stereo system which work on both headphones and speakers. Without going into too much technical detail, getting 3D effects from speakers requires out-of-phase signals to be added to opposite speakers to cancel crosstalk, whereas headphones don't need that because they encounter negligible crosstalk. So you can either have your 3D effects accurately presented on speakers or on headphones, but not on both. Roland's earliest RSS units were designed for speakers only, but the output of later models such as the RSS10 (pictured) can be switched between speakers and headphones.

There are numerous software plug-ins available which produce 3D effects, such as Prosoniq's Ambisone (www.prosoniq.com), Steinberg's Free-D (www.steinberg.com), and Wave Arts' Wave Surround (www.wavearts.com) to name but three.

As an aside, I'm not convinced that the RSS techniques are awfully effective unless you're listening in an unfeasibly ideal nearfield monitoring environment, which hardly anyone is. On the other hand, the RSS Chorus effect in the Roland machines is still pretty cool — it's just as nice on headphones as on speakers, although different in each case.

Q

What's the difference between velocity and volume?

Could you explain the difference between MIDI Velocity and MIDI Volume? I'd like to have a Velocity fader to do Velocity automation, but none of my synths or software has that.

Chandra Murphy

SOS contributor Len Sasso replies: The reason you rarely see Velocity controls is that, unlike volume and other MIDI controller information, Velocity is attached to individual notes. Each MIDI note has a Velocity, which is meant to reflect how hard it was played. It's called Velocity because synthesizer manufacturers discovered early on that it was much easier to detect the velocity with which a key was descending than the pressure with which it was hit. Pressure sensitivity came along much later in the form of Aftertouch and Poly Pressure.

Inexpensive MIDI keyboards that are not Velocity sensitive do sometimes have a knob or slider for setting Velocity. In addition, most programmable hardware or software synths and samplers have Velocity scaling and offset settings for some or all of the parameters to which Velocity can be applied. If you have software that offers some degree of MIDI processing, you can probably set up your own faders to scale and offset incoming note Velocity before it is routed to be recorded or to play your synth. Scaling refers to multiplying each note's Velocity by the same amount — halving each note's Velocity, for example. Offsetting refers to adding or subtracting a fixed amount from each note's Velocity. Since scaling changes both the average Velocity and the Velocity range, it's common to use an offset after scaling to preserve the original average Velocity, or, in other words, the original level.
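
If your software lets you process incoming MIDI, a scale-and-offset stage is only a few lines. Here's a minimal Python sketch; the note format is my own assumption, purely for illustration.

def scale_and_offset_velocity(notes, scale=0.5, offset=32):
    # Multiply each note's Velocity by 'scale', then add 'offset', clamping
    # to the MIDI range of 1-127. With scale=0.5 and offset=32, a note at
    # Velocity 64 stays at 64, but the overall Velocity range is halved
    # around that average.
    out = []
    for note in notes:
        new = dict(note)
        velocity = int(round(note['velocity'] * scale + offset))
        new['velocity'] = max(1, min(127, velocity))
        out.append(new)
    return out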

MIDI Continuous Controller (or control change) messages, on the other hand, are intended to communicate the changing value of a fader (volume), knob (pan), or wheel (modulation). That usually means a separate stream of MIDI data for each control, although it's common to have single values inserted in sequences to represent the initial state of those controls. Most sequencing software programs have some form of automation editor to display, create and edit MIDI Controller data graphically. Those editors usually allow you to view and edit note Velocity as well as other MIDI messages, like Aftertouch, Poly Pressure, and Pitchbend.

Q

What's wrong with my power amp?

I've just bought a second-hand power amp and it has a few 'issues'. There's a slight buzz present, even when the outputs are turned right down. Turning the amp up does not make the buzz any louder. It's a 1200W amp so the level of music coming through it will easily drown the buzz out, but I suspect an earthing problem, and if the amp isn't safe to use then it definitely is a problem! There is a switch labelled 'earth link' on the back of the unit but it doesn't seem to have any effect. My other concern is that the amp sends a large amount of current to the speakers when I turn it on, even with the outputs turned right down — the speaker cones really jump. I find this rather worrying. Could you advise?

SOS Forum Post

Technical Editor Hugh Robjohns replies: The buzz you're hearing is not uncommon in this kind of amp. It could be caused by an earth loop somewhere in the rest of your system, or in the connection to the amp. However, I notice that you say the earth-lift switch on the amp apparently makes no difference, which suggests that the loop isn't created by the amp connection. Two other possibilities are that the power supplies in the amp are less than perfectly regulated, or that magnetic fields from the transformer(s) are affecting the input stages. A good musical/electrical repair shop should be able to help, but you're obviously going to have to get your wallet out!

You can easily check the basic safety of the amp by unplugging it, getting a test meter and checking the continuity of the mains earth from the earth pin on the mains plug (the big one in the middle, for those of us in the UK) through to the amplifier chassis metalwork. While you are at it, you can also check the earth-lift switch. When it's in its closed position, the screen connection of the input sockets should be linked through to the mains earth. With the switch in the open position there should be at least 100Ω between them. The fact that your speakers jump when you turn the amp on is quite normal for many older amp designs, and not a cause for concern.

Q

How can I restore my old half-inch tapes?

Paul White's Ampex reel-to-reel tapes, ready for baking in a chicken-egg incubator. Delicious!

I have some Ampex 456 half-inch tape reels that are around 15 years old. I was hoping to record all of them onto my PC. If only life was that easy! The tapes grind to a halt after around a minute or so when playing on my old Tascam 38. I'm pretty sure that this is because of oxide shedding — when the glue that binds the tape together loosens with age. I've heard that you can bake the tapes in a fan-assisted oven to temporarily fix the problem. Do you know of any professional audio companies that offer this service at a reasonable price? I'm not confident about doing it myself, not least because my oven isn't fan assisted!

Jayne Drake

Editor In Chief Paul White replies: There are plenty of companies offering audio format-transfer and restoration services, including some who advertise in the back of SOS. But if you want to do it yourself, don't put your tapes in a domestic oven, as this will be too hot. Ideally, you want to bake the tapes for two days at approximately 50 degrees Centigrade. They should then play well enough to transfer, although they will eventually revert to being sticky, so transfer them as soon as possible after baking. Don't try to play them again before they've been baked, as the oxide shedding may cause permanent audio dropout. I borrowed a chicken-egg incubator to bake some old tapes of mine (see below) and it worked fine.

Q

How do I set up a condenser mic for vocal recording, and which mic should I buy?

I'm thinking about buying a couple of large-diaphragm condenser mics to record some vocals into Sonar 2 XL. I'm going to be re-recording some songs originally recorded live and straight to Minidisc using SM58s. I'd like to improve on the vocal sound while keeping the warm, intimate feel of the original. How far from the mic should the vocalist be, and do I need to use compression? Also, could you help me decide between the Oktava MK219 and the Rode NT1A?

David Johnson

Editor In Chief Paul White replies: You can record as close as six inches to a capacitor mic providing you use a pop shield and take care over your gain settings. At this distance you should need little or no EQ, but compression will certainly help give you that intimate, up-front sound as well as keeping the level even. The amount of compression required depends on the voice, but I often choose a hard-knee rather than a soft-knee setting, with a ratio of between 3:1 and 8:1. Use a fast attack and a release of around 100ms, then adjust the threshold so the compressor just kicks in on the signal peaks and reduces the gain by no more than 6 to 8dB. This is just a starting point though, and, in the end, you must judge everything by ear, as every compressor and compressor plug-in works slightly differently. Also, take note of the environment you are recording in and make sure there's something acoustically absorbent behind the singer.
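
To put some numbers on that last point: with a 4:1 ratio, a peak arriving 8dB above the threshold emerges only 2dB above it, which is 6dB of gain reduction and right in the range suggested above.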

Either of the mics you suggest will produce fine results and the choice of which to go for really depends on which suits the specific voices best. I use a Rode NT1 in my own studio and really like it. The NT1A should have a more extended high end and be even quieter, but if you like that warm, dynamic mic sound without too much high end, an original NT1 might suit you better.