I thought I knew what a modular synth was from my days in the Moog lab at school, but now all kinds of software synths call themselves modular — what does it really mean?
SOS contributor Len Sasso replies: The term modular does have a great deal of buzz value these days, which is why many stand-alone and plug-in software synthesizer designers are jumping on the bandwagon. While it seems unreasonable to insist that a software modular be a direct emulation of a classic modular synth, there are two basic criteria for distinguishing synths which are in the spirit of the classic hardware modulars.
One criterion is the routing flexibility of the audio and control signal paths — the whole concept of distinguishing audio from control signals may seem unnecessary in software, but it's actually quite useful. Control signals can be sampled at a much lower rate — thereby using less CPU power — and still do their job.
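To make that concrete, here's a hypothetical sketch (not any particular synth's engine; the 48kHz audio rate and the 64:1 control ratio are invented round figures). The LFO is only evaluated at the low control rate, and each value is simply held across the intervening audio samples, cutting the number of expensive sin() calls by a factor of 64:

```python
import math

AUDIO_RATE = 48000
CONTROL_RATE = AUDIO_RATE // 64  # control signal updated 64x less often

def lfo_control_block(freq_hz, n_audio_samples):
    """Compute an LFO at the low control rate, then hold each value
    across a block of audio samples: far fewer sin() calls."""
    step = AUDIO_RATE // CONTROL_RATE  # 64 audio samples per control sample
    out = []
    for i in range(0, n_audio_samples, step):
        value = math.sin(2 * math.pi * freq_hz * i / AUDIO_RATE)
        out.extend([value] * min(step, n_audio_samples - i))
    return out

mod = lfo_control_block(2.0, 48000)  # a 2Hz LFO over one second of audio
```

The held 'staircase' output is inaudible for slow modulators like LFOs and envelopes, which is exactly why the audio/control distinction remains useful in software.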
The other criterion is that there be modules of roughly the same function as in the hardware modulars. If the modules are too atomised, as in applications like Reaktor, Infinity or Max, you have more of a graphic computer language for constructing synths than a modular synth. At the other end, if the modules are full-on synths as in products like Reason and Storm, you have a rack of synths, not one modular synth. In a modular synth, the modules have a specific function — oscillator, filter, envelope, LFO, and so on — and have their own control panel for setting them up. Furthermore, there has to be a sufficient variety of modules of each type to allow for interesting combinations.
What constitutes a sufficiently flexible routing scheme is a little more fluid. Certainly the use of virtual patch cords is not a requirement (though that can be fun). Many classic hardware modulars relied on a patching matrix rather than patch cords. It seems reasonable to require that in one way or another, most audio outputs can be routed to most audio inputs, that most audio outputs can be also used as modulation sources, and that most control outputs can be routed to most control inputs. In short, you shouldn't quickly run out of ways to reconfigure the system and have to rely mainly on different control settings to produce different sounds. Software and hybrid synths that seem to me to fit that bill include Arturia's new Modular System, Clavia's Nord Modular, Software Technology's VAZ and Virsyn's Tera.
I'm relatively new to using compressors and am left confused when I see people recommending the use of 'a slow release', for example. How slow is slow? While I realise this might seem a bit like asking how long a piece of string is, what value would you start at if somebody told you to try a slow release?
SOS Forum Post
Technical Editor Hugh Robjohns replies: Compressor attack times usually range from tens of microseconds up to a few milliseconds, while release (recovery) times range from tens of milliseconds up to several seconds.
The attack time determines how quickly or slowly the compressor reacts to a signal once it has crossed the threshold level, so a slow attack time means the signal's opening transient will exceed the threshold and be uncontrolled for a while before being reduced by the action of the compressor. This means you have little absolute control over peak levels, but allowing the transients through can add a dramatic punch effect to percussive sources. On the other hand, a very fast attack will help the compressor to react quickly and thus exert more peak level control. However, its rapid response can also distort the shape of the source transient, which may become audible as transient harmonic distortion, particularly with harmonically simple instruments such as pianos, flutes, and so on.
The release time determines how the compressor responds after the signal has fallen back below the threshold, such as when it gets quieter or stops. A fast release time tends to keep the material sounding loud, but can also make the changing level of underlying sounds very obvious — the classic 'breathing' or 'pumping' side-effect. In extreme cases, a fast release on a bass instrument will have the compressor tracking the envelope of the source frequency, with very unpleasant effects. A slow release time tends to sound very flat and lifeless, or just very very smooth, depending on the source. With percussive sounds, a slow release could mean that the first transient causes the compressor to put in loads of gain reduction, and it then takes so long to recover to unity that following transients are lost in the low-level output. This 'hole punching' can be a real problem if trying to compress drums and the like.
You need to adjust both the attack and release times to suit the dynamic behaviour of the source. Many devices have an auto-release option where the release time adjusts automatically according to the nature of the material itself, which is almost always the best option.
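As a rough illustration of what the attack and release controls actually do, here is a textbook one-pole envelope follower (a generic sketch, not any particular compressor's detector circuit; the time constants and rates are arbitrary example values):

```python
import math

def envelope_follower(samples, sample_rate, attack_s, release_s):
    """Track signal level with separate attack and release time constants.
    One-pole smoothing: a short attack follows rising levels quickly,
    while a long release lets the tracked level fall back slowly."""
    atk = math.exp(-1.0 / (attack_s * sample_rate))
    rel = math.exp(-1.0 / (release_s * sample_rate))
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        coeff = atk if level > env else rel  # rising vs. falling
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

# A burst followed by silence: a 1ms attack catches the transient,
# a 200ms release decays gradually afterwards.
burst = [1.0] * 100 + [0.0] * 1000
env = envelope_follower(burst, 48000, attack_s=0.001, release_s=0.2)
```

With a slow release, the envelope (and hence the gain reduction) is still decaying long after the burst has stopped, which is precisely the 'hole punching' risk with percussive material described above.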
I own a Behringer 32:8:2 desk and am currently using a Fostex M80 eight-track recorder. However, I'd like to upgrade to a 16-track digital machine and was wondering if I'd be able to wire up my existing desk to a 16-track machine, or whether I'd have to change the desk?
Editor In Chief Paul White replies: You should be able to use your existing desk with a 16-track machine in one of two ways. The first option is to split the buss outs from the mixer so they feed recorder tracks 1, 9 and 17, followed by 2, 10 and 18, and so on, so you can select which tracks to record on the recorder front panel using the Record Ready buttons. Some recorders have this linking option built in. You could also do a similar thing with a patchbay, but of course you can only record a maximum of eight tracks in one pass in this way. An alternative is to combine one of these two options with direct channel outputs from the mixer so that you can record up to 16 tracks at a time (if no dedicated direct outs are fitted, you can use the channel insert sends to route individual channels to the recorder), and, again, a patchbay makes this easier.
You could also buy some dedicated voice channels or mic preamps and record directly to the recorder using these in addition to the mixer's buss outs. It's also possible to use any surplus pre-fade aux sends to route a signal from one mixer channel (or a mixture of channels) to the recorder via the relevant pre-fade aux out socket. Keeping the channel faders down on these channels prevents the signal going anywhere else it shouldn't.
The mix can then be monitored via 16 mixer channels and routed to the stereo mix in the usual way. If you don't need to record more than eight tracks at a time (you can handle more if you buy more preamps or voice channels), you could even accommodate 24-track recording and mixing using this desk, and given some of the bargain hard disk 24-track machines around at the moment, that might be a good option to consider.
In a previous article SOS published about stereo microphone techniques, it was written that "...it is essential to calibrate the microphones and their channels at the desk before attempting to record anything in stereo. Even nominally identical microphones will have slightly differing sensitivities, and the input channels in the desk could be set up completely differently — so it's important to run through a line-up procedure..."
In this process, a desk with EQ bypass and phase-reverse switches is required, but I've not come across a compact design (even Mackie's 1604 VLZ) that has these features — are there any alternatives? I've thought about placing a phase-reverse switch before the desk's preamps, but I don't know if this will work. So is there a way to perform such a line-up with my desk or do I have to go for a more feature-packed one?
SOS Forum Post
Technical Editor Hugh Robjohns replies: The lack of EQ controls and a phase-reverse switch is a common problem. I often use a Mackie 1402VLZ myself, and although there's no EQ bypass facility, at least the gain controls have centre detents so you can be certain that the controls are accurately centred.
I often use in-line XLR-XLR barrels which are wired internally with crossed connections to impose a polarity inversion. These can be inserted simply on any channels that require a phase reverse, are obvious when in circuit, and are a cheap and easy solution. They're available from all good audio suppliers, but can be made by hand just as easily if necessary.
As long as you make sure the EQ is centred before starting the line-up on both channels, and you use the phase-inverting connector as described above, there should be no problem. This is how I work myself when aligning stereo mics using my little Mackie desk.
I played a bass sound on my JV2080 and noticed that the note at A1 sounded like it had a lot more 'punch' than the one programmed at E1. I can restore the 'punch' of the E1 bass note by using EQ and other manipulation in a wave editor, but what causes this phenomenon when extracting that E1 note directly from the synth?
SOS Forum Post
Reviews Editor Mike Senior replies: It does sound like the kind of problem you get from room modes or a badly set-up speaker system. You can check this by listening on headphones to see whether the problem is still there, or by trying another speaker system in a different room.
However, I recently had this problem both on speakers and headphones, and in the end I found that it was because of the interaction between the synth's overtones and its filter. If the filter is set to track each note exactly then you won't get a problem, but few patches are actually set up like this, because the changing timbre over the instrument's range is usually desirable. And, of course, high-resonance filter sweeps are now also the norm. The downside of having a filter which doesn't exactly track the notes is that if there is any resonance applied to the filter then the filter resonance peak will hit different oscillator harmonics depending on which note you're playing. My solution to this for bass notes is to remove the resonance or set the filter to track the notes exactly.
The downside of this method, though, is that it messes with the essential elements of your sound. An alternative method I use, which doesn't suffer from the same problem, is to use separate bass and sub-bass sounds, both triggered from the MIDI part. For me, the aforementioned filter resonances seem to be most intrusive when they affect the lowest harmonics, so I filter those out of the main bass sound, and replace the missing 'weight' with a sine-wave or triangle-wave sub-bass patch — this, of course, will have had its filter tweaked for a smooth response as described above. This gives the best of both worlds: the timbral changes associated with filter tracking (or even high-resonance filter sweeps if you want), but without the low end getting out of control.
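The interaction between a fixed (non-tracking) resonance peak and the oscillator's harmonics can be illustrated numerically. The 220Hz peak below is an invented figure purely for the sake of the example:

```python
# Hypothetical sketch: for a fixed filter resonance peak, see how close
# the nearest oscillator harmonic falls for two different notes.
def note_freq(midi_note):
    """Equal-tempered frequency for a MIDI note number (A4 = 69 = 440Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def nearest_harmonic_offset(fundamental, peak_hz):
    """Distance in Hz from the resonance peak to the nearest harmonic."""
    n = max(1, round(peak_hz / fundamental))
    return abs(peak_hz - n * fundamental)

e1 = note_freq(28)   # E1, roughly 41.2Hz
a1 = note_freq(33)   # A1, exactly 55Hz
peak = 220.0         # an illustrative fixed resonance peak
```

Here the peak lands exactly on A1's fourth harmonic but falls between E1's fifth and sixth, so A1 gets a boosted harmonic ('punch') while E1 doesn't; a different peak frequency would favour different notes.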
About six months ago, some users of the Roland VS2480 discovered that their unit's inputs suffered from significant harmonic distortion for signals above -6dBFS, well outside of Roland's published specs, resulting in audible degradation of sound quality. There have been many discussions of this on the VS Planet web site, and one sharp user discovered that the distortion could be significantly reduced by temporarily increasing the preamp gain to push the input signal well into distortion — a trick nicknamed the 'Falcon Eddy Crank'. Once the gain was backed down to a suitable recording level, the harmonic distortion was reduced, although it creeps back up after a few hours. However, some who've measured the harmonic distortion after the 'fix' claim that harmonic distortion levels still remain well outside of published specs.
Roland US acknowledged the problem, and one of the technical support personnel responded on the Roland US web site with this statement: "We have verified that as you increase the level on the analogue preamps there is an unusual amount of harmonic distortion introduced when you reach the -6dB to -4dB range. Clipping the input and then backing the level down to the desired setting fixes this. So, as a matter of practice, it is recommended that you raise your input levels past the point of clipping (0dB) and then adjust them down to a proper recording level (-12dB to -4dB) from there. If you record in this manner, you should have no problems getting a good, clean recording. The engineers are researching possible solutions for a more permanent fix, but we don't have any more information on that at this time."
I think this deserves a bit of investigation, and is information that potential buyers of the VS2480 should be made aware of.
A workaround I've found for this distortion problem is to have all the 'Att' (attenuation) controls on your input channels set to +6dB. If you then adjust the preamp gain for the appropriate meter reading, you avoid the distortion. Interestingly, if you set the Att controls to, say, -6dB, you see the distortion start to creep in at -12dBFS. So is it the input stage that's the problem, or the relationship between the digital attenuators and the input stage? I suspect it is the latter, and if it is then the converters would be operating at full spec. The Falcon Eddy Crank doesn't fix the problem because the distortion creeps back in, and driving the inputs that hard before each recording can't be good in the long run. At any rate, if you create a setup patch with Att set to +6dB for each input channel as your starting point for each project then you can get to work without worrying about distortion at all.
Martyn Hopkins, Roland UK MI Marketing Manager replies: For the last few months we have been following the discussions on the SOS Forum and the VS Planet web site, and have through these sources become aware of a problem with harmonic distortion on the inputs of the VS2480. For any UK-based VS2480 user experiencing harmonic distortion problems, we are offering to update their VS2480 free of charge until June 1st 2003. There is currently a two-week turnaround for this procedure, and any customers should contact Roland UK's service department prior to arranging any shipment. The Roland service department is open from 1pm until 5pm on Monday to Thursday, and 1pm until 4.30pm on Friday, and can be contacted on +44 (0)1792 702701. The large majority of VS2480 users have not experienced any problems regarding this, and can continue to use their VS2480 without modification.
I've read several times that a DI box for guitar or bass is pretty much useless if you're only running five to 10 feet between the guitar and mixer, and was wondering if most people really hear a difference in quality with a fairly good active DI box, or is it a myth?
Also, if given the choice between a stand-alone DI box and a channel strip, which would you choose? This is strictly for electric guitar and bass recording, with further processing to be applied via IK Multimedia's Amplitube.
SOS Forum Post
Technical Editor Hugh Robjohns replies: The first point is certainly a myth, since the pickups used in electric guitars and basses will only perform as intended if driving a very high impedance. The line inputs of most sound desks and soundcards rarely offer an input impedance greater than 50kΩ, and often more like 10kΩ. They're also typically far too insensitive for the low output level of a guitar — mic inputs, while obviously a lot more sensitive, will offer input impedances of under 3kΩ. In contrast, an active DI box (or a dedicated DI input on a desk) will provide ideal sensitivity and an input impedance of 1MΩ or more.
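As a rough, purely resistive illustration of why the loading matters (the 10kΩ source figure is an assumed ballpark; real pickups are inductive, so a low-impedance load also dulls the top end rather than just lowering the level):

```python
# Voltage-divider model: a pickup behaves roughly like a source with a
# high output impedance, and a low input impedance loads it down.
def level_retained(source_z, input_z):
    """Fraction of the pickup's voltage that survives the load."""
    return input_z / (source_z + input_z)

PICKUP_Z = 10_000  # ohms: an assumed round figure for a passive pickup

into_line_in = level_retained(PICKUP_Z, 10_000)     # half the signal lost
into_di_box = level_retained(PICKUP_Z, 1_000_000)   # essentially lossless
```

The 1MΩ DI input leaves the pickup practically unloaded, which is the whole point of the box even over a five-foot cable run.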
With your second question, if you are happy simply to pass the guitar signal into the computer and process there, a good active DI box is all you need, assuming you have the means to accommodate the relatively low output level from the DI box. On the other hand, if you want to be able to process the signal with EQ and dynamics before it goes into the computer, a full preamp with a dedicated DI input may suit your needs better. The latter can often be fitted with optional A-D converters too, which are often better than those fitted to soundcards.
My bandmate, who has more audio experience than I, says that you should always equalise first then compress after. His arguments are that the EQs in mixer channel strips come before external compressors and that multi-purpose plug-ins like Waves' Audiotrack always put the EQ first. I think it usually sounds better to compress first and it makes more sense to me to even out the signal dynamically before boosting or cutting individual frequency bands. Who's right?
SOS Forum Post
SOS contributor Len Sasso replies: You're both right. It's more common to EQ first, which is why multi-purpose tools are usually constructed that way. But it's not unusual to do it the other way around, and a good engineer will make the choice to suit the material.
Compressing first generally allows you to apply more EQ without risk of clipping, thus permitting more extreme spectral control. On the other hand, if you EQ first, you often need less compression and wind up with a smoother, more even tone. Since it's not difficult to try both, what can you lose? A third, more flexible choice is to use a multi-band compressor. These apply separate compression to individual EQ bands and avoid the pumping often associated with wide-band compression.
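For the curious, here is a deliberately crude two-band sketch of the multi-band idea. It uses a static whole-buffer gain computer rather than a real time-varying compressor, just to show how gain reduction in one band leaves the other untouched (all filter and threshold figures are invented for the example):

```python
import math

def one_pole_lowpass(samples, sample_rate, cutoff_hz):
    """Simple one-pole lowpass; the high band is the input minus this."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for x in samples:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

def two_band_compress(samples, sample_rate, cutoff_hz, threshold, ratio):
    """Split into low/high bands, apply a static gain computer to each
    band's peak level independently, then sum the bands back together."""
    low = one_pole_lowpass(samples, sample_rate, cutoff_hz)
    high = [x - l for x, l in zip(samples, low)]

    def band_gain(band):
        peak = max(abs(v) for v in band)
        if peak <= threshold:
            return 1.0  # band below threshold: leave it untouched
        return (threshold + (peak - threshold) / ratio) / peak

    gl, gh = band_gain(low), band_gain(high)
    return [l * gl + h * gh for l, h in zip(low, high)]

# A loud 100Hz tone: only the low band exceeds the threshold,
# so only the low band is turned down.
tone = [math.sin(2 * math.pi * 100 * n / 48000) for n in range(4800)]
out = two_band_compress(tone, 48000, cutoff_hz=1000, threshold=0.5, ratio=4.0)
```

In a wide-band compressor the loud low frequencies would have pulled everything down with them; here the high band passes through at unity gain.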
What is the advantage of recording with a higher resolution, as opposed to a higher sampling rate?
I understand the advantage of having a higher sampling rate because it allows higher frequencies to be reproduced, but I don't really understand what the advantage is of having more bits. Why is 24 or 32-bit recording better than 16-bit? If I have to limit one or the other, which is more important?
SOS contributor Len Sasso replies: As you say, a higher sampling rate allows higher frequencies to be reproduced before unwanted aliasing occurs. The highest frequency that can be recorded at a given sampling rate (called the Nyquist frequency, after Harry Nyquist, who first analysed the problem) is half the sampling rate. For example, if you're recording at 48kHz, you need to limit (meaning filter out) input frequencies above 24kHz. Any frequency component over that will be folded back to yield inharmonic components at 48kHz minus the component frequency.
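That folding arithmetic can be expressed directly. Here is a small illustrative helper, assuming an ideal sampler with no anti-alias filter in front of it:

```python
def alias_frequency(f, sample_rate):
    """Apparent frequency of an input component f after sampling: fold
    around multiples of the sample rate, then reflect about Nyquist."""
    f = f % sample_rate
    return sample_rate - f if f > sample_rate / 2 else f

# At 48kHz, a 30kHz component folds back to 48 - 30 = 18kHz,
# while anything below the 24kHz Nyquist frequency passes unchanged.
folded = alias_frequency(30000, 48000)
```

Note that the 18kHz alias bears no harmonic relationship to the original 30kHz component, which is why aliasing sounds so unmusical.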
The bit depth determines the dynamic range that can be accommodated, and as with sampling rate, more is better. If you envision the signal graphically as a continuous wave, both factors affect how accurately you can capture and represent the wave. The sampling rate represents quantising in the time domain, and the bit depth represents quantising in the amplitude domain. Imagine yourself trying to draw a sine wave on a piece of grid paper while limiting yourself to points where grid lines cross. The horizontal spacing represents the sampling rate and the vertical spacing the bit depth. You need fine spacing in both dimensions to get a good representation.
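The grid-paper analogy translates directly into code: quantising a sample onto a grid of 2^bits amplitude steps and watching the rounding error shrink as bits are added (a toy sketch of ideal quantisation, ignoring dither and converter non-linearities):

```python
import math

def quantise(x, bits):
    """Round a sample in [-1, 1] onto a grid of 2**bits amplitude steps."""
    levels = 2 ** (bits - 1)  # signed audio: half the steps each side of zero
    return round(x * levels) / levels

# The 'vertical grid spacing' halves with every extra bit,
# so the worst-case rounding error shrinks accordingly.
x = math.sin(1.0)
err_16 = abs(quantise(x, 16) - x)
err_24 = abs(quantise(x, 24) - x)
```

The 24-bit grid places the rounded value hundreds of times closer to the true waveform than the 16-bit grid does.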
Another way of looking at bit depth is in terms of dynamic range. The dynamic range of a digital system, the difference in decibels (dB) between the loudest and softest levels it can reproduce, is roughly six times the bit depth. Thus a 16-bit system has roughly a 96dB range while a 24-bit system has a 144dB range. That becomes especially important for DSP effects that can cause radical spikes in level. Incidentally, recording at low levels has the effect of reducing bit depth. You can restore the level by normalising, but nothing will restore the lost resolution.
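The six-dB-per-bit rule comes straight from the maths: each extra bit doubles the number of amplitude steps, and 20log10(2) is about 6.02dB. A quick check of the idealised figures:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal converter:
    20 * log10(2**bits), i.e. roughly 6.02dB per bit."""
    return 20 * math.log10(2 ** bits)

dr_16 = dynamic_range_db(16)  # about 96dB
dr_24 = dynamic_range_db(24)  # about 144dB
```

Real converters fall short of these theoretical figures, but the relative advantage of the extra bits holds.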
Although the sonic effect of compromising one or the other differs, it's impossible to attribute more importance to either sampling rate or bit depth. That's why higher-end systems have higher limits for both. You'll find bit-crushing and down-sampling plug-ins in most DSP software — try them and hear the difference.
I've got a Roland VP9000 and I'm trying to run it with the V-Producer software via OMS. The MIDI interface is a MOTU MIDI Express, and for some strange reason, the V-Producer software keeps telling me the VP9000's MIDI is off-line. I've used OMS on several occasions before with no problems.
Reviews Editor Mike Senior replies: I've not used the V-Producer software, but it sounds like a problem in OMS to me. The VP9000 itself senses whether the MIDI stream is interrupted, presumably by keeping an eye on MIDI Active Sensing messages. If you're getting other MIDI messages through to the V-Producer software (in other words, if OMS is passing MIDI channel data) then the problem is probably that OMS is doing some crafty redirection of unchannelised Active Sensing messages. A quick glance into OMS's settings didn't give me much in the way of leads, but hopefully you can find something to sort this out.