I've noticed that many new mic preamps offer variable input impedance. What specific advantages are there to this feature? Will I only get the benefit when using certain microphones?
The M Audio Tampa, Focusrite ISA428 and Groove Tubes Vipre all feature variable input impedance.
Increasing the impedance means that the microphone has to supply less current, which can help it to produce a greater signal level. More level from the source means less amplifier gain is required in the preamp, which means less noise overall. A higher impedance also means there will be less HF loss from the inherently capacitive cable. This translates as a brighter, clearer sound, often with a slightly more apparent room acoustic.
Reducing the input impedance places greater demands on the microphone to supply current, and this can cause all sorts of odd effects with some mics. In general, dynamic mics will respond to a lower input impedance by producing a more uneven frequency response, as resonances in the electro-mechanical system become more emphasised. This can be thought of as a kind of 'free EQ', though its effects are rather unpredictable and not always useful!
In any case, all of these effects are fairly subtle. They're obvious enough when making direct comparisons, but you won't usually be aware of them in a complete mix. Dynamic mics will almost always show the greatest effects, while good-quality transformerless mics will usually show the least change.
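A back-of-envelope calculation shows why the cable effect in particular is subtle. The source impedance and the cable's shunt capacitance form a first-order low-pass filter with corner frequency f = 1/(2πRC). The figures below are illustrative assumptions (a 150Ω dynamic mic feeding five metres of cable at roughly 100pF per metre), not measurements of any particular setup:

```python
import math

# Illustrative assumptions, not measured values: a dynamic mic with a
# 150 ohm source impedance driving 5 m of cable with a shunt
# capacitance of roughly 100 pF per metre.
R_source = 150.0           # ohms
C_cable = 5 * 100e-12      # farads (5 m at 100 pF/m)

# Source impedance and cable capacitance form a first-order low-pass;
# its -3 dB corner frequency is 1 / (2 * pi * R * C).
f_corner = 1 / (2 * math.pi * R_source * C_cable)
print(f"-3 dB corner: {f_corner / 1e6:.2f} MHz")
```

The corner lands around 2MHz, far above the audio band, which is consistent with these tonal shifts being audible mainly in direct A/B comparisons rather than in a mix.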
Most of my sample CDs are in audio format and have several loops on each track. Is there an easy way to break those tracks up into individual loops, or do I have to carve each one out in my sample editor?
Barry Taylor

SOS contributor Len Sasso replies: You're undoubtedly going to have to do some fine-tuning in a sample editor to get precise loops, but you can significantly speed up the process if you have software that will detect the silent portions of an audio file and strip them out, leaving the remaining portions as individual regions. Digital audio sequencers such as Logic (Strip Silence) and Cubase (Detect Silence) have this feature, for example. Although not quite as convenient, you can also use beat-slicing software such as Recycle to extract the loops.
Silence detection works by searching for areas in an audio file where the level drops below a given threshold for a specified minimum amount of time. The threshold and minimum time are the critical settings. They ensure that you get whole loops rather than breaking a single loop into several slices (over-slicing) or capturing several loops in the same slice (under-slicing). Most sample CDs separate each loop by the same amount of time, typically around half a second. Measuring the space between a pair of loops and using that for the minimum-time setting will usually prevent over-slicing. If you don't get enough slices, reduce the minimum time in small increments. Usually the space between loops is true silence, so the threshold can be set very low to prevent over-slicing. If you still don't get enough slices, increase the threshold in small increments. If your slice-detection software allows for pre-roll and post-roll settings, set both to zero.
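The logic described above can be sketched in a few lines. This is a simplified illustration of what a Strip Silence-style function does internally, not any product's actual code; the parameter names are invented, and samples are assumed to be normalised to the ±1.0 range:

```python
def slice_on_silence(samples, sample_rate, threshold=0.001, min_silence=0.4):
    """Return (start, end) sample indices of the non-silent regions.

    A region boundary is declared wherever the absolute level stays
    below `threshold` for at least `min_silence` seconds -- a sketch of
    the silence-detection behaviour described above, with illustrative
    parameter names (not Logic's or Cubase's API).
    """
    min_gap = int(min_silence * sample_rate)
    regions, start, quiet = [], None, 0
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            quiet += 1
            # Once the quiet stretch reaches the minimum gap length,
            # close the current region at the point the silence began.
            if start is not None and quiet >= min_gap:
                regions.append((start, i - quiet + 1))
                start = None
        else:
            if start is None:
                start = i    # first loud sample of a new region
            quiet = 0
    if start is not None:
        regions.append((start, len(samples)))
    return regions
```

Raising `threshold` or lowering `min_silence` produces more slices, which mirrors the adjustment advice above: fix under-slicing by loosening either setting in small steps.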
Once you have the CD track sliced into individual regions, you will most likely need to adjust their end points manually in a sample editor. Because the loops were played by humans, their tempos may not be exactly as indicated. You can often get away with shortening or lengthening a region by a few milliseconds to exactly match the target tempo. Loops (especially percussion) often contain a pickup at the beginning or a tag at the end to make them work well as single shots at the beginning or end of a phrase. In that case you will need to adjust the end points to extract the 'loop within the loop'.
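The trimming arithmetic is straightforward: a loop of a given number of beats at the target tempo should occupy exactly beats × 60 ÷ bpm seconds. A quick sketch, assuming a 44.1kHz sample rate:

```python
def loop_length_samples(bpm, beats, sample_rate=44100):
    """Exact length in samples of a loop of `beats` beats at `bpm`
    beats per minute: each beat lasts 60 / bpm seconds."""
    return round(beats * 60.0 / bpm * sample_rate)

# A two-bar 4/4 loop (eight beats) at 120 bpm is exactly 4.0 seconds:
print(loop_length_samples(120, 8))  # 176400 samples
```

Trim the region to that sample count and the loop will cycle at the target tempo without drifting.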
Finally, depending on the software you're using, the silence detection function may only produce regions within the audio file, rather than separate audio files. That's all you'll need if you're going to use the loops in the same host, but if you want to use them separately, you'll need to export the individual regions. Most digital audio sequencers and loop slicing packages offer a batch export function for that purpose.
I read in the Cubase VST manual that with the True Tape option switched on it's impossible for your inputs to clip. Does this mean that my recordings will never sound overloaded, and that I don't need to watch the meters any more? If so, what Drive setting should I use for best results?
The True Tape recording option in Cubase emulates tape saturation and compression.
In Cubase VST and SX, True Tape is only an option when recording in 32-bit format, and with True Tape switched on, internal overload is indeed nearly impossible, even if you run the Cubase mixer meters way into the red. However, True Tape operates at the point where the 16-bit or 24-bit recording from your soundcard is converted internally into the 32-bit format used within Cubase, so there's no way it can prevent overloading before this point.
Here's an example. I recently recorded some stereo tracks with True Tape set at 6dB Drive and noticed that both channels of the stereo file peaked at exactly -2.16dB on the Cubase meters. Since I was recording a repeatable performance from a MIDI synth, I was able to rerecord it with True Tape switched off. Sure enough, at two points during the performance the recording momentarily clipped, but True Tape's saturation characteristic masked this. This shows that you have to be even more careful with your input levels with True Tape switched on, since you won't know from Cubase whether your soundcard is running into clipping. Instead, watch the meters of your soundcard's mixing utility, which are ahead of Cubase in the signal chain.
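If you can't keep an eye on your soundcard's meters while recording, you can also check a recorded file after the fact: converter clipping leaves runs of consecutive samples pinned at full scale. Here's a rough sketch of such a detector; the ceiling and run-length values are illustrative guesses, and samples are assumed to be normalised to ±1.0:

```python
def find_clipping(samples, ceiling=0.999, min_run=3):
    """Flag likely converter clipping: runs of `min_run` or more
    consecutive samples at or above `ceiling` (flat-topped waveform).
    Both thresholds are illustrative, not calibrated values."""
    runs, start = [], None
    for i, s in enumerate(samples):
        if abs(s) >= ceiling:
            if start is None:
                start = i              # run of pinned samples begins
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i))  # run was long enough to flag
            start = None
    if start is not None and len(samples) - start >= min_run:
        runs.append((start, len(samples)))
    return runs
```

A single full-scale sample can be a legitimate peak, which is why the sketch only flags sustained runs; a flat top several samples long almost always means the converter ran out of headroom before Cubase ever saw the signal.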
You might wonder why anyone would go to this trouble if they wanted True Tape to add some distortion anyway. Well, while True Tape adds some desirable third-harmonic distortion at controlled levels, clipping generates a whole series of harmonics, including the seventh and higher, which sound very harsh by comparison, and can be audible even at levels as low as 0.01 percent. Not what the doctor ordered in the majority of cases.
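The difference between the two kinds of distortion is easy to verify numerically. In the sketch below, a tanh curve stands in for tape-style saturation (Cubase's actual True Tape algorithm is proprietary, so this is only an analogy), a min/max clamp plays the hard clipper, and a naive DFT measures the individual harmonics of each result:

```python
import math

def harmonic_level(wave, n):
    """Amplitude of the nth harmonic of a single-cycle waveform,
    via a naive discrete Fourier projection."""
    N = len(wave)
    re = sum(w * math.cos(2 * math.pi * n * k / N) for k, w in enumerate(wave))
    im = sum(w * math.sin(2 * math.pi * n * k / N) for k, w in enumerate(wave))
    return 2 * math.hypot(re, im) / N

N = 1024
drive = 1.5  # overdrive a unit-amplitude sine by 50 percent
sine = [drive * math.sin(2 * math.pi * k / N) for k in range(N)]

# tanh is a stand-in for smooth tape-style saturation (the real
# True Tape curve is proprietary); min/max is digital hard clipping.
soft = [math.tanh(s) for s in sine]
hard = [max(-1.0, min(1.0, s)) for s in sine]

for name, wave in (("soft", soft), ("hard", hard)):
    levels = [harmonic_level(wave, n) for n in (1, 3, 5, 7, 9)]
    print(name, " ".join(f"{x:.4f}" for x in levels))
```

Both transfer curves are symmetrical, so only odd harmonics appear; compare how the two series behave as the harmonic number rises, and note that the hard clipper's sharp corner keeps feeding energy into the seventh harmonic and beyond, which is exactly the harshness described above.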
I understand that S/PDIF is unbalanced and AES is balanced, and that for longer cables it is best to use AES. If I'm using cable that is under six feet long, would it really make a difference using AES over S/PDIF?
Technical Editor Hugh Robjohns replies: It all depends on the type of cable you are planning to use, what you're using it for, and the quality of the driver and receiver circuitry involved, especially the latter's clock-recovery circuits.
AES is specifically intended for long cable runs and, being balanced and run at a higher amplitude, it is more immune to interference and to the effects of cable-induced jitter. S/PDIF, on the other hand, is far more prone to both of these effects, having been designed as a more cost-effective and practical solution for interfacing components in a domestic digital hi-fi system.
You can run S/PDIF up to a metre or two without any problems at all, almost regardless of the type of cable, and if you use decent 75Ω coaxial cable (proper 'digital' cable or standard video coax) you can run an S/PDIF signal over several metres.
However, if you are intending to run an S/PDIF signal to a D-A converter, I would say keep the cable as short as you possibly can, because cable-induced jitter will affect the decoder's clock in a detrimental way. The D-A relies directly on the clock embedded in the signal, and cable jitter will mess this up: the longer the cable, the more capacitive it is and the greater the jitter becomes. Very few budget D-A converters have jitter-rejection properties good enough to cope with the effects of long cables. Jitter can be heard as a vagueness in the stereo imaging and a very flat, two-dimensional soundstage on well-recorded acoustic material.
If you are running a signal between, say, a desk and a recorder (or vice versa) then cable jitter has no relevance at all provided it is not extreme. Only the data value is relevant in this application, and not its precise timing, so cable length is less of an issue. In either case, though, give the system the best fighting chance you can by using either proper digital cable intended for S/PDIF applications, or decent low-loss 75Ω video coax and not an RF TV cable.
I read Mallory Nicholls' Studio Installation Workshop in May's SOS with great interest, but I have one question about it. He writes that "the most common cause of reduced bass in a room is objects or walls acting as absorbers". I was under the impression that this would actually be a good thing. I thought that the main problem with the low end is that of standing waves causing constructive and destructive interference at different points in the room. This effect can be tamed by placing bass traps to prevent the reflection of sound waves back into the path of the direct sound. So it would seem that, for the most accurate and even bass response (and indeed an increase in the perceived level of bass in many parts of the room), as much bass trapping as possible would be a good idea, particularly in small rooms like the ones most of us are stuck with! Am I wrong?
SOS Forum Post
Technical Editor Hugh Robjohns replies: It's a question of balance. Standing waves cause significant peaks or troughs at specific frequencies, resulting in a lumpy bass response, and that is clearly undesirable. However, large sofas, plasterboard walls and big patio windows all act like wide-band bass absorbers, either because they actually do absorb the bass energy or because they let it pass straight through (referred to as 'transmission' in the business). Most loudspeakers are balanced for use within a 'typical' room, and the bass end is optimised based on the expectation of a degree of LF support: in other words, the assumption that a certain amount of bass energy will be retained within the room through reflection from the room boundaries. If the walls, windows and furnishings absorb or transmit more bass than expected, you can end up with a bass-light overall sound, which is clearly undesirable as well.
This is a particular problem with modern houses. Older houses are often built largely of block or brick, with plaster applied directly to the solid walls. They also generally have smallish windows, so there's likely to be a lot of internal reflection of low-frequency sound within the room. In my experience, most speakers are balanced for use in this kind of environment, which probably says more about the kinds of houses most speaker designers live in than anything else!
In contrast, modern houses are often less solidly built. Many modern houses have stud internal walls and even the external walls are lined with sheets of plasterboard separated from the block walls by a small air gap. In such cases, both internal and external walls act exactly like wideband bass absorbers. Also, modern features like large patio windows allow bass to pass through more or less unaffected. The consequence of these modern building techniques is a general lack of low-frequency sound energy. That's not to say you can't still suffer troublesome standing-wave effects, just that the overall sound will be lacking in bass energy generally.
So, returning to where I started, it's all a question of balance. Too little absorption can result in audible standing waves, although these occur at specific frequencies and require narrow-band absorbers to rectify the problem. Too much absorption (or transmission through walls and windows) can result in a general lack of LF energy.
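You can predict where the worst standing-wave peaks and troughs will sit before reaching for any absorbers: the axial modes for each room dimension fall at f = nc/2L. A quick sketch, assuming a speed of sound of 343m/s:

```python
def axial_modes(length_m, count=4, c=343.0):
    """First `count` axial standing-wave frequencies (Hz) for one room
    dimension: f(n) = n * c / (2 * L), with c the speed of sound in
    metres per second (343 m/s assumed here)."""
    return [n * c / (2 * length_m) for n in range(1, count + 1)]

# A 4 m room dimension puts its first axial mode at about 43 Hz,
# with the higher modes at integer multiples of that frequency:
print([round(f, 1) for f in axial_modes(4.0)])
```

Running this for all three room dimensions shows which low frequencies need narrow-band treatment; where two dimensions share a mode frequency, the problem is compounded, which is one reason cube-shaped rooms are best avoided.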
My Emagic ES2 soft synth has two filters with the option to arrange them in series or parallel. Aside from going through the factory presets to see how the designers have used them, could you give me some pointers on how and why to use more than one filter and the difference between series and parallel?
Emagic's ES2 and Virsyn's Tera are just two soft synths equipped with multiple filters.
Both ES2 and Tera are good synths for experimenting with multiple filters, and both have easy-to-use matrix modulation schemes. Tera offers more filter options, but the nice thing about the ES2 for getting started is that you can flip between series and parallel configurations with one click. Any modular synth will also allow lots of flexibility in filter routing. Whatever you use, start with a fairly rich source: a couple of sawtooth oscillators tuned a perfect fifth (seven semitones) apart, for example. Noise is another good source. The idea is to have a broad frequency spectrum to work with, otherwise you won't hear much effect.
Once you've spent some time with various high-pass and low-pass configurations, try some other filter types, and try to picture the resulting filter curve as you work. Combinations of band-pass and notch filters in parallel should be easy to picture if you've worked with multi-band EQ; again, the difference is in the modulation. Series configurations are most effective when at least one of the components is a high-pass or low-pass filter; the other component should have its frequency in the passed region. Using multiple filters is a great way to get some subtle motion into your sounds.
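The series/parallel distinction can also be seen numerically. The sketch below uses idealised one-pole filter responses (nothing ES2-specific, and the corner frequencies of 200Hz and 2kHz are arbitrary choices): in series the responses multiply, carving away everything outside the overlap and leaving a band-pass; in parallel the complex outputs sum, leaving the lows and highs intact but scooping out the middle.

```python
def lp(f, fc):
    """One-pole low-pass response at frequency f, corner fc (Hz)."""
    return 1 / (1 + 1j * f / fc)

def hp(f, fc):
    """One-pole high-pass response at frequency f, corner fc (Hz)."""
    return (1j * f / fc) / (1 + 1j * f / fc)

def series_bp(f):
    # High-pass into low-pass: responses multiply, giving a band-pass.
    return abs(hp(f, 200) * lp(f, 2000))

def parallel_scoop(f):
    # Low-pass and high-pass side by side: the complex outputs sum,
    # keeping lows and highs but scooping out the midrange.
    return abs(lp(f, 200) + hp(f, 2000))

for f in (20, 632, 20000):
    print(f"{f:>6} Hz  series {series_bp(f):.3f}  parallel {parallel_scoop(f):.3f}")
```

Printing the responses at the extremes and at the geometric mid-point (632Hz) makes the two shapes obvious; now imagine sweeping one of those corner frequencies from a modulation source and you have the 'motion' described above.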