Get to grips with parallel processing in Studio One, and open up exciting sonic possibilities...
Parallel processing is a technique reported to have come from the studios of New York in the '70s and '80s. Unlike techniques such as hand flanging, which rely on subtle tactile interaction with a mechanical device, parallel processing translated exceptionally well from analogue to digital. In fact, I would argue that a DAW is an inherently better environment for parallel processing than the analogue world.
The most common application of parallel processing is parallel compression, in which a compressed version of a track is mixed with the uncompressed version. Although it is not necessary for the signal to be heavily compressed, it is typical, as is bringing only a bit of this 'squashed' track into the mix.
What parallel compression gives you is an evenness of level, which is desirable in mixes with many elements that play simultaneously through a song, such as pop mixes. At the same time, the presence of the original signal maintains tonal and transient qualities that are often altered by compression. The resulting sound is even, yet punchy.
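The arithmetic behind that 'even, yet punchy' result is simple enough to sketch in a few lines of Python. This is not what Studio One does internally, just an illustrative numpy model: a deliberately crude static compressor (no attack or release, threshold and ratio values picked arbitrarily) blended at a low level with the untouched signal.

```python
import numpy as np

def compress(x, threshold=0.1, ratio=8.0):
    """Crude static compressor: gain-reduce magnitudes above threshold.
    (No attack/release envelope -- just enough to show the idea.)"""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def parallel_compress(dry, wet_gain=0.3, **kw):
    """Parallel (New York) compression: the dry signal plus a little
    of a heavily squashed copy, rather than compressing everything."""
    return dry + wet_gain * compress(dry, **kw)

# A spiky test signal: a quiet bed with loud transients
sig = np.array([0.05, -0.03, 0.9, 0.04, -0.8, 0.02])
mix = parallel_compress(sig)
```

The transients survive almost untouched (the dry path carries them), while the squashed copy lifts the quiet material, so the ratio between loudest and quietest samples shrinks.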
In Studio One v2, drag-and-drop makes it particularly easy to set up parallel compression, so I'm going to explain how I would apply it to a snare drum. Imagine you've got a snare recording (or dig out something suitable) and follow along.
Sending from both snare channels (top and bottom mics) lets me feed the compressor a different blend from the one going directly to the snare submix. Most of the time, the compressor's blend ends up with more of the brighter bottom-mic channel than the snare submix gets.
Many people accomplish parallel compression by duplicating a track and instantiating a compressor on the new track. I often take a different approach, using a send from the source channel to an aux channel, where I put an instance of the compressor. There are a few good reasons for doing this: first, it uses one less playback channel, which lightens the load on the processor a bit (especially if I'm using parallel processing on a number of tracks); second, the compressor feed includes any basic EQ or other corrective processing I may have put on the channel; and, third, I am free to edit the original track any way I want without having to worry about conforming a duplicate track. In circumstances where I want complete independence of the original and processed tracks, however, track duplication is the method I use. Either way, I create a submix of the original track and its processed versions, which makes it simple to control the composite signal in the mix, as well as to apply any processing I want to throw on the combined mess.
Parallel compression could quickly become a favourite technique for you, and there's no reason to stop there; parallel processing is a powerful concept with many applications. Here, for example, are some ways it can fatten up guitars...
For these tricks, you'll need to record a clean guitar track. Duplicate the track using the Duplicate Tracks With Events command instead of using the send method. Now put different amp models on each track, using the bundled Ampire plug-in or your favourite third-party amp emulator. Experiment with sounds that are only slightly different from each other and ones that are very different, like having one clean and one distorted. Adding a little varying delay to one track can enhance the effect even more. Be sure the mix on the delay is set to all wet (delayed) sound.
Left and right delays are often used to make a guitar sound huge. I like to use delays in the 50-120 ms range, depending on the desired effect. If the delay plug-in is on its own channel, the delays can be EQ'd, flanged, or otherwise processed in a different way to the original. You could even go one step further and give each delay its own channel, allowing the delays to be processed differently to each other too.
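The left/right delay trick is easy to model outside the DAW too. Here's a minimal numpy sketch, with illustrative values only (44.1kHz sample rate, 50ms and 80ms delay times, 50% wet level are my arbitrary choices): the dry mono signal feeds both channels, and each channel gets its own differently timed, 100%-wet delayed copy.

```python
import numpy as np

SR = 44100  # sample rate in Hz; all values here are illustrative

def delay(x, ms, sr=SR):
    """Return x delayed by ms milliseconds (zero-padded at the start)."""
    n = int(sr * ms / 1000.0)
    return np.concatenate([np.zeros(n), x])[:len(x)]

def widen(mono, left_ms=50, right_ms=80, wet=0.5, sr=SR):
    """Huge-guitar trick: dry signal up the middle, plus differently
    timed all-wet delays feeding the left and right channels."""
    left = mono + wet * delay(mono, left_ms, sr)
    right = mono + wet * delay(mono, right_ms, sr)
    return np.stack([left, right])  # shape (2, samples)

click = np.zeros(SR // 4)
click[0] = 1.0  # a single impulse makes the echoes easy to see
stereo = widen(click)
```

Because the two delay times differ, each ear hears the echo at a different moment, which is what creates the sense of width.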
Micro-pitch shifting is a common technique for widening the perceived image of a sound source. In this scheme, frequently applied to bass or vocals (both of which are generally panned to the centre), a very small upward pitch-shift (of the order of a few cents) is applied to the signal and panned to one side, while an equal downward pitch-shift is applied and panned to the other side. The original signal is kept in the middle. Very short (under 10ms) delays are also commonly applied to the pitch-shifted versions. If the pitch-shift plug-in is on its own channel, other processing, such as a wee bit of high-frequency boost, can be applied to the pitch-shifted versions to enhance the widening even more.
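To make the cents arithmetic concrete, here is a toy numpy sketch of the up/down scheme. The pitch-shift itself is a deliberate simplification: real plug-ins preserve duration, whereas this just resamples (so the copies come out fractionally shorter or longer), and the ±5 cents and 50% wet level are arbitrary illustrative values.

```python
import numpy as np

def cents_ratio(cents):
    """Pitch ratio for a shift in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

def crude_shift(x, cents):
    """Toy pitch-shift by linear-interpolation resampling. This also
    alters duration slightly -- real shifters preserve length, but
    the widening idea is the same."""
    ratio = cents_ratio(cents)
    idx = np.arange(0, len(x) - 1, ratio)
    return np.interp(idx, np.arange(len(x)), x)

def micro_widen(mono, cents=5, wet=0.5):
    """Dry signal in the middle; +cents copy left, -cents copy right."""
    up, down = crude_shift(mono, +cents), crude_shift(mono, -cents)
    n = min(len(mono), len(up), len(down))
    left = mono[:n] + wet * up[:n]
    right = mono[:n] + wet * down[:n]
    return np.stack([left, right])

tone = np.sin(2 * np.pi * 440 * np.arange(2000) / 44100.0)  # short 440Hz burst
wide = micro_widen(tone)
```

Note how tiny the shift really is: five cents is a pitch ratio of about 1.003, far below what most listeners would hear as out of tune, yet the slight left/right detuning reads as width.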
Parallel processing can also be useful viewed from the opposite direction. On many occasions, I have processed video-game dialogue so radically (for creature effects and similar) that the sound is great, but the intelligibility is no longer acceptable, especially in the context of a noisy first-person shooter. Sneaking a little of the original voice signal back in can restore intelligibility without losing the impact of the awesome processing.
Once you start thinking about parallel processing, all kinds of possibilities start coming to mind. Not all the ideas you come up with for using it will end up sounding as cool as you think they might. On the other hand, certain parallel processing techniques, such as mixing an EQ'd channel with one that is not EQ'd, can really surprise you by how well they work. I've encountered some of these surprises when comparing the original and parallel-processed versions in context. (For the best comparison, find a way to level-match the two versions and then switch between them.)
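Level-matching for that A/B comparison is usually done by ear with channel faders, but the principle is just equalising RMS levels. A minimal sketch (the signals here are made up for illustration):

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def match_level(candidate, reference):
    """Scale candidate so its RMS matches the reference's, for a
    fair loudness-matched A/B comparison."""
    return candidate * (rms(reference) / rms(candidate))

a = np.array([0.1, -0.1, 0.1, -0.1])  # quiet original version
b = np.array([0.4, -0.4, 0.4, -0.4])  # louder processed version
matched = match_level(b, a)
```

Without this step, the louder version almost always sounds 'better', so an unmatched comparison tells you more about level than about the processing.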
While you might describe parallel compression or processed delays as neat tricks, the greatest power comes from parallel processing when you think of it as a technique that can be applied across many different situations. Taking this wider view of the idea can turn parallel processing into a key component of your audio production style.