Vocal Point

Sonar Tips & Techniques
Published July 2015
By Craig Anderton

The lower waveform is the original vocal, while the upper one has been tweaked to even out levels.

Polish your vocal recordings in Sonar.

Vocals are the most important part of a song: they not only tell a story, they’re also the primary means of connecting with your listener’s emotions. And you don’t even need an operatic voice — think of all the singers with marginal voices (Bob Dylan, anyone?) who became celebrated because they projected a personality people liked. You do, however, want to make sure your voice is always presented in the best possible light, and that involves editing as well as mic technique, EQ and dynamics.

A Question Of Balance

It’s common to add compression to vocals to even out the sound, so why does it often sound like there’s something ‘wrong’ after compression has been applied? Simple: the compressor wants a consistent level at its input. If you set the compression to even out the lower-level signals, it will over-compress the higher-level ones, robbing the peaks of their power and interfering with the dynamics. When the incoming level is more consistent, you can use less compression — so, paradoxically, a signal with less dynamic range going in can sometimes come out of the compressor sounding like it has more dynamics.
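The arithmetic behind this is easy to see in a compressor’s static gain curve. Here’s a minimal Python sketch of a generic hard-knee downward compressor (illustrative only — not any particular Sonar plug-in, and the threshold and ratio values are arbitrary):

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Gain (in dB) applied by a simple hard-knee downward compressor."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: no gain reduction
    # Above threshold, output rises only 1/ratio dB per input dB
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

# At 4:1, a peak 12dB over threshold loses 9dB of its power...
print(compressor_gain_db(-6.0))   # -9.0
# ...while a peak only 3dB over loses just 2.25dB
print(compressor_gain_db(-15.0))  # -2.25
```

In other words, the hotter peaks get squashed far harder than the quieter ones, which is exactly why evening out the level before the compressor lets you use gentler settings.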

When it’s time to mix, I solo the vocal and listen to it phrase by phrase. Sections where individual words or phrases are lower in volume, and not intentionally so, can either be normalised (to save time, I’ve set up a keyboard shortcut for this — Process / Apply Effect / Normalize) or have their gain raised. This also works the other way around: you can reduce a word’s level if it stands out too much. Note that if the section you boost comes after any breath intake sounds, those breaths will sound softer in comparison to the boosted section.
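Conceptually, both operations are just gain scaling. A minimal NumPy sketch (hypothetical helper names — this is the general idea, not Sonar’s actual DSP):

```python
import numpy as np

def normalize_peak(phrase, target_db=-0.1):
    """Scale a phrase so its loudest sample reaches the target peak (dBFS)."""
    peak = np.max(np.abs(phrase))
    if peak == 0.0:
        return phrase.copy()  # silence: nothing to scale
    return phrase * (10 ** (target_db / 20) / peak)

def raise_gain(phrase, gain_db):
    """Boost (or, with a negative value, cut) a phrase by a fixed amount."""
    return phrase * 10 ** (gain_db / 20)
```

Either way the waveform’s shape is untouched; only its overall level changes, which is why this kind of edit is so transparent.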

While you’re in editing mode, it’s a good time to make some other tweaks. It’s worth cutting the sections where no singing is happening, and fading into and out of the wanted vocal. Note that the ‘Remove Silence’ DSP process has a bug where the attack and decay times are always 0, so you can’t use it to add fade-ins or fade-outs. However, if you select multiple clips and apply a fade to one of them, Sonar will add a short fade to all of them simultaneously. Where you’re fading over a ‘p’ or ‘b’ sound, a longer fade will reduce popping.
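The fades themselves are nothing more than short amplitude ramps at the clip edges. A minimal NumPy sketch (hypothetical function, purely to show the principle — not Sonar’s implementation):

```python
import numpy as np

def apply_edge_fades(clip, sample_rate, fade_ms=10.0):
    """Return a copy of the clip with short linear fade-in and fade-out."""
    n = min(int(sample_rate * fade_ms / 1000), len(clip) // 2)
    out = np.asarray(clip, dtype=float).copy()
    ramp = np.linspace(0.0, 1.0, n)
    out[:n] *= ramp                    # fade in from silence
    out[len(out) - n:] *= ramp[::-1]   # fade out to silence
    return out
```

Because each edge now starts and ends at zero, the clip boundary can’t produce the abrupt level jump that the ear hears as a click.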

If a doubled phrase doesn’t end at the same time (eg. one held note lasts longer than the other one), split just the word with the held part and use the timing tool (shortcut: Ctrl-click and drag the clip edge) to stretch or compress just that one section. You may need to crossfade the beginning of the stretched clip with the end of the previous clip, or add short fades, so that there’s no click at the transition.

Listen for mouth ‘clicks’. If they’re short enough, you can often cut these out, then slide the two sections on either side of the click together for a crossfade.
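That cut-and-slide edit amounts to deleting the click and overlap-adding a short crossfade at the splice. A hedged NumPy sketch of the idea (illustrative only, with made-up parameter values):

```python
import numpy as np

def cut_and_crossfade(audio, start, end, sample_rate, fade_ms=5.0):
    """Remove audio[start:end] (e.g. a mouth click), then join the two
    remaining sections with a short linear crossfade at the splice."""
    n = min(int(sample_rate * fade_ms / 1000), start, len(audio) - end)
    a = audio[:start].astype(float)   # everything before the click
    b = audio[end:].astype(float)     # everything after the click
    fade_out = np.linspace(1.0, 0.0, n)
    # Overlap-add: the tail of 'a' fades out while the head of 'b' fades in
    overlap = a[len(a) - n:] * fade_out + b[:n] * fade_out[::-1]
    return np.concatenate([a[:len(a) - n], overlap, b[n:]])
```

The crossfade hides the discontinuity where the two sections meet, which is why sliding the halves together works so much better than a butt splice.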

Also, for some reason, Sonar seems very tolerant of selecting snippets of audio within words. For example, part of the word might ‘poke out’ while the remainder is lower; if you’re careful with your selection, you won’t hear any clicks or pops due to the gain change (although you should always audition the edit to make sure).

Finally, even though compression is the default choice for vocals, a good limiter can sometimes give a more natural sound. When I need compression the CA-2A seems to flatter my voice the most, while for limiting, the Concrete Limiter does a great job.

Department Of Corrections

Many otherwise rational people think pitch-correction has taken all the soul out of vocals. That’s not correct: people who don’t know how to use pitch-correction take the soul out of vocals. Actually, pitch-correction has let me put more emotion into my vocals, because I’m not judging myself about the pitch as I’m singing. I just sing and don’t worry about it, knowing that if there’s an errant note or two, it can be fixed. Used judiciously, pitch-correction can encourage you to sing with greater abandon — and often, that’s what a vocal really needs.

However, I never select a clip and do wholesale pitch-correction. Listen to your vocal carefully; if you hear something that actually sounds wrong, isolate that phrase and open it in Melodyne. The Melodyne Essential version included with Sonar can do basic pitch-correction, but as many Sonar users have discovered, the upgrade to Melodyne Studio gives much greater flexibility.

The top blob is the original held note, and the middle one has been split where the vibrato got nasty. Note how Melodyne has detected that the average pitch of each segment has changed. The lower blob has been pitch-corrected and also has reduced vibrato.

For example, suppose the vocal ‘runs out of steam’ towards the end of a note, where the vibrato gets shaky and the pitch drops or rises. If you select that note, Melodyne will base its correction on the average pitch of the whole note. Splitting the clip at the vibrato’s zero crossings changes the average pitch for each segment, so Melodyne detects the pitch accordingly. Now if you correct pitch, each segment will fall into line (if not, a segment may jump a semitone sharp or flat, in which case you should move it into line with the others). Furthermore, if you have Melodyne Studio or above, you can use the Pitch Modulation tool on each segment to make the vibrato more consistent, or to reduce (or increase) the vibrato amount.
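To see numerically why splitting changes the detected pitch, consider a toy pitch track for a held note whose tail sags flat (made-up numbers, not Melodyne’s actual analysis):

```python
import numpy as np

# Hypothetical pitch readings (Hz) for a held A4: 60 in-tune frames,
# then 40 frames where the tail drifts down about a semitone
steady = np.full(60, 440.0)               # the in-tune portion
sagging = np.linspace(440.0, 415.0, 40)   # the shaky tail

# One blob: the average is dragged flat by the tail
whole_note_avg = np.concatenate([steady, sagging]).mean()   # 435.0 Hz

# Split blobs: each segment gets its own, more representative average
segment_avgs = (steady.mean(), sagging.mean())              # 440.0, 427.5 Hz
```

Averaged over the whole note, the detected pitch sits between the in-tune part and the sag, so a single correction pulls the good portion out of tune; per-segment averages let each piece be corrected to its own nearest target.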

Tightening Time

VocalSync is rapidly becoming one of my favourite new Sonar features because of how it can tighten up vocals. You designate one vocal as the ‘guide’ track, then open up the ‘dub’ track as a Region FX (as you do with Melodyne). Turning the VocalSync dial causes the dub waveform to move and stretch so that it matches up with the guide track’s waveform. To get the best results, though, it’s crucial to understand that the VocalSync dial is deliberately uncalibrated: turning it up doesn’t necessarily tighten vocals more. Instead, there’s a ‘sweet spot’ along the knob’s travel where the vocals line up best, and it can fall anywhere from slightly turned up to turned up all the way.

While you’re adjusting VocalSync, the audio is in preview mode. This lets you know how the vocals are sync’ing up, but the audio quality isn’t that great — you need to render it, just as you do with transposition or time-stretching. Of course, you can always undo it if it doesn’t sound as expected.

The longer the clip you want to sync, the harder the algorithm has to work. Splitting a longer phrase into smaller pieces will often give optimum results.

For the best visual feedback, zoom in and increase the vertical track height. This makes it easier to line up the waveforms visually, which helps when VocalSync is in Preview mode.

The blue is the ‘guide’ track, and the white, the ‘dub’ track (these have been coloured for clarity). The lower waveform shows the uncorrected version. Note how the dub-track note outlined in pink ends too soon, the note outlined in orange is too long, and the notes outlined in red have transients that are way off compared to the guide track. The upper waveform shows the post-VocalSync processing.

Also note that you can choose different algorithms for the online preview and the offline rendering. The default algorithm for rendering is (not surprisingly) Radius Solo Vocal, but for some voices other algorithms may work better (if you’re James Earl Jones, try Radius Solo Bass). To change algorithms, with the clip selected and the VocalSync Region FX window open, choose the Clip tab in the Inspector, open the AudioSnap section, then choose the desired algorithms for online and offline Render.

When All Else Fails...

As you do your vocal editing, you may run into sections that need to be redone. Although the comping method introduced in Sonar X3 has largely taken over as the preferred way to fix these kinds of issues, don’t forget the merits of punching. Simply drag over the region where you want to punch, then enable the Auto-Punch Toggle button. To clear the existing audio from the punch zone, hit Ctrl-X, or click the region’s handle and drag the region off the track. If you do a lot of punching, consider inserting a temporary blank track and shift-dragging the selected region to it, in case you change your mind and want to revert to the original.

However, remember that having the Auto-Punch Toggle button enabled has tripped up many a Sonar user who couldn’t figure out why the program wouldn’t go into record when trying to record outside of the punch zone.
