With Cubase 4.1, Steinberg overhauled the Sample Editor, creating a very powerful new tool for real-time pitch and time manipulation.
Back in this column in SOS November 2007, John Walden took us through how to use the time‑stretching and Audio Warp functions in Cubase 4. But no sooner had that issue hit the shelves than Steinberg decided to release version 4.1, which brought a significant overhaul of these functions in an attempt to make them easier to use. Of course, there are still several articles and tutorial videos floating about on the Web that instruct you how to do things in the earlier incarnation of Cubase 4, and if you managed to download the update without also separately downloading the updated manuals (as I initially did) you'll probably be very confused. So before I go on any further, I can't stress enough how important it is to keep the Cubase manuals in step with your software updates.
We're now on v4.5 and it's high time we investigated the new features, so in this article I'll take you through what's changed, and consider a few potential applications for Hitpoints and Audio Warp along the way.
There isn't really much new 'under the bonnet', because most changes have been to how you access the existing audio processing functionality, and how things are presented in the GUI. There have also been a few changes to the terms used to describe some of the functions, which do make sense but can be a little confusing at first.
To perform real-time time-stretching in previous versions, you had to go into the audio Pool, define the tempo of your clip (if it wasn't already defined) and tick a box that put the clip into 'musical mode'. You can still work this way if you prefer, and it makes sense to do so if you wish to enter this information for several clips that you know to be of the same tempo. You could also access this via the rather fiddly toolbar of the Sample Editor — the logic, presumably, being that you'd want to be able to define the tempo while working with the audio clip in question. But from v4.1, access via the Sample Editor has changed considerably — and for the better.
When you double-click on an audio event to launch the Sample Editor, it will open just as it used to, but some of the buttons from previous versions are 'missing' from the top toolbar: you no longer have any controls there for activating musical mode, or Audio Warping, for example. This is because everything has been rationalised and moved to the left‑hand side of the Sample Editor window. In fact, you have access to far more processing options there than you had previously from the toolbar, including some of the processes that are accessible via the Audio menu, such as off‑line application of plug‑ins. Taking the Sample Editor menus from the top down, there are sections called Definition, Playback, Hitpoints, Range and Processes, which we'll look at in turn.
Sadly, what Steinberg haven't yet done is integrate any of these features into the Arrange page, as they did when they introduced edit-in-place MIDI, for example — and this is something that I'll explore more towards the end of this article.
The Definition section is, as the name suggests, where you define the tempo of the audio event that you've opened in the Sample Editor. If you're working with a fixed‑tempo loop that starts with a downbeat, this is incredibly simple: you make sure the time signature and bar length are correct (you may need to audition the loop to count the latter), then click on the crotchet Preview symbol so that it lights up orange and hit Auto Adjust. This should force the grid to match the tempo of the audio file. We'll come on to the time‑stretching functions themselves later: suffice to say that what we're doing here is telling Cubase where the bars and beats are in the audio file, in order that it can later know how to automatically stretch and align them. Think of it as metadata for the audio file.
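If you're curious what Auto Adjust is effectively computing, the arithmetic is simple: beats divided by minutes. Here's a little Python sketch by way of illustration — the function name and figures are my own, not anything from Steinberg's code:

```python
def loop_tempo_bpm(num_bars, beats_per_bar, duration_seconds):
    """Tempo implied by a fixed-tempo loop of known length.

    For example, a two-bar 4/4 loop lasting four seconds
    contains eight beats, so it runs at 8 * 60 / 4 = 120 bpm.
    """
    total_beats = num_bars * beats_per_bar
    return total_beats * 60.0 / duration_seconds
```

This is why you need the bar length and time signature to be right before hitting Auto Adjust: miscount the bars and the implied tempo comes out wrong by the same factor.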
Where you don't have a steady tempo, automatic time‑stretching and tempo definition can be something of a minefield, and this is where the Manual Adjust function comes in handy. Using this, you're able to drag the first beat of the grid (denoted by a green flag) to align it with the first downbeat of the audio clip. You then click on the waveform at the first beat of the second bar (of the ruler) and a red flag appears, which you can drag to align with the first beat of the second bar of audio. You can perform the same task for any bar, but dragging any of these red flags will adjust the whole grid: it won't warp that bar alone.
To warp the grid so that the bars match an audio file's uneven tempo, you need to Alt‑click (Option‑click on Macs), which turns the flag pink. In this state, the flag can be dragged to the relevant position, stretching or shrinking the previous bar, and slipping all the following ones forward or backward accordingly. This enables you to define a variable tempo for the audio that the time‑stretching algorithms can reference. You can drill down further too: Control‑clicking will give you a blue flag, which allows you to stretch or shrink an individual beat on the grid, without affecting the position of the beats or bars to either side.
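Conceptually, those flags build a piecewise‑linear map between musical positions and positions in the audio file: constant tempo between any two anchors, with a kink wherever you've dragged a flag. The Python sketch below illustrates the idea — the anchor format and function are my own invention, purely to show the principle, not Cubase's internals:

```python
def audio_time_for_beat(beat, anchors):
    """Piecewise-linear warp map from musical beats to audio seconds.

    `anchors` is a sorted list of (beat, seconds) pairs, playing the
    role of the grid flags in the Sample Editor. Beats falling between
    two anchors are interpolated linearly, i.e. the tempo is treated
    as constant within each segment.
    """
    for (b0, t0), (b1, t1) in zip(anchors, anchors[1:]):
        if b0 <= beat <= b1:
            frac = (beat - b0) / (b1 - b0)
            return t0 + frac * (t1 - t0)
    raise ValueError("beat lies outside the defined grid")
```

Note how, in the example anchors below, bars after a dragged flag simply inherit a new local tempo — exactly the behaviour you see when a pink flag stretches one bar and slips the rest along.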
Once you've defined the tempo of the clip, you can snap the audio to the project tempo by clicking in the Playback tab. All you need do is ensure that the crotchet symbol is lit, indicating that the clip is in 'Straighten Up' mode — what was previously called 'Musical Mode'. Not only will the audio snap to the project tempo, but it will stretch and shrink as you change the tempo. As well as the obvious potential for remix fun, this is great for subtler tweaks, such as bumping up the tempo of a chorus by a couple of bpm using the global tempo track.
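Put another way, once the clip is in this mode its musical length is fixed, so its duration in seconds simply scales with the ratio of the two tempos. A tiny, hypothetical Python illustration of that scaling:

```python
def stretched_duration(original_seconds, clip_bpm, project_bpm):
    """Playback duration of a tempo-matched clip once it's snapped
    to the project tempo.

    The number of beats in the clip stays the same, so its duration
    scales by clip_bpm / project_bpm: a four-second loop defined at
    60 bpm plays back in two seconds at a 120 bpm project tempo.
    """
    return original_seconds * clip_bpm / project_bpm
```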
It's also in this section that you can determine which algorithm is used for real‑time time‑stretching. There are different ones for plucked instruments, percussion, vocals and so on — all of which are described in detail in the manual. Suffice it to say that which one you choose makes a significant difference to the results, so be sure to select the best one for each clip.
Beneath this section, you're able to alter the degree of quantisation and adjust the feel, using the Swing slider. This may seem like a straightforward function, but it's incredibly useful if you're trying to work with loops from disparate sources, or perhaps tweaking the groove in a remix. You also get controls for fixed pitch‑shifting up to an octave either way, and locking to the new global Transpose track, so you can alter the pitch of the audio clip in real time without affecting the length. I've found this very useful for tasks like dropping or raising the pitch of a kick drum so it fits better with a bass part.
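For those who like to see the mechanics, swing generally works by pushing every other grid subdivision late. Here's a generic Python sketch of the idea — I should stress that the formula and its scaling are my own illustration, and I'm not claiming Cubase's Swing slider is calibrated this way:

```python
def swung_position(beat, swing=0.0, subdivision=0.5):
    """Shift off-beat grid positions later to add swing.

    `beat` is a quantise-grid position in beats; `subdivision` is
    the grid step (0.5 = eighth notes). With this scaling, at
    swing = 1.0 an off-beat eighth is pushed from halfway through
    the beat to two-thirds of the way through -- the classic
    triplet 'shuffle' feel.
    """
    step = round(beat / subdivision)
    if step % 2 == 1:  # an off-beat subdivision: the 'and' of the beat
        return beat + swing * subdivision / 3.0
    return beat
```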
The sharper‑eyed amongst you may have noticed that I skipped a function in the Playback section: the Free Warp facility, which is arguably the most practical of the real-time processes found here. What it does is allow you to stretch or shrink sections of the audio file manually, without having to force it to the Project tempo. In other words, you can Warp the audio to fit a grid, rather than Warping the grid to fit the audio. And you don't need to define the audio file's tempo to do this.
The new operation manual goes into plenty of detail about using this feature to lock audio to tempo, but I find that its most useful function is the simplest. To create a Warp Tab, click and hold at any point in the file; drag the tab in the timeline to position it where you want on the audio file; then drag the tab on the waveform itself to warp the audio. It's easy this way to perform basic timing corrections — say, tweaking the timing of a double-tracked guitar. You just create three Warp Tabs: the first defines the beginning of the section that you want to be Warped (probably the end of the previous note); the second the time-critical point that you wish to move (the start of the note you want to move); and the third the end of the audio that you want to be processed (probably the start of the next note). Dragging the second of these Warp Tabs will bring the note into line, stretching and shrinking the audio in between the two other tabs. As you do this, you can also see the audio change in the Project window at the same time, although it helps to have plenty of screen space available to line things up by sight.
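If you like to think in numbers, dragging that middle tab changes the local stretch ratio on either side of it: one region is stretched, the neighbouring one shrunk, and everything outside the outer tabs is left alone. A small, hypothetical Python sketch of that book-keeping (the representation of tabs as plain time values is mine, for illustration only):

```python
def segment_stretch_ratios(tabs_before, tabs_after):
    """Local stretch ratio between each pair of adjacent Warp Tabs.

    `tabs_before` holds the tab positions (in seconds) before the
    middle tab was dragged, `tabs_after` the positions afterwards.
    A ratio above 1.0 means that region has been stretched (slowed);
    below 1.0, shrunk.
    """
    ratios = []
    for (s0, s1), (t0, t1) in zip(
            zip(tabs_before, tabs_before[1:]),
            zip(tabs_after, tabs_after[1:])):
        ratios.append((t1 - t0) / (s1 - s0))
    return ratios
```

With three tabs, dragging the middle one later stretches the first region and shrinks the second in exact proportion — which is why the outer tabs matter: they confine the processing to the note you're correcting.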
Where you need to perform many such Warps, it is often easier to generate Warp Tabs for every note, so that you can go through, tweaking the ones that require it. You could of course use the grid-warp approach described earlier, but you won't always want to quantise the whole file; sometimes it's nice to choose which timing imperfections to correct and which to leave for their, erm, endearing human quality. It's also helpful for dialogue editing, where you're not working to a musical tempo.
The Hitpoint editor is handy for a number of applications that we've covered many times before (such as creating REX‑style audio slices, or extracting groove templates) and it is equally useful when it comes to the Audio Warp process. Generating Hitpoints is pretty intuitive: you click on the Hitpoint section in the Sample Editor and adjust the Sensitivity slider to make the Hitpoints appear. Once you're broadly happy with their position, you can tweak or delete them, or add new ones by clicking in the timeline.
Personally, I've always found Hitpoint detection frustrating in Cubase, as the Sensitivity slider is rather fiddly. If you have a track with lots of leakage it can be difficult to set the threshold accurately. Where the leakage is dispensable, an alternative is to use the Detect Silence function to gate the part and split it into separate events. Using the glue tool, you can reassemble the part to make it the correct length (remember to draw in empty audio parts at the beginning and end of the loop using the pencil tool, so that the part is the same length as the original). Now, simply bounce the part to a new audio file, using the Audio/Bounce Selection command, which you can access by selecting the new part and right‑clicking. I offer this as an alternative approach because Detect Silence seems to be a more powerful, user-friendly tool for detecting transients (it would be great to see a similar user interface in the Hitpoint editor). You should find that Hitpoint detection works much better on the new part.
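For the curious, the core of such a gate isn't complicated. Here's a crude Python sketch of splitting audio into non-silent regions by level threshold — purely illustrative, with made-up parameter names, and nothing like as refined as the real Detect Silence function:

```python
def detect_regions(samples, threshold, min_gap=1):
    """Split a mono sample sequence into (start, end) index pairs
    of regions whose absolute level reaches `threshold`.

    `min_gap` is the number of consecutive below-threshold samples
    needed to close a region -- a stand-in for the 'minimum time'
    settings a real gate offers, so brief dips don't split a note.
    """
    regions, start, end = [], None, 0
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:
                start = i
            end = i
        elif start is not None and i - end >= min_gap:
            regions.append((start, end + 1))
            start = None
    if start is not None:
        regions.append((start, end + 1))
    return regions
```

A real implementation would work on RMS level over short windows rather than individual samples, of course, but the open/close logic is the same.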
Whichever way you go about generating the Hitpoints, once you have them, you're able to use them to generate Warp Tabs. You can access this function via the Audio/Realtime Processing menu, as shown in the screenshot (below left), but for some reason there's no control for this in the Sample Editor: it would make sense to have a dedicated button for this alongside the Free Warp button.
This is all pretty straightforward where you're working with material that has distinct transients, but not all sounds do. If you have something with a slower build and a time‑critical event part way through (a reversed cymbal hit, for example, where the end of the note is the crucial quantise point), you can define a Q‑point for an individual Hitpoint, which will be referenced by any slicing or audio‑warp processing to ensure that the note plays in time. Q‑points aren't enabled by default (I've no idea why), but you can activate them by navigating to the Editing/Audio page and ticking the 'Hitpoints Have Q‑Points' option.
The two remaining sections in the Sample Editor are straightforward but useful additions. The Range section allows you to quickly and easily select a range within the audio clip — for example, selecting the area between the loop markers; or turning a selected range into a loop, which can be particularly useful when you're defining the part's tempo. The Processes section simply provides a convenient alternative means of accessing the range of off‑line processes from the main menu.
All of the processes that I've described so far combine to make the Sample Editor a powerful tool for audio manipulation of individual audio files. Treating audio in this 'elastic' way — both in terms of tempo and pitch — will be a boon to anyone who wants to manipulate loops. Combined with tools such as the new Global Tempo track, and the Arrange track (formerly called the Play Order track), you also have pretty much unlimited flexibility for remixing.
Bear in mind, though, that it's easy to get carried away. Whenever you process audio there will be at least some undesirable artifacts, and it's often a case of "less is more", unless you're stretching or pitch‑shifting as an effect. It isn't really feasible to transpose a bass line by an octave, for example, or to double the length of a loop.
It's also worth considering that the real‑time algorithms we've been using here to Warp audio are of lower quality than Cubase's off‑line algorithms, but thankfully the Sample Editor includes a button that allows you to 'flatten' Warped audio. This uses the higher‑quality off‑line algorithm to perform the same processes that you've already 'sketched out', so if there are only faintly detectable artifacts, you may find you can get away with it thanks to this flattening process. In any case, it would be worth doing this for each Warped track before bouncing your final mix.
I said above that the Sample Editor is a very powerful tool for manipulating individual audio files. I've also lamented the fact that you're not able to perform these operations directly in the Project window. Not only would that make simple processes easier to access, but it would allow you to line up different parts by sight, and to decide which to bring in to line with the other. Technically, this may not be as straightforward to implement as it sounds: on any given track you might be working with many different audio files, for example. But it can't be an insurmountable problem.
I first started examining the Audio Warp functionality when reader Sam Grant asked what Cubase offered that could compete with Pro Tools' 'Elastic Time' function. For those who aren't familiar with Elastic Time, it's a very powerful and intuitive means of manipulating the tempo and length of recorded audio which has the advantage that you can use it on multiple tracks simultaneously. So, for example, you're able to detect the transients in a multitrack drum session; then, by tweaking the timing of a transient on one track, you can move the other transients — while preserving the timing differences between them (so that you don't pull a carefully placed distant room mic to the same place as the close-miked snare, for example). Cubase is great when you want to quantise audio and to force it to tempo, but I couldn't find any way to manipulate multiple parts, nor to replicate the Elastic Time function; at least not without tedious manual editing. I'm told that it is something that Steinberg R&D are actively researching. Meanwhile, if you've figured a clever way around this, drop us a line and we'll spread the word!