As well as having all-round audio engineering ability, a remixer needs to be proficient in remix-specific processes. We explore the essential software and skills.
In the two previous articles in this series, I looked at what remixes are, why they exist, and some of the creative decisions facing remixers. Now it's time to move on to some technical aspects of remixing.
You will, of course, use plenty of generic mixing techniques (compression, EQ, delays, fader‑riding/automation and so on), but there are some techniques that you'll use more regularly than a typical mix engineer, because you need to change melodies, chord structures and tempos when working with pre‑recorded material. The most obvious processes that you need to master are:
- Time‑stretching
- Pitch and melodic 'correction'
- Beat‑slicing and rearrangement
In this final article, I'll run through some of the remix‑specific aspects of these.
Before the advent of digital audio, time‑stretching could only be done in a very rudimentary way, and with significant drawbacks. In analogue recording, the pitch and the tempo of a recording are inextricably linked: change one, and you inevitably change the other.
As a remixer, you want to be able to change the two independently and, fortunately, digital time‑stretching — which now comes as part of pretty much all modern DAWs, as well as in dedicated form — allows us to do this. The quality of many of the latest systems is acceptable for most situations a remixer will encounter, although there are exceptions. I'll consider one of those specifically later on, but in the meantime, let's assume you're not being asked to stretch the audio too much. One last thing: do remember to always work on a copy of the original file!
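It's worth putting a number on that analogue link: playing a recording faster raises its pitch by exactly the speed ratio, which works out at 12·log2(ratio) semitones. Here's a quick illustrative sketch of that relationship (the function name is my own, not from any particular tool):

```python
import math

def varispeed_semitones(speed_ratio):
    """Pitch shift (in semitones) caused by analogue-style varispeed:
    playing audio faster raises pitch by the same ratio, and there are
    12 equal-tempered semitones per doubling of speed."""
    return 12 * math.log2(speed_ratio)

# Speeding a track up by 10 percent drags the pitch up with it,
# by roughly 1.65 semitones:
shift = varispeed_semitones(1.10)
```

Doubling the speed gives exactly +12 semitones (an octave up), which is why digital stretching that decouples the two was such a breakthrough for remixers.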
A Better Stretch: One thing you learn very quickly is that some source material time‑stretches well, and some doesn't. You'll almost certainly get far better, more natural‑sounding results if you have all of the parts you want to time‑stretch separated into individual tracks. However, there will be times that you need to time‑stretch entire sections of stereo tracks, and while this is certainly possible, you shouldn't expect miracles. The more harmonically and rhythmically complex the source material is, the less likely it is for the stretched version to sound natural. Low‑frequency signals have a tendency to get a little 'wobbly' when you time‑stretch them, and while kick drums sometimes work OK (because they're short and percussive), sustained bass notes can suffer very badly.
Audio Algorithms: Not all time‑stretching algorithms work equally well on all sources, either. Some work better with rhythmic material, some with melodic, others on monophonic sources. Many DAWs now come loaded with several algorithms, each intended for different material. These offer a good starting point, but if you're not getting the right results, experiment: get to know your tools and use what works.
Vital Vocals: In a remix, vocals are critical; the ear and brain tend to focus on them, and because they're often high in the mix and very clear, undesirable artifacts are often clearly audible. It's thus important to use the best algorithm you have available for time‑stretching vocals. Don't be afraid to spend the time trying out different algorithms, different settings, or even different software altogether.
Personally, I use iZotope Radius for 95 percent of my time‑stretching needs. This actually works as an add‑on inside Logic's 'Time & Pitch Machine', rather than as an independent application — which makes it convenient — and while it has only two algorithms, Solo and Mix, the quality of the results sells it to me. Most dedicated time‑stretching software interfaces work in a similar way, allowing you to enter the source and destination as bpm, timecode, or bar/beat/sub‑measure values. For the most part, the bpm values are the most relevant to a remixer, although the others can become useful when matching audio‑visual material to a specific length for movies, adverts and TV. Ever more complex time‑stretching tools are appearing all the time, such as Elastic Time in Pro Tools and Audio Warp in Cubase, both of which enable you to 'warp' audio that perhaps wasn't recorded to a click so that it fits the grid. Such tools can be very useful in their own right, but they're not always the best choice for a simple stretching job.
Whatever software you use, once you start experimenting you'll discover that there are limits to what it can achieve. In general, once you start approaching the threshold of 'acceptable quality' from the software, you'll probably be approaching the limits of what sounds 'natural' from the audio file anyway. For example, if you were trying to time‑stretch a guitar part from 90bpm to 130bpm, you'd be pushing the limits of what the software was capable of — but you would probably also notice that the actual 'playing' of the guitar stopped sounding natural.
The same is true of vocals, which will start to sound unnatural well before you reach that degree of tempo change. Anything up to a 10 percent change tends to pose no problems, although the phrasing might sound a bit odd if you speed up an already 'pacy' vocal, or slow down one that was quite laid back to start with.
In general, the further you move from the original tempo of the vocal, the more problems you will have with it, and since record companies have a habit of pushing remixers to (and sometimes beyond) their limits, we need the ability to get seriously creative. Every remixer will, at some time, face a track whose original tempo causes problems. Many Soul Seekerz remixes I've worked on have had a 'destination' tempo of around 128bpm. For these, the 'danger' tempo of the original track is 96bpm, because to get the original parts in time with your remix you can time‑stretch either up to 128bpm or down to 64bpm (half‑time), and both options are a full third away: you have to go from 100 percent to either 133.33 or 66.66 percent to make it fit. Neither is going to sound great!
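The arithmetic behind those figures is simple enough to sketch. Assuming the stretch amount is expressed as a percentage of the original tempo, as most stretching tools allow, a quick illustration:

```python
def stretch_percent(source_bpm, target_bpm):
    """Time-stretch amount as a percentage of the original tempo:
    100 means no change, above 100 is faster, below 100 is slower."""
    return 100.0 * target_bpm / source_bpm

# The 96bpm 'danger' tempo against a 128bpm remix:
up = stretch_percent(96, 128)    # stretch up to the remix tempo: 133.33...
down = stretch_percent(96, 64)   # or down to half the remix tempo: 66.66...
```

Either direction is a third away from 100 percent, which is exactly why 96bpm source material is the worst case for a 128bpm remix.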
It's a tough judgement call, and will fundamentally shape the sound of your remix. I personally would tend to try to stretch the vocal up to 128bpm, because dance music is, after all, about dancing: slowing a vocal down will, perhaps more than anything else, sap the energy you have worked so hard to create.
When time-stretching vocals, you'll notice vibrato more than anything else. I've not yet discovered software that can overcome this problem automatically, so I regularly use two manual techniques. One is quicker but sounds less natural; the other is more long‑winded, but ultimately sounds better. I usually use a combination of the two.
The Melodyne Method: The easier option is to use Melodyne, which, in plug‑in or stand-alone form, is spectacularly useful. This software has an option to adjust the level of pitch variation (vibrato) on any note, which allows us to take any particularly noticeable notes and reduce the vibrato depth. It really is that simple! As in any area of music production, listen to the part in context, because if you go too far and flatten the vibrato completely, you'll be left with that 'Cher' effect. I find that the best way is simply to loop a selection, play the track, and adjust the notes as it loops, gradually reducing the vibrato amount until it sounds acceptable.
The Cut & Paste Method: If you find you'd have to flatten the vibrato completely in order to tame it (and you don't want that Cher effect), move on to the second option. There's nothing especially magical about this: it's just a case of cutting and pasting and applying judicious volume fades. If you have a sustained note with excessive vibrato (which is where you'll notice it the most), simply:
- Cut the time‑stretched vocal at the point where the vibrato begins.
- Cut the corresponding section from the un‑stretched version and paste it onto the stretched version.
- Cross‑fade the join between the two to make it smooth. You might need to fade out the end of the unstretched note at the point where the stretched version would have finished.
If the stretch is quite extreme, you might need to actually cut a section out of the sustained, unstretched version in order to make it finish 'naturally', but this technique will still give better results than any other method I've yet discovered. In fact, I'd use it exclusively if it didn't take so long to go through every single line of multiple vocal takes. But deadlines aren't always what we'd like them to be, so you might have to 'cut corners'. Just make sure you don't set the bar too low with the vocals. (Have I made that clear enough yet?)
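The splice itself can be sketched in code. This is a minimal illustration, assuming mono audio held as NumPy arrays at the same sample rate and a simple linear cross‑fade (an equal‑power curve may sound smoother on sustained notes); the function and variable names are my own:

```python
import numpy as np

def splice_unstretched_tail(stretched, unstretched_tail, join, fade_len):
    """Replace everything after `join` (in samples) in the stretched vocal
    with the un-stretched tail, cross-fading over `fade_len` samples so
    the edit is hard to hear."""
    fade_out = np.linspace(1.0, 0.0, fade_len)   # stretched note fades out
    fade_in = 1.0 - fade_out                     # un-stretched tail fades in
    head = stretched[:join]
    overlap = (stretched[join:join + fade_len] * fade_out
               + unstretched_tail[:fade_len] * fade_in)
    return np.concatenate([head, overlap, unstretched_tail[fade_len:]])
```

In a DAW you'd do this with region edits and fade handles rather than code, of course, but the principle — overlap the two versions and fade between them — is the same.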
With everything at the right tempo, I'll move on to the next steps: pitch correction and pitch manipulation (which are two distinct processes). Sometimes when you get the vocals — and I'm concentrating on vocals here, as I've rarely found a need to do this for other musical parts, although the same theories should apply — they're the final, compiled, compressed, EQ'ed and tuned vocals, and other times they're the 'raw' files. Ideally you want to work with the final files, because then you know that what you put in your remix is what went into the original song. Also, if they're already good 'pop' vocals, recorded and produced in studios with the creamy, vintage analogue toys we all lust after, why try to better them? You won't always get the option, however, and sometimes the raw vocals are less than perfectly tuned. (It does happen, and sometimes with artists that you might not expect!) You just have to put on your professional hat and get to work.
There are many different tuning products, Auto‑Tune, Waves Tune and Melodyne being some of the more popular ones, and some DAWs now come with such processing built in. My preference is, once again, for Melodyne, because it suits my workflow and gives me quality results. Automatic pitch‑correction sounds artificial by comparison. You need to make your remixes stand out, and attention to this sort of detail is a good way to make that happen.
For those unfamiliar with Melodyne (the plug‑in version in this case), the basic principle is that you 'record' the audio into the plug‑in, where it's processed off‑line before showing up on a familiar piano‑roll grid, just like MIDI data. Each syllable/note shows up as what is endearingly known by some as a 'blob', its thickness representing amplitude (volume), and its length representing duration. You can change many things about these blobs: amplitude, pitch centre, pitch variation, timing, pitch transition‑rate... But this isn't a review, and here we're interested only in pitch. The vertical position of the blob represents the note's centre frequency, and you'll be able to see easily when a note is way out of tune, as, in theory, the centre of the blob should be lined up with the centre of the horizontal bar that represents each note. The notes that are most 'wrong' are the ones that we generally want to correct. You just select the right tool, grab the blob, and move it back towards the centre line. A graphical readout tells you the 'root' note, and how many cents above or below that perfect centre the audio currently is.
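That cents readout is straightforward maths. As an illustration (this is not Melodyne's actual code, and it assumes standard A=440Hz equal temperament), here's a tuner‑style sketch:

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def pitch_readout(freq_hz, a4=440.0):
    """Nearest equal-tempered note name and the deviation from its centre
    in cents (100 cents per semitone)."""
    midi = 69 + 12 * math.log2(freq_hz / a4)   # 69 = MIDI number for A4
    nearest = round(midi)
    cents = (midi - nearest) * 100
    return f"{NOTE_NAMES[nearest % 12]}{nearest // 12 - 1}", cents

note, cents = pitch_readout(446.0)  # slightly sharp of A4
```

A note at 446Hz, for example, reads as A4 a little over 23 cents sharp: clearly audible against an in-tune backing, but an easy drag back to the centre line.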
Dragging everything to '100 percent accurate' doesn't always sound best. For the same reason that pianos are not tuned to mathematically perfect intervals, vocals that are tuned mathematically perfectly often lack 'vibe'. So focus on sorting out only the problem notes, and do it in context, so that you don't over-correct.
Once correction is dealt with, you can consider the more creative uses of Melodyne. I mentioned in a previous article that you can alter the context of a remix by changing from a predominantly major chord sequence to one that's predominantly minor, or vice versa, and if you're simply switching from major to relative minor chords, you shouldn't have any problems with clashing vocal melodies. However, the change isn't always that simple.
For most melodies (disregarding harmonies for now), a large number of chord sequences will work. However, as dance music is essentially very pattern based, there might come a time when you have a repeating four-, eight- or 16-bar sequence that works with everything except one or two notes of the vocal melody (perhaps where the singer has introduced variation). You could develop the chords further, to find a repeating sequence that works with the whole melody, but that sometimes seems a shame when you have a groove that otherwise works perfectly, so another option is to change the chords only at that point. This kind of approach is becoming more acceptable in the commercial side of club music, although it's preferable, for the most part, to retain some continuity.
If neither of the above is an option, a third possibility (which comes with a big caveat) is to use Melodyne to change the vocal melody to fit the chord sequence. As long as you're only moving notes a semitone or two, which would normally be enough for our purposes, the quality of the pitch-shifting should be fine in the context of the mix. But many artists and songwriters are protective of their work, so you might incur their wrath. If you have the time, you can always approach the label, who could possibly approach the artist/writer to get their approval, but normally there won't be time, so you have to use your judgement. If you're a writer yourself, you may have a better sense of what might be acceptable to another writer, but generally, the bigger the artist, the more likely you are to face opposition. You can take the risk that nobody will notice (as I've done in the past), and mostly they won't, but if they do and they don't like it, be prepared for the possibility that they'll be annoyed and demand you change it back. So be respectful with this technique, and if it backfires, you've been warned!
Propellerheads' ground‑breaking Recycle software brought beat‑slicing to the masses, and similar functionality is now available in most DAWs. It's a phenomenally useful way of making sampled loops work at different tempos, of course, but as with most techniques, it also has more abstract potential. By feeding vocal recordings in, for example, and setting markers carefully, you can slice each phrase into individual words, and perhaps even sub‑divide words that flow over a few notes in a legato way. Once you have these individual vocal 'slices', you have various options:
Quantise: You could quantise the vocals, of course, but as they don't usually have clear transients, the results are unpredictable.
New Tunes From Old: More interestingly, you can use slicing to make new melodic phrases. In more than one remix, I've taken random vocal slices and moved them around to make part‑rhythmic, part‑melodic hooks. In one case (a remix of Robyn 'Handle Me'), some people even asked where that vocal part was in the original version, as they couldn't hear it! It can be something totally random, as in this case, or a way of taking words from different parts of the song to construct new phrases. Again, be careful not to take things out of context and annoy the artist or writer; that just isn't respectful. But it still gives you a lot of freedom to get creative.
DAW Slicing: If you prefer, you can simply slice the audio manually in your DAW's arrange page (or equivalent), and with vocals this can actually be more intuitive, because you can hear them in context and just cut them where it sounds right. It's a bit of an old‑school approach (imagine you're cutting and splicing tape) but it doesn't take long once you get used to it.
Instrument Parts: Sometimes you can also use this process to work on other musical parts, but it is more difficult. A recorded guitar part, for example, is much harder to use with an alternate chord structure than a vocal. You might be able to use different chords from different places, perhaps putting a G‑minor chord from the original parts over an E-flat major in your new chords, for example, but with the current state of technology, that's about it. However, things may be about to change with the promise of so-called Melodyne DNA, which you can read about in the box below.
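Whether the slices come from Recycle‑style software or manual cuts, the rearranging step amounts to concatenating slices in a new order. A minimal sketch, assuming the audio is a mono NumPy array and the markers are sample positions you've chosen by ear (all names here are my own):

```python
import numpy as np

def slice_at_markers(audio, markers):
    """Cut audio into slices at the given sample positions."""
    bounds = [0] + sorted(markers) + [len(audio)]
    return [audio[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

def rearrange(slices, order):
    """Build a new phrase by concatenating slices in a chosen order
    (indices may repeat, so a word or syllable can be re-used)."""
    return np.concatenate([slices[i] for i in order])
```

Two markers give three slices, and from there any ordering (including repeats) becomes a new part‑rhythmic, part‑melodic hook of the kind described above.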
Of course, there's a lot more to making a remix than I've covered here: as I said at the outset, there's plenty of scope for creativity using more traditional mixing processes and effects, and other articles in this magazine will help you to learn about them. I hope, in the space of these three articles, that I've been able to give you a bit of an insight into some of the more specific techniques and situations that you'll face as a remixer rather than a 'conventional' producer. I'll leave you with a reminder of some of the key points to remember from this series.
- Be professional: First, in remixing, perhaps more than in any other area of the music industry, it isn't only you who needs creative satisfaction. You need to be professional and courteous, and remember that, ultimately, you're working with somebody else's songs, and you need to respect that.
- Be yourself: Second, try to establish your own identity, and not simply follow the rollercoaster of whatever's fashionable at the time — because although it can work out, ultimately you'll spend most of the time chasing your tail, and those who set the trends will already have moved on by the time you've spotted the trend.
- Be adventurous: Finally, it's important to remember that everything I've said is merely a guideline. Now, more than ever, people expect you to break rules. You'll inevitably get frustrated at times, simply because not everything's possible. It's your job to prove that plenty is possible. Good luck, and I wish you every success with your remixes!
For budding remixers who would like to be able to tweak chordal parts, rather than only monophonic audio, there's a technological glimmer of hope on the horizon. If you haven't already heard about Melodyne Direct Note Access (DNA), this forthcoming version of Melodyne promises to work on polyphonic files — which is something like the holy grail of audio manipulation.
DNA hasn't yet been released, so debate still rages on-line about whether or not it will do what it promises, but there have been some impressive demonstrations at trade shows and in on-line videos, which suggest the technology works well on at least some material. I'll reserve judgement until it's finally released, but if it delivers the goods, the remix applications will be mind‑blowing: you'll be able to take guitar or keyboard audio parts and, essentially, treat them like MIDI parts of your own, retuning each note to fit whatever chords you want in your remix.
Sometimes remixers are sent a stereo stem of all of the backing vocals and harmonies mixed together, or just an 'acapella' with everything combined. At present this is very limiting, and record labels don't always have access to the original parts; even when they do, they may not be able to get them in time for the remixer to meet the deadline. Again, DNA promises a remedy to this problem, but until it or similar technology becomes available, this is one thing we can't do anything about: we just have to work with what we've got, and be as creative as we can within those limits.
Simon Langford is a professional songwriter, producer and remixer who, as part of Soul Seekerz, has worked for some of the biggest names in pop music, including Robbie Williams, Rihanna, Sugababes, The Ting Tings and many more.