Bass is the linchpin for so much of today's music, anchoring the other sounds and providing a foundation for the mix as a whole. But at the same time, it can seem maddeningly difficult to pin down and control. Help is at hand...
Bass instruments — whether acoustic, electric or electronic — are crucial to the majority of modern music. We all know we need to make the bass end work and, for most contemporary styles at least, that we need to make the bass part 'pump' and work with the drums to establish a compelling groove. To the beginner, though, getting things right on the bass end of the mix can seem a mysterious art, not to mention hugely frustrating. We can point an uncertain finger of blame: it's 'too muddy', 'too deep', 'too boxy', or it's 'ill-defined', 'too quiet' or 'doesn't punch through the mix' (and many more expletive-laden phrases besides). This article explores the theory behind some common problems, and suggests tips and techniques to overcome them.
'Bass' has several meanings — it is the name of an instrument (or, rather, several instruments), and of a drum. More generally, it refers to a portion of the frequency spectrum. So it's worth saying up front that where I'm discussing instruments, I'll name them to avoid confusion ('kick' for the drum, 'bass guitar' for, erm, the bass guitar, for example). When I'm talking about bass as a frequency range, I'm referring to the range from roughly 60Hz to 250Hz. Frequencies below that I'll call 'sub-bass', and higher than that, well, I'll make sure you know what I'm talking about...
As creative types, we like to think of ourselves as artists but, as with all things audio-related, a little bit of the science can help us. Sound is a mechanical vibration that produces waves, which travel (mostly) through air molecules. We detect these vibrations through our hearing apparatus (mainly our ears and brain). So there are three areas to consider here: how we generate vibrations that make good bass sounds; what happens to low-frequency sound waves before they reach us; and how we detect and interpret the vibrations.
We create bass sounds using acoustic instruments (for example, kick drum, piano, pipe organ, or double bass), electric instruments (bass guitar, electric double bass) that are then amplified, or electronic instruments (hardware synths and virtual instruments) that are then amplified. Drums, pipe organs, tubas and esoterica aside, most acoustic and electric bass instruments use strings. Even synthesizers owe a great debt to stringed instruments, as we'll see. So it is helpful for us to understand a little bit about how strings make sound.
Imagine an acoustic bass instrument, such as a double bass. The string is fixed at both its ends to the instrument, so when the string is plucked (or picked, bowed, pinched or slapped, as you prefer) it vibrates, and the body of the instrument resonates and amplifies the sound. The string can only vibrate in modes whose wavelengths divide evenly into its length, so in addition to its lowest (fundamental) note, it emits higher frequencies (upper partials) that are whole-number multiples of the fundamental — in other words, simple harmonics. So if you take a low 'A' note (110Hz), the simple harmonics are multiples of 110Hz, and an EQ boost at 220Hz (second harmonic), 330Hz (third harmonic), 440Hz (fourth harmonic), and so on should pick out more of the simple harmonics, which we will perceive as reinforcing the sound of the fundamental (we'll come on to perception later). There are other factors that affect the instrument's timbre — not least the materials and construction of the body, which affect the way the instrument resonates and the amount of sustain, for example — but the basic theory is borne out in practice, which means that you need to pay attention to the harmonic content of the bass in your mix as well as to the fundamental frequencies.
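The arithmetic here is trivial, but worth making concrete. This short Python sketch (my illustration, not taken from any particular product) lists the frequencies at which an EQ boost would pick out a simple harmonic:

```python
# Harmonic series of a low 'A' string (fundamental assumed at 110 Hz).
# Each vibrating mode sits at an integer multiple of the fundamental,
# so an EQ boost at any of these frequencies emphasises one harmonic.

def harmonic_series(fundamental_hz, count):
    """Return the first `count` harmonics, starting at the fundamental."""
    return [fundamental_hz * n for n in range(1, count + 1)]

for n, freq in enumerate(harmonic_series(110.0, 5), start=1):
    print(f"harmonic {n}: {freq:.0f} Hz")
```

Running it prints 110, 220, 330, 440 and 550Hz, matching the EQ-boost candidates given above.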
Synthesizers draw on the same theory. The classic dance bass synth is the Roland TB303. It started life as an auto-accompaniment bass machine for guitarists, and evolved into one of the most influential bass synths in modern dance music. The synth itself was very simple — a single-oscillator affair offering only sawtooth or square waveforms. Those of you who noticed the lack of sine or triangle wave in the list can give yourselves a pat on the back: in fact, most bass synths start out with a harmonically rich source, such as the sawtooth or square wave in the TB303. The sawtooth wave contains all the integer harmonics (both odd and even), whereas the square wave contains only the odd integer harmonics. Conversely, triangle waves make a poor starting point for subtractive synthesis, as they are much less harmonically rich, and sine waves have no harmonics at all, so if you filter them, all you achieve is a reduction in level. To hear such sounds in a mix you'll need to raise the level, which means that you'll use up valuable mix headroom.
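The Fourier series behind these claims is easy to tabulate. This sketch (an illustration of the ideal, mathematically perfect waveforms) returns the relative level of each harmonic, falling off at 1/n:

```python
def harmonic_amplitudes(wave, count):
    """Relative harmonic levels for ideal waveforms (from their
    Fourier series). Sawtooth: every integer harmonic at 1/n.
    Square: odd harmonics only, also at 1/n. Sine: fundamental only."""
    if wave == "saw":
        return [1.0 / n for n in range(1, count + 1)]
    if wave == "square":
        return [1.0 / n if n % 2 else 0.0 for n in range(1, count + 1)]
    if wave == "sine":
        return [1.0] + [0.0] * (count - 1)  # nothing for a filter to remove
    raise ValueError(wave)

print(harmonic_amplitudes("square", 5))
```

The zeros in the square wave's even slots are exactly the 'missing' even harmonics, and the empty sine list beyond the fundamental is why filtering a sine only reduces its level.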
From this starting point, we can quickly develop a killer bass patch. Low-pass filters can be used to remove any unwanted higher harmonic information, while high-pass filters can be used to remove any excessive low frequencies that would otherwise eat up headroom unnecessarily. The sound can be made more interesting by using an envelope to automatically bring the low-pass filter down. This can be subtle, or can be dramatic, as in the classic 'acid' sound of the TB303. You can make things still more complex and interesting by layering two sounds an octave apart. As the higher octave is a harmonic of the lower one, you'll have plenty of harmonic activity relating to the lower note. Using the envelope to bring the low-pass filter down more quickly on the higher note than the lower will produce a more convincingly 'real' sound, loosely resembling the dying away of the higher-frequency harmonics that occurs in stringed instruments.
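As a rough illustration of the envelope-to-filter idea, here is a minimal Python sketch: a naive sawtooth oscillator run through a one-pole low-pass whose cutoff decays exponentially, loosely mimicking the 'acid' sweep. The sample rate and all parameter values are my own assumptions, not settings from any real synth:

```python
import math

SR = 44100  # sample rate (assumed)

def saw(freq, n_samples, sr=SR):
    """Naive sawtooth oscillator, output in the range -1..1."""
    return [2.0 * ((i * freq / sr) % 1.0) - 1.0 for i in range(n_samples)]

def filter_env_sweep(signal, start_hz, end_hz, decay_s, sr=SR):
    """One-pole low-pass whose cutoff decays exponentially from
    start_hz towards end_hz: a crude stand-in for an envelope
    modulating the filter cutoff downwards over the note."""
    out, y = [], 0.0
    for i, x in enumerate(signal):
        cutoff = end_hz + (start_hz - end_hz) * math.exp(-i / (decay_s * sr))
        a = 1.0 - math.exp(-2.0 * math.pi * cutoff / sr)  # smoothing coefficient
        y += a * (x - y)
        out.append(y)
    return out

# Half a second of low 'A' an octave down (55 Hz), swept from bright to dull:
note = filter_env_sweep(saw(55.0, SR // 2), start_hz=4000.0, end_hz=120.0, decay_s=0.1)
```

For the layered-octaves trick, you would render a second note at 110Hz with a shorter `decay_s` and sum the two lists.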
Harmonic enhancers can be more effective than EQ in increasing the clarity and perceived level of your bass in the mix. There are some bass-specific harmonic synthesizers (which can be considered types of harmonic enhancer), such as Waves' Maxx Bass and Renaissance Bass, or Crysonic's newB, that have been designed for bass work, and produce great results. They generate new harmonic content and allow you to reduce or remove the original fundamental frequency content of the signal, which makes them particularly effective at adapting material with deep bass for use on limited-range systems such as televisions, and cheap consumer systems — such as those people use to listen to your Myspace tunes — which tend to roll off somewhere around 80Hz. They work because the brain tends to 'imagine' the missing fundamental if the upper harmonics are present, so these processors can create the impression of more bass while actually reducing the level of very low bass.
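The 'missing fundamental' effect is easy to demonstrate numerically: if a small speaker delivers only the 220, 330 and 440Hz harmonics of a 110Hz bass note, the only fundamental consistent with all of them is their greatest common divisor. A tiny sketch (with frequencies assumed rounded to whole Hertz):

```python
from math import gcd

def implied_fundamental(harmonics_hz):
    """The fundamental the ear 'imagines': the largest frequency of
    which every audible harmonic is an integer multiple, found here
    via the greatest common divisor of the rounded frequencies."""
    freqs = [round(f) for f in harmonics_hz]
    f0 = freqs[0]
    for f in freqs[1:]:
        f0 = gcd(f0, f)
    return f0

# A 110 Hz bass heard through a TV speaker that rolls off below 80 Hz:
print(implied_fundamental([220, 330, 440]))  # -> 110
```

This is the principle the bass-specific enhancers exploit: keep (or create) the upper harmonics, and the brain supplies the 110Hz that the speaker cannot.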
If you don't have a bass-specific enhancer, you can achieve a similar effect by sending the signal to an aux channel with a more conventional enhancer inserted on it. You then place a filter over the source channel to remove the unwanted low frequencies. However, on bass guitars, simulated amp distortion produces a more natural-sounding result.
It is worth noting that this technique does not give the same result as a full-range system — we cannot create the chest-slapping feel of powerful sub-bass in this way, for example — but it is an effective and worthwhile deception nonetheless.
Science can also help us understand why monitoring problems are particularly acute with bass frequencies. Poor low-frequency acoustics in the home studio are one of the key causes of poor bass sound in mixes, because what sounds great in an ill-treated home studio isn't an accurate representation of the sound that is actually being generated. It is not uncommon in such a studio to find dips of 35dB at more than one frequency in the bass end — and no matter how good the equipment you are using, you'll not get anything to sound good if this is the case. Similar considerations apply to the recording environment: you need to be very careful about what the musician hears and about microphone placement (if the mic is slap in the middle of a dip at 80Hz then you're in trouble!).
A common misconception is that using more dense materials in walls, ceilings and floors will help. The key source of problems is standing waves, which result from uncontrolled reflections. The more dense the material, the less sound passes through and the more is reflected, and although bass waves will travel more effectively through such material than higher-frequency waves, what doesn't pass through will still be reflected.
It is also important to consider your monitoring system. A full-range 2.1 system can sound great, and generate deep bass frequencies, but you need to bear in mind the audience for your music. Much modern music is consumed via laptops, iPods, TV and radio, where you won't be able to hear all that bass, and a more restricted system might be more appropriate for testing. Conversely, if you are targeting your music at dance clubs, with powerful full-range systems, then your mixes might sound wimpy if you don't test them on a bigger system, in a bigger space. Using a club system should be ideal, but even then, you need to bear in mind that it will sound different when the club is empty.
Hugh Robjohns' article on subwoofers explores these issues in more detail, so I won't dwell on them here. However, I can't stress enough the importance of good acoustic treatment, and more specifically of good bass trapping in your studio. This is one of the areas that still sets 'pro' studios apart from the rest, and if you don't get this right, then your recording and your mixing will suffer.
If things weren't complicated enough, we then have to throw into the equation the fact that we hear bass differently from other frequencies: welcome to the world of psychoacoustics.
Assuming that we have a well-treated room, we can see using a spectrum analyser how much headroom the bass frequencies are taking up. While this gives an accurate picture of what is actually happening, it does not reflect what we perceive to be happening — and there can be a world of difference. The brain is a complex beast, and when it comes to hearing, it is pre-programmed to 'translate' sounds in a certain way.
First, our own 'frequency response' is not flat. We perceive low frequencies and high frequencies as being quieter than the mid-range. This already complex frequency response also varies according to loudness (the actual response to different frequencies at different SPLs is illustrated in the well-known Fletcher-Munson Curves, or Equal Loudness Contours). Because the mid-range seems louder, we perceive more detail there than we do in anything occupying the bottom end. If you're having difficulty following this concept, imagine a bass guitar part doubling a guitar part: if both are played at the same level but the bass plays an octave or two below the guitar, the guitar will seem to be louder.
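If you want numbers for this, the standard A-weighting curve is a convenient, if rough, stand-in for the ear's uneven response at moderate levels. This sketch implements the published IEC 61672 formula; bear in mind that A-weighting only approximates the equal-loudness contours, and only at one loudness:

```python
import math

def a_weight_db(f):
    """IEC 61672 A-weighting in dB: roughly how much quieter the ear
    finds a tone at frequency f compared with the same SPL at 1 kHz."""
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # normalised to ~0 dB at 1 kHz

# A 60 Hz bass note versus a 1 kHz tone at the same SPL:
print(round(a_weight_db(1000), 1), round(a_weight_db(60), 1))
```

The 60Hz figure comes out well over 20dB down, which is the bass-guitar-versus-guitar effect described above in numerical form.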
There is a similar situation with our perception of pitch. As the sound pressure level increases, we perceive a slight drop in pitch that varies with frequency: the higher the level, and the lower the frequency of a sound, the greater the perceived drop in pitch. So, even setting aside health issues, monitoring too loud can distort our perception of what's in tune and what isn't. In a large room, a monitoring level of around 85dB SPL is accepted as one that will allow you to translate your mixes acceptably to a good range of systems — but this will still be too loud for a typical small home studio, where something closer to 79dB SPL is likely to be better.
There are other interesting psychoacoustic phenomena that can help us with our bass end. The perception of loudness takes into account both the amplitude and duration of a sound, which is why short percussive sounds don't sound as loud as sustained sounds of the same peak level. In other words, you may be able to make a bass sound appear louder by lengthening the notes, though there quickly comes a point where this offers no further gain.
We've all experienced optical illusions, and our hearing can be similarly tricked. Yamaha's research in the '80s revealed we get our strongest impression of a sound's timbre from its attack portion. This is important for our bass for two reasons. First, by layering more percussive sounds on top of the deeper bass, we can create the impression of a sharper attack (the classic example would be a nice, slapping kick drum, layered with a deeper, longer bass sound). Secondly, when applying compression, it is crucial that the attack portion of the bass or kick is not squashed beyond recognition.
One of the most interesting tricks we can use is to generate harmonic content but remove the fundamental note. If we can hear all the harmonics present, then we perceive the fundamental to be there — no matter whether it is actually there or not. This principle is put excellently into practice by processors such as Waves' Maxx Bass and Renaissance Bass plug-ins (see box), and is particularly effective when translating full-range mixes to speaker or headphone systems such as you might have with your iPod, television or computer's built-in speakers.
Our locational perception is also different for bass. We can detect the location of the source of mid- and high-frequency sounds through the difference in intensity between the sound at each ear; the head casts an acoustic 'shadow' over the ear farthest from the sound. For bass frequencies, by contrast, we sense location from the slight time difference in the wave hitting each ear. Hugh's subwoofer article is worth a read if you want to know more about this.
Finally, I should mention the masking effect. Where two similar sounds occupy the same frequency range, the louder one will tend to mask the quieter one. This is one reason why things can get so 'muddy' — particularly with heavy guitar tracks, where the low end of the guitars often competes with the upper frequencies of the bass and kick. This is another reason why it is important to choose kick and bass sounds that complement, rather than compete with each other.
If you want to get your bass to really pump with the kick drum, side-chaining can help you. Side-chain compression was covered in more depth in Paul White's article in SOS November 2006, but you can also use gates to good effect. Cubase users have long griped about the sequencer's lack of in-built side-chaining support, but there is a workaround to that, so I'll use Cubase here as an example.
First, create a stereo Group channel (or, if you prefer, a stereo FX channel) and insert a noise gate — the one that comes with Cubase 4 will work just fine, or you can use the Dynamics plug-in on earlier versions. Now, you need to route the bass and kick signals to different sides of the Group: select the Group as a send on the bass and kick channels. You need to use the routing view of the send so that you can pan the send of the kick extreme left, and the bass extreme right. Now, either turn down the level of the bass send so it is barely audible, or mute the send, as we need to focus on the kick first of all. Set the gate's attack and release times to very low values (so it responds faster to the input signal) and lower the threshold until the kick is triggering the gate. If you turn the bass send back on, you'll notice the bass is pulsing in the right speaker in time with the kick in the left. You then need to get rid of the kick and pan the bass centrally. To do this, insert an imaging plug-in — the screenshots show how to do this with MDA's freeware Image (www.mda-vst.com). Mode, S-Pan and Output sliders are set far right, so that you're only listening to the bass, now in the centre. The gate is still being triggered by the kick signal. All that remains is to fine-tune the release time of the gate and balance the signal with the original bass using the main faders. If your sequencer has dedicated side-chain capabilities, you can achieve this without the need for the imaging plug-in.
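For readers who would rather see the signal flow than the routing screenshots, here is a bare-bones sketch of the same idea in Python: an envelope follower on the kick (the 'key') opens and closes a gate on the bass. The threshold and time constants are arbitrary illustrative values, not settings from Cubase's gate:

```python
import math

SR = 44100  # sample rate (assumed)

def sidechain_gate(bass, key, threshold=0.2, attack_ms=1.0, release_ms=60.0, sr=SR):
    """Open the bass only while the key (kick) signal is above the
    threshold. This is the same trick as the Group-channel routing
    described above, written out sample by sample."""
    atk = math.exp(-1.0 / (attack_ms * 0.001 * sr))
    rel = math.exp(-1.0 / (release_ms * 0.001 * sr))
    env, gain, out = 0.0, 0.0, []
    for b, k in zip(bass, key):
        env = max(abs(k), env * rel)          # peak follower on the key
        target = 1.0 if env > threshold else 0.0
        coeff = atk if target > gain else rel  # fast open, smooth close
        gain = target + (gain - target) * coeff
        out.append(b * gain)
    return out
```

The short attack opens the gate almost instantly on each kick hit, while the longer release shapes how the bass note tails off between hits, which is the same fine-tuning step as in the Cubase recipe.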
OK, enough science, let's move on to matters practical. Given that the harmonic content of bass and the attack portion of the envelope are so critical to our perception of the sound, it is important to make sure we capture these along with the deep bass — we can always remove what we don't need using filtering at the mix stage. It should go without saying that you'll get the best results from a good player, playing a good instrument, in a good room! For bass guitar, it's also worth making sure that the strings are reasonably new: the older and more grease-laden they are, the duller they'll sound. Some people deliberately use old strings for this reason, but most prefer the brighter sound of new strings. And always (I mean always!) make sure the bassist has tuned the bass before you start recording.
The DI signal from a bass guitar can sound a bit brittle, and on its own can lack 'oomph', but if nothing else, it offers you an insurance policy, as you can re-amp the signal, or run it through an amp modelling processor later on if needs be. Some DI boxes, such as the Sansamp Bass Driver, include amp and/or speaker simulators, which warm the sound and roll off the top end a bit. They do make the sound nicer, but while I'd happily use these for live work, personally I'd rather keep the DI clean as it affords greater flexibility at the mix stage. The DI signal can also be particularly useful when taken alongside a miked amp signal, as you can be sure you've caught the very deep end of the bass that some amps will not give you.
Bass amp and speaker cabinet modelling has come on dramatically in recent years. The popular Line 6 Bass Pod was one of the first convincing units, enabling you to simply 'dial up' a classic tone, but modelling is no longer the preserve of hardware. Native Instruments' Guitar Rig 2 is as comfortable working with bass guitars as any other sort (even if the options are a little more limited), but more recently, IK Multimedia's Ampeg SVX (reviewed in SOS November 2006) has really upped the stakes. In fact, the results for some styles are arguably as good as a top-notch recording (and much more convenient). Such modelling processors are, of course, great on bass guitar, but it is also worth thinking about experimenting with them on software synths. The classic sound of many hardware bass synths results in part from analogue distortion of a similar nature to that produced by an amp, and speaker modelling will roll off some of the higher harmonics that can clash with other sounds in the mix.
Sometimes, though, you just can't beat the sound of real moving air. If you have a good bassist who knows how to get a good sound out of their instrument and their amp, then it's worth having a go at recording it in the good old-fashioned way: with microphones.
Some mics are intended specifically for kick drums and bass instruments. These include the AKG D12 and D112, the Audix D6 and Shure's Beta 52A. As with other good studio workhorse dynamic mics such as the Shure SM57, the Electro-Voice RE20 and Sennheiser MD421, these are designed to withstand high sound pressure levels, and from this point of view they are ideal for bass applications. They also have a frequency response tailored for typical bass sounds, with good low-frequency response, a slightly scooped mid and a peak somewhere around the 3-4kHz area, designed to bring out more of the attack. Such mics can make an excellent choice, but, given the far from flat frequency response, they impose quite a strong character of their own, so although they'll work well for some sounds, they're unlikely to be the best choice on every occasion.
It is worth considering a more neutral mic, with a reasonably flat frequency response, as this will better capture the sound in the recording room. Condenser mics are the best choice here. Of course, you don't want to be putting your most sensitive mic right up near the grille (or inside a kick drum for that matter). However, there are many good FET condensers that will do a good job, such as the Neumann U87 and U47, or the more modestly priced TLM103, for example. The AKG C414 is another popular choice and there are many others from different manufacturers, so it is worth trying different mics out if you get the opportunity.
Mic polar patterns are an important consideration too (see Paul White's article in SOS March 2007). With a cardioid mic, for example, you can use the proximity effect to add more warmth to the sound. If you need to achieve separation from other instruments, a figure-of-eight mic can be an excellent choice under some circumstances, while an omni will give you more natural-sounding results, which can be particularly nice on acoustic instruments. On a bass cabinet, mic positioning is also important: as with guitar amps, positioning the mic away from the centre gives a warmer tone than one pointing at the centre.
Some engineers swear that they get the best sound by combining the signals of different microphones and DIs. For example, you could try a combination of DI, a 'kick' mic close to your bass cabinet, and a good condenser a little further away. Or you could try two close mics, one pointing at the centre of your bass cab, the other towards the edge, so you capture more of the sound of the amp. Another trick is to use something like an SM58 a couple of feet from the cab in conjunction with a closer dynamic mic. Compressing the SM58 signal and balancing that sound with the signal from the kick mic can help to give things some edge. If using multiple mics, it is worth taking the time to get the phase relationships sorted while recording.
Getting a good recording is one thing, but making it work with the rest of the mix is a rather different pot of poissons. Some producers start a mix with the 'feature' instrument (such as lead vocals), but it can make sense to start by getting a good balance and groove going between the kick, bass and snare, as this provides a solid foundation for the rest of the track. Be aware, though, that your perception of what works will be very different when parts are soloed than in the full mix, and you'll almost certainly need to revisit things later.
The average listener will focus mostly on musical performance, so if the timing and tuning are all over the place, it's all for nothing. If you've programmed things in, that's fine — you'll have had plenty of control. But if the parts were played in, then there's probably some tidying up to do. It's a little more complicated than simply quantising everything. The key is to get things to work together, and it's no good having your bass notes working to a metronome if the drummer drifted away from it. Though there are some automated ways to do this sort of thing, I don't find they save time, as you need to go through to check the results, and I still find that the best way is to go in and adjust the offending notes manually. In some cases, time-stretching notes (or replacing them with the same note from another take) so that the note length better fits the groove can work well too.
There are few 'rules' in music production, but panning bass isn't far off. It is usually a good idea to pan the bass and kick to the centre. Partly this is historical (the limitations of vinyl) but, more importantly, it shares the bass energy equally between the two stereo speakers. It is also important because the listener will not always be in the sweet spot, and given that the bass is so critical to the mix, you want them to hear it wherever they are in relation to the speakers (this applies to dance music as much as any other — you want all the clubbers to feel the same bass groove).
Bass is usually more heavily compressed or limited than other sounds. This irons out peaks, and helps the groove to feel solid, and to underpin the rest of the mix. The attack and release settings in particular are critical. Too short an attack, and you'll squash the important attack phase of the note. Too long a release time and you'll ruin the groove. If you let the attack phase of the note through, then it's also a good idea to place a limiter after the compressor, in order to catch any wild peaks, and leave you more room for make-up gain so you can increase the level without peaking.
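To see why the attack setting matters, consider this minimal feed-forward compressor sketch (the threshold, ratio and time constants are arbitrary illustrative values). With a 15ms attack, the first samples of a loud note pass through at full level before the gain reduction settles:

```python
import math

SR = 44100  # sample rate (assumed)

def compress(signal, threshold=0.3, ratio=4.0, attack_ms=15.0, release_ms=120.0, sr=SR):
    """Simple feed-forward compressor. The attack sets how quickly the
    envelope rises towards a loud input (letting the percussive front
    of a bass note through); the release sets how fast it lets go."""
    atk = math.exp(-1.0 / (attack_ms * 0.001 * sr))
    rel = math.exp(-1.0 / (release_ms * 0.001 * sr))
    env, out = 0.0, []
    for x in signal:
        level = abs(x)
        coeff = atk if level > env else rel
        env = level + (env - level) * coeff     # smoothed level detector
        if env > threshold:
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

Shorten `attack_ms` towards zero and the very first sample is already squashed, which is exactly the 'attack phase beyond recognition' problem; stretch `release_ms` too far and the gain never recovers between notes, flattening the groove.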
A common trick to increase the impact of the bass is to send the kick and bass to the same compressor and bring the compressed signal back in quite low, just to glue things together.
Compression will tend to emphasise the predominant tone of whatever is being compressed, so it makes sense to place an EQ before the compressor, to shape the sound that you want to emphasise. You can always place another one after the compressor too, so that you can sculpt things to better fit the mix.
As most consumer systems start to roll off around 80Hz, you can tailor your sound to them by placing a sharp high-pass filter at about 50Hz and applying a gentle boost around 80Hz. This won't be good for club systems, of course!
It is worth listening to the kick and bass parts at the same time when you are EQ'ing, as you need them to work together. If you find that they are competing, you can EQ them around each other. A slight, narrow peak in one and a corresponding dip in the other can help to achieve this.
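One way to carve that mirrored peak and dip is a pair of peaking EQs set to the same frequency with opposite gains. This sketch uses the widely published 'Audio EQ Cookbook' peaking biquad; the 100Hz centre and 3dB values are just examples, not recommended settings:

```python
import math

SR = 44100  # sample rate (assumed)

def peaking_eq(signal, freq, gain_db, q=2.0, sr=SR):
    """Peaking biquad from the RBJ 'Audio EQ Cookbook'. Positive
    gain_db boosts a narrow band; negative gain_db cuts the same band."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * freq / sr
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1.0 + alpha * a_lin, -2.0 * math.cos(w0), 1.0 - alpha * a_lin
    a0, a1, a2 = 1.0 + alpha / a_lin, -2.0 * math.cos(w0), 1.0 - alpha / a_lin
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

# Complementary EQ so the two stop fighting over the same band:
# kick_eq = peaking_eq(kick, 100.0, +3.0)   # slight peak in the kick
# bass_eq = peaking_eq(bass, 100.0, -3.0)   # matching dip in the bass
```

The same function with a low centre frequency and negative gain also serves for the gentle carving described earlier in the article.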
If your bass sounded fine on its own, but lost clarity and energy when you finished laying down your umpteenth vocal or guitar overdub, then it is worth looking at the other sounds. High-pass filtering your guitars, vocals and other instruments in the low-mid range can help you get back the space, and avoid a nightmare muddy quagmire. You might be surprised just how high you can set a high-pass filter on guitars (particularly acoustics) and get away with it. The results may sound horrid in isolation, but as long as they work with the rest of the mix, it's not a problem. The fewer instruments that are competing in the same frequency range with the bass, the clearer and tighter your bass will sound. The same applies to panning: given that you probably already have bass, kick, snare, hi-hat and vocal somewhere down the middle, trying to pan other things out a little can help to leave space for the upper reaches of your bass instruments.
If you still find that your bass isn't cutting through, it may be down to a lack of harmonics, or the slappier attack part of the sound, as discussed earlier. If EQ isn't bringing things out, an enhancer may be the perfect tool for the job (see box), while you can emphasise the attack using a hardware processor such as SPL's Transient Designer, or a software equivalent such as Waves' Trans X, Digital Fishphones' Dominion, or the Envelope Shaper that is bundled with Cubase 4. Alternatively, you could try adding a little distortion. A common trick is to use distortion as a send effect, mixing the distorted sound back in at a low level. Tube amps (or their software equivalents) are perfect for this sort of thing.
Effects can, of course, add interest to your bass part. But they can also be effective in making it more audible in your mix. The best sort of effects, other than the usual distortion or fuzz, are modulation effects that make the sound sweep. You don't have to go crazy, but some subtle flanging, phasing or wah can work wonders.
Probably the trickiest effects for bass are delay and reverb. You want to hear each bass note individually so, given the masking effect, it is usually not a good idea to use delays or long reverbs that merge into the main sound. Where I've used delays, it's been to create more of an arpeggiator effect, adding whole notes in between spaced ones, rather than low, continuing repeats. However, a very short slapback can help to locate things. For reverb, try to keep things short. You might also find that a little pre-delay can help to separate out the reverb from the source signal, which can improve clarity.
Synths can easily produce very low pitches, whereas the bass guitar can only go so low. You can of course choose a bass that has a good low sound (Musicman basses, for example, are noted for this), and play differently to get the most from the lower notes (for example, playing further from the bridge), but sometimes it just won't seem low enough, particularly if it is competing for your audience's attention with deep synth basses on other tracks. So how do you get the same depth and power from a bass guitar without sacrificing too much of the tone?
Well, just as you can generate higher-frequency harmonics, so you can generate lower-frequency information that's related to the source material. One tool you can use is the octave divider, which creates a signal an octave (or more) below the source, calculated, as the name implies, by dividing the frequency in two. While it can be an interesting effect, the result is a little crude and quite distinctive. A better result can be obtained using a sub-synth. These devices are, in effect, gates that trigger a low-frequency synth — you set the threshold and select the trigger frequency range and the synth auto-accompanies the source. It won't be the same tone as the original bass part, but at this level, it really doesn't matter, and the tone of the original bass part remains pretty much intact. There are a few such plug-ins available: one comes bundled with Logic Pro, for example, and there's also a freeware one for Mac and PC from MDA. My current favourite is Lowender, by ReFuse, which runs on the Pluggo platform (unfortunately, it is Mac-only, but a PC version is in the pipeline).
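The octave-divider principle is surprisingly simple to sketch: toggle a square wave on each upward zero-crossing of the input, and the result flips once per input cycle, i.e. at half the input frequency. This toy version ignores the level-tracking and filtering that a real octave divider or sub-synth would add:

```python
def octave_down(signal):
    """Crude octave divider: toggle a square wave at every upward
    zero-crossing of the input. The output then changes state once
    per input cycle, giving a square wave one octave below."""
    out, state, prev = [], 1.0, 0.0
    for x in signal:
        if prev <= 0.0 < x:   # upward zero-crossing detected
            state = -state
        prev = x
        out.append(state)
    return out
```

The hard-edged square output is why the raw effect sounds 'crude and distinctive'; low-pass filtering it, or using it only to trigger a smoother synthesized tone, is essentially what the sub-synth plug-ins mentioned above do.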
I hope this article will dispel some myths and help you sort out any problems you have. The bottom line is that you need to think as hard about your bass as you do about the rest of your mix.