The right EQ settings can make a mix — and the wrong ones can make a mess. Our in‑depth guide will help you get more from this all‑important tool.
Record sound into a DAW, and it’s represented visually as a waveform. Zoom in, and you’ll notice repetition. Peaks and troughs of similar size and shape follow each other at regular intervals. Their spacing reflects the fundamental pitch or frequency of the note. Their height is known as the amplitude of the waveform, and the shape of each peak and trough reflects the timbre of the source.
The simplest waveform is a sine wave, which looks like a series of half‑ellipses. One of the basic facts that makes digital audio processing possible is that all other repeating waveforms can be described as, or reduced to, combinations of sine waves at different frequencies and amplitudes. In a pitched sound, the sine wave with the lowest frequency is known as the fundamental; the others are mathematically related to this and are known as the harmonics or overtones.
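For the curious, this idea is easy to demonstrate in a few lines of code. The sketch below (Python with NumPy; the sample rate and 100Hz fundamental are arbitrary choices for the example, not anything you need to copy) builds a square‑ish wave by summing a sine wave with its odd harmonics at diminishing amplitudes:

```python
import numpy as np

SAMPLE_RATE = 48000   # samples per second (an arbitrary choice)
FUNDAMENTAL = 100.0   # Hz

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of time values

def additive_square(n_harmonics):
    """Approximate a square wave by summing the fundamental and odd
    harmonics at amplitudes of 1/k: the Fourier series of a square wave."""
    wave = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, 5, ...
        wave += np.sin(2 * np.pi * k * FUNDAMENTAL * t) / k
    return wave

pure_sine = additive_square(1)    # just the fundamental
square_ish = additive_square(20)  # fundamental plus 19 odd harmonics

# Whatever we add, the waveform still repeats at the fundamental period:
period = int(SAMPLE_RATE / FUNDAMENTAL)  # 480 samples
assert np.allclose(square_ish[:period], square_ish[period:2 * period])
```

However many harmonics we pile on, the result still repeats at the fundamental period — exactly the regularity you see when zooming in on a waveform display.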
A recording of a typical piece of music will contain multiple fundamental notes and harmonics, all playing at once. The bass instruments might sound fundamentals as low as 30 or 40 cycles per second (Hertz), whilst the upper harmonics of the high‑pitched instruments will be well above 10,000Hz (10 kilohertz). Inasmuch as it consists of sustained, pitched material, what we hear as a single piece of music can always be broken down into lots and lots of different sine waves at different amplitudes and frequencies. That goes for full mixes just as much as it does for individual instruments.
When a recording fails to be true to the source, one common reason is that the balance between all of these different sine waves has been altered. A good example of this is the proximity effect. When placed close to the instrument they are recording, directional mics exaggerate the amplitude of the lowest sine waves. This can manifest itself as a muddy or boomy sound. There are many other ways in which the balance of frequencies within a recording can become skewed, either because of circumstances or the limitations of equipment. The idea of correcting — or ‘equalising’ — this imbalance through electronic means has been around almost as long as electrical recording itself. The term has remained with us, usually abbreviated to EQ, but the process itself has become a routine part of recording and mixing.
Nowadays, we don’t only use EQ to match the sound of a recording to that of the original source, but for many other reasons. Equalisers are everywhere: in the input channels of our mixing consoles, in dedicated outboard units and, above all, in software plug‑ins. They are applied to individual mics and instruments, to group busses, to the master bus and even to the monitor path to correct the failings of our loudspeakers and headphones. Equalisation is a transformative rather than an additive process, and as such, an EQ is almost always used as an insert, not on an auxiliary send.
Another way of representing sound visually is the spectrum display: a two‑dimensional graph with amplitude on the vertical axis and frequency along the horizontal. (Strictly, a spectrogram plots the spectrum against time; audio software often uses the terms loosely, and we’ll do the same here.) Unlike a waveform display, a spectrum doesn’t have a time axis. It shows a single balance of frequencies, averaged over a time period. However, if we keep this time period short, we can continually update the display so that it follows the evolving content of the audio. This sort of animated spectrogram is now the standard form of visual feedback in many software equaliser plug‑ins, and is extremely helpful for understanding the way in which EQ acts on the sound.
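Under the hood, this kind of display is built on the Fourier transform. As a rough sketch (Python with NumPy; the tone frequencies and levels are invented for the example), here is how a single ‘frame’ of a spectrum analyser might be computed:

```python
import numpy as np

SAMPLE_RATE = 8000
N = SAMPLE_RATE  # analyse one second, so each FFT bin is exactly 1Hz wide

t = np.arange(N) / SAMPLE_RATE
# A quiet 220Hz tone plus a louder 1kHz tone:
signal = 0.3 * np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 1000 * t)

# One 'frame' of the analyser: magnitude of the FFT, normalised so that
# a full-scale sine reads as 1.0 in its bin.
spectrum = np.abs(np.fft.rfft(signal)) / (N / 2)

loudest_bins = sorted(int(i) for i in np.argsort(spectrum)[-2:])
print(loudest_bins)                     # [220, 1000] — the two tone frequencies
print(round(float(spectrum[220]), 2))   # 0.3
print(round(float(spectrum[1000]), 2))  # 0.8
```

A real plug‑in would analyse short overlapping windows and redraw the result many times a second, but the principle is the same: the two tones show up as peaks at exactly their frequencies and amplitudes.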
We can display the action of the equaliser as a line running horizontally across the spectrogram. With the EQ doing nothing, this line will be flat, indicating that it’s not altering the amplitude of any frequencies within the overall sound. Where the line is not flat, the equaliser raises the amplitude of frequencies in the areas where the line sits above that flat position, and reduces them where it dips below. On some software equalisers, the line itself is a control: the response of the equaliser can be set simply by clicking and dragging to change its shape.
Try this, though, and you’ll notice that there are constraints on the types of shape that can be created. The line cannot contain sharp corners, or vertical sections. The shapes you can create are invariably curved and, unless they go right off the sides or bottom of the graph, they’re usually symmetrical. When you click and drag in this style of EQ to change the shape of the curve, you are adjusting what in a more traditional EQ would be called a band. Modern software EQs can have an arbitrarily large number of bands, but in an analogue device, each band requires its own dedicated circuit. Analogue equalisers thus have a fixed number of EQ bands; and often, each band is of a fixed type. The three main types of EQ band used in mixing are filters, shelving bands and bell or parametric bands.
The simplest of these is the filter. Create a high‑ or low‑pass filter band on an EQ with the sort of graphical display described above, and you’ll see that one end of the EQ line now dives towards, or off, the bottom of the graph. A high‑pass filter (also known as a low‑cut filter) progressively reduces the amplitude of frequencies below a fixed point. A low‑pass filter (high‑cut) does the same with frequencies above a fixed point. The fixed point in each case is known as the cutoff or turnover frequency.
The two key parameters that can be modified in a filter are the cutoff frequency and the slope. Referring back again to an EQ with a spectrogram display, the latter is a very literally named parameter! The higher the slope setting, the more abruptly frequencies beyond the cutoff are reduced. The value of the slope is usually expressed in decibels per octave (dB/oct) and for reasons I won’t go into here, the options available are usually multiples of 6dB/oct. The steepest slope available in analogue designs is usually 24dB/oct, though steeper filters can be constructed.
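If you like, you can turn a slope figure into an expected attenuation with simple arithmetic: in the idealised case, a filter reduces a frequency by its slope in dB for every octave that frequency lies beyond the cutoff. A quick sketch in Python (real filters roll off gradually around the cutoff rather than snapping from flat to sloped, so treat these numbers as approximations):

```python
import math

def rolloff_db(freq_hz, cutoff_hz, slope_db_per_oct):
    """Idealised attenuation of a high-pass filter below its cutoff:
    the slope in dB/octave times the number of octaves below."""
    if freq_hz >= cutoff_hz:
        return 0.0  # in the passband, idealised as perfectly flat
    return slope_db_per_oct * math.log2(cutoff_hz / freq_hz)

# An 80Hz high-pass with a 24dB/oct slope, one octave down at 40Hz:
print(rolloff_db(40, 80, 24))  # 24.0
# The same filter two octaves down at 20Hz:
print(rolloff_db(20, 80, 24))  # 48.0
# A gentle 6dB/oct filter barely touches 40Hz by comparison:
print(rolloff_db(40, 80, 6))   # 6.0
```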
Most mixers and EQs have high‑pass filters on every channel. Quite often, these are entirely preset affairs, with the only control available being an on/off switch. It’s cheaper for a manufacturer to build fixed settings into the design than to make them variable. However, this doesn’t mean all fixed filters are the same, so it’s worth consulting the manual for your mixer to find out what exactly your particular filters do.
Despite its simple nature, the high‑pass filter is arguably the only truly indispensable type of equaliser, and certainly the most important for corrective equalisation. Broadly speaking, it can be used for two functions. One is to remove unwanted low‑frequency noise whilst leaving the wanted signal intact. The other is to correct the balance of a recording where the low end is exaggerated. A typical example of the first situation might be to deal with a recording of an acoustic guitar where the player is tapping his or her foot. The fundamental frequency of the lowest note played on the acoustic guitar might be 80Hz or so, whereas most of the sound picked up from the foot‑taps will be at lower frequencies. In theory, by setting a high‑pass filter with a cutoff frequency at 80Hz and a steep (18 or 24 dB/octave) slope, we can reduce the unwanted sound without affecting the guitar sound at all.
By contrast, if the guitar has been recorded using a directional mic placed too close to the body, we may have a different corrective problem on our hands. Thanks to the proximity effect, that 80Hz fundamental and those of other low notes will be exaggerated, making the overall sound boomy and muddy. In this case, we want to configure our high‑pass filter with a much more gentle slope and have it turn over at a higher frequency — a 6dB/octave slope from 200Hz might not be a bad starting point.
Low‑pass filters are less commonly used in mixing, but they do have their place. For example, most guitar amp cabinets don’t produce a great deal of sound above 6kHz or so. So if a cabinet is miked up next to a drum kit, most of what is captured above 6kHz will be spill from the cymbals and snare; if this sounds bad, it can be removed using a low‑pass filter without affecting the guitar sound. There are also percussion instruments such as tambourines and shakers that generally belong in supporting roles in a mix, but are possessed of ferocious amounts of high‑frequency jangle. A low‑pass filter can help them to be audible in the mix without dominating it.
Most equalisers have just a single high‑pass filter per channel. And likewise, most have a single low and high band of shelving equalisation per channel. Often this is provided in addition to the filters, sometimes as a switchable alternative. Create a shelving band on an EQ with a spectrogram and you’ll immediately see where it gets its name: it offsets the EQ line by a fixed amount above or below its turnover frequency. Usually, you’ll be able to choose this frequency, and you’ll also be able to specify a level of gain in decibels. This sets the extent to which the shelf is offset from the flat part of the curve, with 0dB being off or inactive.
Sometimes, you’ll have the option to determine how abruptly the EQ line moves from its flat position to the shelving level. This setting is sometimes called the ‘slope’ but, borrowing a term from the next type of EQ we’ll encounter, can also be known as the bandwidth or Q. Where this isn’t adjustable, it’s often set at a gentle 6dB/octave.
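To make the shelving idea concrete, here’s a minimal sketch in Python with NumPy. It isn’t how any particular commercial EQ is implemented — it simply splits the signal with a one‑pole low‑pass at the turnover frequency, scales the low half and recombines, which yields a gentle first‑order low shelf:

```python
import numpy as np

SAMPLE_RATE = 48000

def low_shelf(x, turnover_hz, gain):
    """A first-order low shelf: split the signal with a one-pole low-pass
    at the turnover frequency, scale the low half, then recombine.
    Content well below the turnover is scaled by `gain`; content well
    above passes through essentially unchanged."""
    a = 2 * np.pi * turnover_hz / SAMPLE_RATE
    a = a / (1 + a)  # one-pole low-pass coefficient
    low = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state += a * (x[n] - state)
        low[n] = state
    return gain * low + (x - low)

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
bass = np.sin(2 * np.pi * 50 * t)      # well below a 200Hz turnover
treble = np.sin(2 * np.pi * 2000 * t)  # well above it

boosted_bass = low_shelf(bass, 200, 2.0)  # a gain factor of 2, i.e. +6dB
boosted_treble = low_shelf(treble, 200, 2.0)

# Peak levels after letting the filter settle for 0.1s:
print(round(float(np.abs(boosted_bass[4800:]).max()), 2))    # close to 2
print(round(float(np.abs(boosted_treble[4800:]).max()), 2))  # close to 1
```

The 50Hz tone comes out nearly doubled in level, whilst the 2kHz tone passes almost unchanged — the ‘offset above the turnover, flat elsewhere’ shape described above.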
Shelving bands have both corrective and creative uses. A low‑frequency shelving cut can be another good way to tackle excessive boominess caused by proximity effect, whilst a low shelving boost can do the opposite, beefing up the low end of an anaemic recording. And although you wouldn’t often use two low‑frequency shelves on the same source, there are situations where combining a low shelf and a high‑pass filter can be effective. For example, a recording of a bass instrument might have excessive sub‑bass, yet be inaudible on small speakers because it lacks weight in the low midrange. We could address this by applying both a low shelving boost from, say, 300Hz down, and a high‑pass filter turning over at 60 or 80 Hz. This would reduce the amplitude of the low‑frequency fundamentals and raise that of the harmonics, changing the sound of the instrument and helping to create a mix that ‘translates’ across every type of listening system.
Shelving equalisation is particularly good for subtly altering the overall tone of a source, or even of an entire mix. With a gentle bandwidth setting, the shelving bands on a good EQ will sound extremely transparent. The most common use for a shelving boost is to subtly add air and presence at the top end. In fact, it’s probably fair to say that the bright sound of modern pop and rock mixes is hard to achieve without using shelving boosts over many of the individual sources, or the mix as a whole.
It’s worth experimenting with a very wide range of turnover frequencies when you do this. You might find that all you want is the hint of sheen added by a boost from 10 or 15 kHz upwards — but often it can be beneficial to use a shelf that extends right down into the midrange, perhaps even as far as 1.5 or 1 kHz. One of the most common faults in mixes by inexperienced engineers is a failure to fill out this midrange area enough. This part of the frequency spectrum really is the ‘engine room’ of any mix. It’s the area where key signals such as vocals mostly lie, and it’s the one part of the mix that is guaranteed to be heard on any playback system, no matter how bad. It is therefore vital to make the best possible use of it, and even if you don’t retain it in the final mix, experimenting with a broad shelving boost on the master bus can be a great way to find out which sources can be pushed harder, and how far.
The third common type of EQ band is the bell or parametric band. Unlike filters or shelves, parametric bands aren’t intrinsically ‘low’ or ‘high’, and the frequency setting doesn’t mark the outer limit of the band’s action but its centre. Assuming the design of the EQ allows it, they can be positioned anywhere within the frequency spectrum. And, as is also obvious from any EQ graph, the name ‘bell’ refers not to the sound but to the shape of the curve that is created. The action of a parametric band is greatest at the centre position determined by the frequency control. Either side of this, it curves back to the flat, forming a symmetrical hump or dip.
The bandwidth or Q control encountered with many shelving EQs really comes into its own with parametric bands. As the name suggests, this governs the width of the hump or dip. Where Q (short for ‘quality factor’) is given a numeric value, it can seem a bit counter‑intuitive, because a low Q value indicates a wide bandwidth, and a high value a narrow one.
Bandwidth is expressed in octaves, and this is often the most helpful way to think about it in a musical context. It’s important to remember that the relationship between frequency and musical pitch is logarithmic rather than linear. An interval of an octave represents a doubling of frequency. An EQ band that spans 100Hz to 200Hz thus has a bandwidth of one octave; but so does a band that spans 400Hz to 800Hz, or 3kHz to 6kHz. The same goes for the relationship between the centre frequency of the band and those of its outer limits. EQ bands are symmetrical on the same logarithmic scale, so for example, a two‑octave parametric boost centred at 100Hz will have its lower and upper limits at 50Hz and 200Hz respectively.
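These relationships are simple enough to capture in a couple of one‑line formulas. The sketch below (Python; the figures deliberately match the examples above) converts an octave bandwidth into band edges, and band edges into a Q value:

```python
import math

def band_edges_from_octaves(centre_hz, bandwidth_octaves):
    """Lower and upper limits of an EQ band that is symmetrical on a
    logarithmic (octave) scale around its centre frequency."""
    half = 2 ** (bandwidth_octaves / 2)
    return centre_hz / half, centre_hz * half

def q_from_edges(lower_hz, upper_hz):
    """Q: centre frequency (the geometric mean of the edges, because the
    symmetry is logarithmic) divided by the bandwidth in Hz."""
    centre = math.sqrt(lower_hz * upper_hz)
    return centre / (upper_hz - lower_hz)

# The two-octave boost centred at 100Hz described above:
print(band_edges_from_octaves(100, 2))  # (50.0, 200.0)
# Wide bands have low Q values...
print(round(q_from_edges(50, 200), 2))  # 0.67
# ...and a one-octave band has the classic Q of about 1.41:
print(round(q_from_edges(*band_edges_from_octaves(100, 1)), 2))  # 1.41
```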
Incidentally, not all equalisers that offer bell bands offer control over the bandwidth. Typically, these semi‑parametric designs have what’s called a ‘proportional Q’ response. What this means is that the bandwidth narrows as the extent of cut or boost increases. This might sound like a limitation, but in practice it is usually sympathetic to our needs, and makes the gain control more useful over a wider range.
The best way to understand how the frequency, gain and bandwidth controls interact is to experiment on a modern software plug‑in with a helpful graphical interface. It will quickly become apparent that even a single fully parametric EQ band is a hugely powerful and versatile tool, and that an EQ with multiple bands is capable of drastically reshaping any source. So how can we turn all this power to our advantage when mixing?
Once again, parametric EQ has applications that could be described as corrective or creative. One almost universal piece of guidance to bear in mind is that high Q (narrow bandwidth) settings fall into the former category, and that they are much more useful for cutting than boosting. Some of the most common sources of unwanted noise in recordings actually manifest themselves as sine waves, or at least as fairly simple continuous waveforms with relatively few harmonics. These can often be effectively eliminated by using narrow‑band parametric EQ cuts. Examples include feedback in live recordings, 50 or 60 Hz mains hum, the high‑pitched whine of old cathode‑ray TVs and monitors, and so on.
Problems associated with the instruments themselves sometimes show up in similar fashion. It’s quite common for a poorly tuned or damped snare drum to have a noticeable and unpleasant ‘ring’. Watch the spectrogram and you’ll see narrow peaks in two or three very specific frequency areas. With a parametric EQ on the snare track or the drum bus, we can apply narrow, sharp cuts centred on the offending frequencies and reduce the ringing without otherwise changing the sound of the drum. A badly set up bass guitar or a cheap amplifier can give rise to similar problems, whereby one note seems to jump out compared with the others. Recordings of the human voice, too, can exaggerate unwanted honks and resonances in the midrange, and sympathetic narrow‑band EQ cuts can often minimise these without hugely affecting the overall vocal tone.
Where a problem really is confined to a single frequency, it may even be possible to use what’s called a notch EQ. This can be thought of as an approximation to a parametric band that has a bandwidth of zero and an infinite negative gain, and is useful in dealing with problems like mains hum. A 50Hz mains hum will typically manifest itself as sine waves at 50Hz, 100Hz, 150Hz and 200Hz; notching out each of these separately will have minimal impact on the overall sound, whereas using a high‑pass filter or a single parametric band to address all of these frequencies will lose all the wanted bass content in the signal.
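Here’s a sketch of the idea in Python (using NumPy and the widely known ‘cookbook’ biquad notch formula published by Robert Bristow‑Johnson; the hum level and bass note are invented for the example):

```python
import numpy as np

SAMPLE_RATE = 8000

def notch(x, freq_hz, q):
    """Biquad notch filter using Robert Bristow-Johnson's well-known
    'cookbook' coefficients: roughly unity gain everywhere except a
    deep, narrow null at freq_hz."""
    w = 2 * np.pi * freq_hz / SAMPLE_RATE
    alpha = np.sin(w) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w), 1 - alpha])
    b, a = b / a[0], a / a[0]
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
hum = 0.5 * np.sin(2 * np.pi * 50 * t)    # mains hum fundamental
bass = 0.5 * np.sin(2 * np.pi * 110 * t)  # a wanted bass note (A2)
mix = hum + bass

cleaned = mix
for f in (50, 100, 150, 200):  # the hum fundamental and its harmonics
    cleaned = notch(cleaned, f, q=30)

def spectrum(x):
    """1Hz-resolution magnitude spectrum of the signal's second half."""
    return np.abs(np.fft.rfft(x[SAMPLE_RATE:])) / (SAMPLE_RATE / 2)

before, after = spectrum(mix), spectrum(cleaned)
print(before[50] / after[50] > 100)    # True: hum down by over 40dB
print(after[110] / before[110] > 0.9)  # True: the bass note survives
```

The 110Hz bass note sails through all four notches essentially untouched, whilst the hum fundamental all but disappears — exactly the surgical behaviour that makes notching so useful.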
With musical signals, a narrow bandwidth means the EQ affects some notes more than others. For example, if you tightly focus an EQ cut at 220Hz, you’ll reduce the fundamental frequency of the A below middle C. The C itself will be less affected, whilst the A an octave down will keep its fundamental untouched but have its first harmonic, at 220Hz, reduced in level. This can be useful if, for example, an instrument has a pronounced resonance on one or two notes, but it also brings obvious potential for things to go wrong. If you often find yourself making narrowish cuts at the same frequency, on multiple instruments, it might be worth checking whether the fault lies in your monitoring rather than the recordings themselves!
Broader parametric EQ cuts have numerous uses. For example, guitar amps can often sound a bit shrill or harsh in the upper midrange, and shaving a few dB off somewhere in the 2‑4 kHz area can ameliorate this. And although ribbon mics can make fine drum overheads and room mics, in my experience they are frequently a bit too present in the 1.5kHz region and benefit from a slight cut thereabouts. Close mics on bass drums and toms quite often sound ‘cardboard‑y’ on their own, and you can sometimes help this by scooping out the lower midrange with a cut somewhere around the 500‑700 Hz area. Acoustic guitars recorded with pickups rather than mics tend to sound very unnatural, and here too the midrange can be overpowering.
A handy but not infallible trick for identifying the best frequency region to cut is to temporarily set the EQ band to a fairly hefty boost instead. You can then sweep the frequency control up and down to find the point where the problem you’re hearing gets worse. Be warned, though, that absolutely any frequency region tends to sound bad when you boost it indiscriminately, and some self‑discipline is required to avoid creating more and more bands to deal with things that aren’t actually a problem.
The distinction between ‘corrective’ and ‘creative’ equalisation is a very fuzzy one, not that that’s something to worry about. Making cuts in the lower midrange across multiple sources is often helpful when it comes to getting them all to work together at the mix; you’re not so much improving the sound of individual instruments as crafting a compromise arrangement that benefits them all. Likewise, cutting some upper midrange from an electric guitar sound might actually make it less impressive in isolation, but from the point of view of making the mix work, perhaps that is what is necessary to help it sit behind the vocal rather than fighting for the same space.
The idea that elements within the mix should not overlap too much in terms of occupying the same part of the frequency spectrum has been the focus of much mix advice over the last few years, and there are even automated AI‑based plug‑ins that will analyse individual instruments in your mix and tell you where overlaps are occurring. When these overlaps are problematic, the result is a phenomenon known as ‘masking’, whereby elements obscure one another. If this happens, it can be helpful to apply EQ cuts to different areas in these sources to reduce the degree of overlap. In the example above, we might cut 2kHz from the electric guitar and 400Hz from the vocal to help them fit together better.
However, it’s important not to be led into the unconscious inference that frequency overlaps are always bad. Eliminating them can help us achieve clarity at the mix, but clarity itself is only a means to an end. The pursuit of clarity for its own sake often leads to sterile, thin‑sounding mixes. It also closes off creative possibilities, such as layering multiple instruments with the specific goal of having them be perceived as a single sound. Phil Spector’s ‘wall of sound’ was anything but clear, but it was mighty effective!
Novice mix engineers are sometimes advised never to use EQ to boost, only to cut. Personally, I think this advice is outdated. It was perhaps a good rule of thumb when people were mixing on cheap analogue consoles that lacked internal headroom, because EQ boosts on those mixers often sounded strained or introduced distortion. But a modern DAW has basically unlimited headroom, and plug‑in EQ can be as clean as you like even when applying large EQ boosts.
However, one reason for being careful about EQ boosts is still as valid as it ever was. By definition, an EQ boost makes the source louder, and often brighter too — and that in and of itself tends to make things sound more impressive. So when you do use EQ to boost, it’s a good idea to use either the channel fader or the output gain control on the EQ plug‑in to try to match the apparent level to how it was before. That way, you can make an informed judgement as to whether the EQ really is doing something useful, or whether you should just have pushed the fader up in the first place.
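Level matching can be as simple as comparing RMS levels before and after the EQ. In the toy Python sketch below, a sine wave and a flat gain stand in for real programme material and a real EQ curve, so treat RMS as only a rough proxy for apparent loudness:

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal, in decibels relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def matching_trim_db(before, after):
    """Output trim (in dB) that returns the processed signal to the same
    RMS level as the original, so we compare tone rather than loudness."""
    return rms_db(before) - rms_db(after)

t = np.arange(48000) / 48000
original = 0.5 * np.sin(2 * np.pi * 440 * t)
boosted = 2.0 * original  # stand-in for the level gain from a broad EQ boost

print(round(float(matching_trim_db(original, boosted)), 1))  # -6.0
```

Pulling the output trim down by the suggested amount lets you audition the tonal change on its own, without the flattering extra level.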
We’ve already looked at the possible benefits of boosting with a high shelving EQ across the master bus or many of the sources feeding it. One or two parametric bands can often be put to good use in the same role, the key being to keep the Q value and gain both relatively low. In fact, I’ll often combine a shelving and a bell boost on the master bus. For example, a shelf could be used to lift everything from 2kHz up by a couple of dB, whilst a parametric band adds an extra focus on the 5kHz or 10kHz region. But in this context above all it’s vital to be cautious about perceiving louder and brighter as better. It’s also an extremely good idea to A/B your mix with known reference material at the same level.
On individual sources, one way to think of EQ boosts is as a more focused alternative to pushing up the fader when you want something to be more audible. Let’s say, for example, that your vocal is being overshadowed by guitars, keyboards or other instruments. Pushing up the fader makes every frequency in the vocal louder; this might achieve what you want, but in a busy mix, you might achieve just as much cut‑through with a gentle boost at 1.5 or 2 kHz, without unnecessarily emphasising aspects of the vocal that are already prominent enough. Likewise, if you want a ‘subby’ kick drum sound to be audible on small speakers, simply raising the level can be problematic, because you’ll be bringing up the low end to the point at which it’s overwhelming on large speakers. Better to use EQ to boost 100 or 150 Hz. The same applies to other bass instruments. For example, a gentle boost somewhere in the 400 to 800 Hz region can help a double bass to sound more articulate and present in the mix, without also bringing up the woofy low end that will throw things out of balance.
Whether you are boosting or cutting, you’ll sometimes end up with EQ settings that look wrong, or which are the exact opposite of the settings you’ve used previously on the same instrument, or which violate some Internet ‘rule’. No matter. If it sounds right, it is right. As with so many other aspects of mixing in the digital age, though, the easiest way to mess up with an equaliser is to overcomplicate things. You should never be afraid to use EQ if it helps you to achieve the results you want — but it’s seldom a good idea to use EQ when you don’t know what results you want! EQ is an invaluable tool, but you may well find that the better you get at mixing, the less EQ you actually do...
Some mixing rules are honoured more in the breach than in the observance, and none more so than the oft‑quoted dictum that you should never adjust an EQ while listening to something in solo. I don’t know a single engineer who would not cheerfully admit to doing this all the time! Even so, it expresses a valid point, which is that the aim of EQ’ing things within a mix is not to make each individual source sound as good as possible. Rather, the goal is to make the mix as a whole sound as good as possible — and reaching that goal often means equalising individual sources differently from how you would if there was nothing else playing at the same time. In particular, a satisfying overall mix balance is sometimes achieved only by cutting quite a bit of low midrange from individual sources, which consequently sound rather thin when you hear them in solo.
One of the most common questions asked by novice mix engineers concerns the order in which processing should be applied. Should you place your EQ plug‑in before your compressor in the signal chain? Or after?
The answer is: It depends.
The usual reason why this dilemma arises is that people haven’t paused to consider two other questions: why does this source actually require EQ or compression at all? And what am I hoping to achieve by using it? Having clear answers here will often suggest the most appropriate order, if indeed both are really needed.
For example, let’s suppose we decide that a vocal needs some dynamic control from a fast‑acting compressor, as well as a 100Hz high‑pass filter to cut out plosives and other low‑frequency noise, and a 1.5kHz EQ boost to help the upper midrange cut through the mix. If we place this EQ before the compressor, it will also shape the side‑chain signal that triggers gain reduction. Compared with placing it after the compressor, we’re thus more likely to get gain reduction triggered by peaks in the 1.5kHz range, whereas plosive pops are less likely to trigger compression.
Since the aim of the compression is to achieve a subjectively more consistent vocal level, my instinct in this case would be to put the EQ before the compressor. The point of the high‑pass filter is to remove the plosive pops altogether: if we allow them to trigger compression first, we will in effect hear the compressor acting on sounds that aren’t there!
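The effect of the ordering can be demonstrated with a deliberately crude simulation. In the Python sketch below, everything is a stand‑in: a 1kHz sine for the vocal, a short 40Hz burst for the plosive, a cascade of one‑pole filters for the high‑pass, and a bare peak‑envelope detector for the compressor’s side chain:

```python
import numpy as np

SAMPLE_RATE = 8000

def high_pass(x, cutoff_hz, stages=2):
    """Cascaded one-pole high-pass filters (6dB/oct per stage)."""
    rc = 1 / (2 * np.pi * cutoff_hz)
    a = rc / (rc + 1 / SAMPLE_RATE)
    for _ in range(stages):
        y = np.zeros_like(x)
        for n in range(1, len(x)):
            y[n] = a * (y[n - 1] + x[n] - x[n - 1])
        x = y
    return x

def overshoot_db(x, threshold=0.5, decay=0.999):
    """How hard a crude peak compressor would be driven by this signal:
    follow the peak envelope and log the worst overshoot above threshold."""
    env, worst = 0.0, 1.0
    for sample in x:
        env = max(abs(float(sample)), env * decay)
        worst = max(worst, env / threshold)
    return 20 * np.log10(worst)

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
vocal = 0.3 * np.sin(2 * np.pi * 1000 * t)  # stand-in for the singing
plosive = np.where(t < 0.05, np.sin(2 * np.pi * 40 * t), 0.0)  # a low thump
signal = vocal + plosive

# Compressor first: the plosive blows straight through the threshold...
print(round(float(overshoot_db(signal)), 1) > 5)                  # True
# ...filter first: the plosive is gone before the detector sees it.
print(round(float(overshoot_db(high_pass(signal, 100))), 1) < 2)  # True
```

With the compressor first, the plosive pushes the detector well past the threshold and the vocal would be audibly ducked; with the filter first, the detector barely stirs.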
I started this article with the great insight of the French mathematician Joseph Fourier: that any continuous repeating waveform can be broken down into a number of sine waves at different frequencies and amplitudes. Equalisation is a process that changes the relative amplitudes of these sine waves — but that’s not all it does. Most EQ designs, including all analogue EQs, also alter the phase relationship between them. If, for instance, we boost at 10kHz, we aren’t just making sine waves in that region louder compared with those at, say, 1kHz. We’re also delaying them slightly.
In many circumstances this is a non‑problem. Our ears are not very sensitive to phase shift, and EQ is not the only thing in the signal chain that can cause it. In fact, directional microphones distort the phase of the sound they’re capturing, so we often aren’t starting with a faithful representation of the original phase relationship anyway. However, there are occasions where it can be an issue. The amount of phase shift generated is related to the steepness of the EQ slope, so we are most likely to hear it when we apply steep filters to the entire mix. This isn’t something you’d do for its own sake, but it’s necessary for certain applications. For example, most studio monitors use at least two separate drivers to handle different frequency ranges, and this means using steep filters to ensure that only the low frequencies reach the woofer, while the tweeter receives only the high frequencies. This is one reason why sealed‑box, single‑driver speakers such as the Auratone are so valuable: although they can’t reproduce the entire frequency spectrum, they also don’t introduce crossover‑related anomalies in the midrange.
In mixing, phase shift is often apparent with multiband dynamics plug‑ins. These use steep filters to divide the frequency spectrum up into two or more bands, which can then be compressed independently. Simply inserting one of these plug‑ins on the master bus can cause the sound to change noticeably even before you do any compression, as a result of phase shift caused by the filters. Phase shift caused by EQ can also be apparent when the same source is captured on multiple microphones: your bass drum might no longer cohere quite so well with the overheads once you start using EQ, even if the EQ improves the sound of the close mic in isolation.
It’s also worth remembering that Fourier’s insight strictly applies only to continuous, unchanging waveforms. In practice, any sound that is sustained enough to have a timbre or tonality at all can probably be considered a reasonable approximation to a continuous waveform, but many real‑world sounds also contain transient elements. This goes for almost all drum and percussion sounds, as well as the initial note onset in many plucked or hammered instruments. These, by their nature, are momentary rather than continuous, and don’t last long enough to have a tonal character as such. However, phase shift caused by EQ can undermine their impact and rob them of punch.
In a digital system such as a DAW plug‑in, it’s possible to implement what is called a linear phase EQ. As the name suggests, this makes equalisation possible without phase shift. However, there’s a downside, and again, it’s one that primarily affects transients. Instead of ‘smearing’ caused by phase shift, linear phase EQ causes a different phenomenon known as pre‑ringing, producing artefacts that are heard before the transient itself.
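Pre‑ringing is easy to show with a toy example. The sketch below applies a zero‑phase ‘brick wall’ low‑pass to an idealised click by zeroing FFT bins directly — not how a well‑designed linear‑phase EQ is actually built, but it exhibits the same tell‑tale behaviour:

```python
import numpy as np

N = 1024
transient = np.zeros(N)
transient[512] = 1.0  # an idealised click or drum hit

# A zero-phase 'brick wall' low-pass: zero every FFT bin above the
# cutoff, leave all phases untouched, then transform back.
spectrum = np.fft.rfft(transient)
spectrum[128:] = 0.0
filtered = np.fft.irfft(spectrum)

# The original is perfectly silent before the hit; the filtered
# version is not — it rings *before* the transient arrives:
print(float(np.abs(transient[:512]).max()))           # 0.0
print(float(np.abs(filtered[500:512]).max()) > 0.01)  # True
```

The filtered click’s energy is spread symmetrically either side of the moment of impact, and it’s the half that arrives early which can audibly soften a drum transient.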
Among the many advantages that modern digital mixing environments have over the old‑school recording studio are the availability of unlimited tracks, and the fact that almost everything can be automated. Both of these are relevant to the practical use of EQ, especially in a corrective context. Very often, problems that require correction using EQ aren’t entirely consistent throughout a recording. For example, an over‑excited singer may move around too much in front of the mic: when they get too close, plosive pops and proximity boost are apparent, but when they go off‑axis to the side, the sound will become darker at the top end. The sort of EQ cuts that are needed to deal with the former problems when they occur will actually be detrimental to the sound elsewhere in the track.
There are two main ways of dealing with this. One is to automate the gain (and perhaps other parameters) on some EQ bands, so that they act only when needed. The other is to split different sections of the part to different tracks within your DAW, and use different plug‑in EQ settings on each. Which you use is a matter of personal taste as much as anything; personally, I tend to the latter approach unless there’s a need to do something very complicated such as have the frequency of an EQ band track notes within the part.