Recording a library of acoustic guitar samples highlighted the importance of repeatability, blind testing, small lasers and double-sided sticky tape.
My colleague Daniel Scholz and I started developing sample libraries about eight years ago. In the beginning, Daniel wanted to have a drum sample library that could be triggered using MIDI and deliver each microphone on an individual track to allow maximum flexibility when mixing, and this work ultimately led to the release of our Drumasonic drum sample library series (www.drumasonic.com). Since then, our philosophy has been to create sample libraries which are as true to the acoustic instrument as they can be, and which use as little post-processing as possible to achieve very ‘honest’ recordings.
Looking for a marketing and sales partner, we approached Native Instruments, but as they already had their own drum sample library series, we switched to guitar sample libraries instead. Looking back on three collaborations (Strummed Acoustic, Strummed Acoustic 2 and Electric Sunburst), it can be said that joining forces probably raised the quality of the end result, bringing together NI’s expertise in creating user-friendly products and GUIs and our dedication to detail regarding sound quality and the authenticity of the emulation. Another key factor was the very committed and persistent guitarists who lent patterns, in-depth knowledge of their instruments and their musical gut feeling in extensive listening tests.
There are really no shortcuts to signal quality, so for every project we combine an artistic approach with science: we record as many options as possible and carry out extensive double-blind test sessions to determine all the factors that contribute to the best sound possible. This includes the selection of instrument, room, cables, preamps, converters and mics, and microphone placement. Now that we’ve specialised in guitars, we also compare different sets of strings and plectrums, and even factors such as temperature and humidity. The goal is to have a setup that is completely controlled and can be fully recalled whenever we record new content, in order to achieve the best consistency possible. Under those conditions, it is remarkable how obviously even the slightest change anywhere in the system affects the sound, be it for better or worse.
This meticulous approach leads us to interesting discoveries which wouldn’t occur during a conventional recording session. A band are rarely patient enough to wait for a guitarist to do test recordings in 15 different positions in the room, or to try seven instrument cables through 15 different DI boxes and/or preamps! Sometimes we’d find out things by accident; for example, who would have thought that many instruments have an individual sweet spot regarding the exact temperature and humidity? There always seems to be a particular combination of temperature, humidity, mic position, set of strings, plectrum and so on where everything falls into place and the signal is just so good that there’s simply no point in trying to optimise anything any more. When we try different EQ and compression settings and realise in the end that the pure signal is still better than any modification thereof, we know we’re there.
When we’re evaluating a setup, there are four factors we consider. The first of these is ‘spectral evenness’: the instrument should sound consistent over its complete playable pitch and dynamic range. The second is solid low end, which relates to the first factor. I’d like to quote the great sound engineer Michael Stavrou, who writes in his book Mixing With Your Mind: “If the bass is right, the rest will fall in place.” When playing a chromatic scale in the lowest octave of the instrument, the bass of each individual note should be equally thick and solid. If some notes are really boomy and others sound thinner or brighter, that’s a bad sign. It means that phase cancellation is affecting the root frequency of that note to a certain degree. Other overtones can still happen to be in phase, which leads to an uneven tone that always seems to have audible resonances in the overtone structure while being less punchy.
The solid low end of each instrument in a mix contributes greatly to the groove of a production, as only a note with a solid, in-phase low end seems to be able to ‘push’ the listener and make them want to move to the music. A solid low end is like pushing someone while a bad low end is more like slapping someone with a flat hand: it tends to hurt while not transferring the same physical energy as a solid kick in the butt!
The third factor is solid and well-balanced transients. The attacks of each individual note are probably the most important bits of a recording. If the transients are too strong in relation to the decay, the instrument will need some kind of compression and, depending on the frequency spectrum of the sound, corrective equalisation, to sound acceptable. If the transients are too weak, it is harder to locate the instrument in a dense mix. In either case it will be hard to find the right volume for the instrument in a mix, as a part of the instrument will always seem to be either too loud or not loud enough. If you put the mic directly in front of the soundhole, many guitars deliver transients in the low end that are too strong, while pointing the mic exactly at the place where the plectrum hits the strings will deliver transients with too much high-frequency content, especially with newer strings.
As a last resort, these things can be fixed to some degree with multiband compression, but nobody wants that! We want a sound that is so well-balanced that we don’t feel the desire to fix it with effects processors. So, while moving the mic around we’d look for positions with a solid, consistent low end rather than an aggressive, ‘slap-in-your-face’ sound. The loudest spot is not necessarily the best, but a solid sustain for each note in the lowest register is a very good sign.
The final factor is the room sound. When you’re recording acoustic guitars for a specific production, it can be appropriate to capture quite a lot of room reflections, but for sample libraries it makes sense to record a rather dry signal, as this provides the most flexibility for users to add reverb afterwards. There is a significant psychoacoustic effect at work here, which I call the ‘room within a room’ effect. Just as we register the size and shape of a room when looking at it with our eyes, our ears also provide us with information about a room’s size, what materials it’s made of and what objects are in it. If you stand in the middle of an empty gymnasium and clap your hands once, you’ll hear first the direct sound of the clap, followed by a reflection from the floor. Your brain doesn’t really care about this floor reflection, as it is so used to you standing on a floor. Then there’s a moment of silence while the sound travels to the walls, hits them, gets reflected and travels back to you, before you hear the first reflections from the walls. From then on, the sound gets reflected back and forth between the different walls, floor and ceiling, until it dies away. The most interesting bit for your brain is the moment when the first reflections come back from the wall. The time between your initial clap and the first reflections reveals to your brain how big the room is, while the tone of the reverberation suggests what materials the walls are made of, and so on.
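The timing involved in the gymnasium example can be sketched with a quick back-of-the-envelope calculation. The distances here (a wall 10 metres away for the gymnasium, 1.5 metres for a small room) are illustrative figures of my own, not measurements from the article; the speed of sound of roughly 343 m/s is the standard value at room temperature.

```python
# Round-trip delay of the first wall reflection for a hand clap.
# Assumed figures: wall distances are illustrative; speed of sound
# is ~343 m/s at room temperature.
SPEED_OF_SOUND = 343.0  # metres per second

def first_reflection_delay(distance_to_wall_m: float) -> float:
    """Time in milliseconds between the direct clap and the first
    reflection returning from a wall at the given distance."""
    round_trip_m = 2.0 * distance_to_wall_m
    return round_trip_m / SPEED_OF_SOUND * 1000.0

print(first_reflection_delay(10.0))  # gymnasium wall: ~58 ms
print(first_reflection_delay(1.5))   # small-room wall: ~9 ms
```

The contrast between those two numbers is exactly what the brain uses to judge room size: a 58 ms gap before the first reflections says ‘large hall’, while a 9 ms gap says ‘small room’, whatever reverb you add afterwards.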
If you decide to put artificial reverb on such a recording, you can choose a type of reverb which returns early reflections earlier than the ones in the original recording. Adding enough of such a reverb will eventually lead to the perception of a smaller room. However, if your reverb has a longer pre-delay than the original room, it will still be the early reflections already present in your recording that strike the ear first. This is not necessarily a problem in a big room, but when recording in a small, not so pleasant-sounding, room you’d better be careful to stop those nasty early reflections of your small room making it to the recording medium. Once they’re on there, there’s nothing you can do to make your recording sound bigger than it was.
To create a big sound in a small room, then, it’s necessary to use wide-band absorbers to reduce the amount of early reflections on the recording, as these will give away the size of the room. With a dry recording, you can then use artificial reverb to convincingly recreate a different acoustic space. If the raw recording is too wet, adding further reverb will create the perception of a ‘small room within a large room’, which our brain will most likely consider muddy or not transparent. A drier signal needs less reverb to sound ‘big’ than a signal with early reflections from a small room, because you need to turn up the artificial reverb a lot more to drown the ‘bad’ small-room reflections.
If you don’t have a room that’s properly acoustically treated, a couple of Basotect panels can improve things quite significantly. While diffusers do not absorb the sound, they scatter the sound into so many different directions that the individual early reflections coming back from the diffuser are too weak to be psychoacoustically relevant. So, depending on the needs of the performing musicians and the desired acoustic result, diffusion can also help to avoid the ‘room within a room’ effect without creating an overly dead-sounding recording.
Another benefit of controlled room acoustics is the added flexibility you gain in terms of mic placement. Normally, the further away the microphone is from the instrument, the more early reflections you will hear in relation to the direct sound from the instrument. A drier room sound allows for more distant mic positions without the unwanted effects of overly strong early reflections. A lot of times this is beneficial, as it is normally easier to find a well-balanced sounding spot a couple of inches further away from the guitar rather than in close proximity. Again, this isn’t a hard and fast rule, and as you can see from the pictures, some of the mics we used were placed very close to the instrument.
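The reason distance matters so much here can be made concrete. The direct sound from the guitar falls off with distance (roughly inverse-square for a point source), while the reflected energy in a given room stays approximately constant, so every doubling of mic distance costs about 6 dB of direct-to-room ratio. The distances below are illustrative, not from our sessions, and real instruments are not ideal point sources, so treat this as a rough sketch.

```python
import math

def direct_level_change_db(d1_m: float, d2_m: float) -> float:
    """Change in direct-sound level (dB) when moving a mic from
    distance d1 to d2, assuming simple inverse-square falloff."""
    return 20.0 * math.log10(d1_m / d2_m)

# Moving from 15 cm to 60 cm drops the direct sound by ~12 dB,
# while the room's reflected energy stays roughly constant, so the
# recording gets noticeably 'roomier' at the greater distance.
print(direct_level_change_db(0.15, 0.60))  # ~ -12 dB
```

This is why a drier room buys you mic-placement freedom: with weaker early reflections, you can afford to give up some of that direct-to-room ratio in exchange for a better-balanced spot further from the instrument.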
Each acoustic guitar resonates in its own way, and the complex, three-dimensional movement of the strings and the body of the guitar creates an incredibly complex and unpredictable sound field. That means that each position around the guitar will have something like a unique sonic ‘fingerprint’, consisting of individual volume progressions over time for each frequency band. In some spots, a certain overtone might ring out more audibly than others, in another spot, the low frequencies might be very loud in the beginning of a note but way too soft in the decay of, say, the F and F# of the lowest octave, and so on.
In recording the instrument, we are seeking the exact point in the complex three-dimensional sound field around the instrument at which the membrane of a selected microphone will capture a sound that fulfils all four of the factors described above. This sounds complicated and challenging, because it is! Anyone who claims to ‘know’ the best position for a microphone in front of a guitar, without listening to how a particular spot sounds with that particular instrument in that particular position in that particular room, and so on, is oversimplifying things. Simply putting the mic where it looks right won’t do the trick. That doesn’t mean that a beautiful musical piece cannot be recorded with mics in a less-than-perfect position, but when looking for the ‘best’ spot, there’s no substitute for placing the mic itself at as many different positions as possible and listening through closed headphones on only one ear (see box) to find the best spot.
If we manage to record a sound that meets all of the criteria described above, the result will be that the amount of perceived ‘energy’ coming from an instrument is maximised no matter how loud the performer plays. That means that even a very soft performance will still feel powerful and well defined. In addition, if EQ or compression should be applied for artistic reasons, we will have the feeling that our EQs and compressors are much better than they were before, simply because the solidity of the signal will allow the compressor to react in a much more predictable way, and the spectral balance of the instrument will provide something interesting in every frequency band for the EQ to shine its light on.
There’s also the matter of the stereo field: if we find two great-sounding spots and listen to those two signals hard-panned on speakers, there will be a resulting stereo image. The conclusion we drew from our listening tests is that the sound of the individual mic position is much more important than adhering to conventions about stereo miking, such as sticking to a predefined distance or mutual angle between the mics.
Although the most important thing for our purposes is finding individual positions where a single mic sounds great, the two main factors that affect the stereo image still apply. First is the correlation between the two signals. If both signals are essentially the same, the stereo image will be very narrow. You can try this yourself by placing two microphones next to each other; if you then keep the capsules in the same position but point them in different directions, you will hear that the high frequencies start to widen first, as the low end of many cardioid mics is less directional than the high frequencies. When you move the mics to different spots, the signals will differ more, and the resulting image will be even wider. As a rule of thumb, the greater the distance between the mics, the wider the stereo image. Second, different mic positions can introduce a time delay between the left and the right channel. If you move one microphone further away from the instrument than the other, the sound will take more time to reach the mic that’s further away — and you’ll also have to compensate for the increasing distance by adjusting the gain of your mic preamp. As the two channels are not playing exactly in sync any more, the time delay between the mics will widen the stereo image.
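Both effects, the arrival-time difference and the gain difference, can be estimated from the mic distances alone. The 20 cm and 50 cm figures and the 44.1 kHz sample rate below are illustrative assumptions of mine; the physics (343 m/s speed of sound, inverse-square level falloff) is standard.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature
SAMPLE_RATE = 44100     # Hz, assumed project sample rate

def inter_mic_delay_ms(near_m: float, far_m: float) -> float:
    """Arrival-time difference between two mics at different
    distances from the same source."""
    return (far_m - near_m) / SPEED_OF_SOUND * 1000.0

def gain_compensation_db(near_m: float, far_m: float) -> float:
    """Extra preamp gain needed on the far mic to match the near
    mic's level, assuming inverse-square falloff."""
    return 20.0 * math.log10(far_m / near_m)

# Mics at 20 cm and 50 cm from the guitar:
delay = inter_mic_delay_ms(0.20, 0.50)
print(delay)                                # ~0.87 ms
print(delay / 1000.0 * SAMPLE_RATE)         # ~39 samples
print(gain_compensation_db(0.20, 0.50))     # ~8 dB more gain on the far mic
```

Sub-millisecond delays of this order are too short to be heard as an echo, but they are exactly what widens the stereo image when the two channels are hard-panned.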
While mono summing is a very useful tool to sensitise our ears to the effect of different degrees of signal correlation, every decent playback device nowadays offers stereo playback, so sacrificing sound quality for mono compatibility has become much less important than in the old days. However, stereo is not always better. Depending on the musical context, the transparency of the final mix might benefit from using one mono mic only. As we should have two great-sounding individual signals by now, we always have the option to discard one of them.
As described above, the sound field around the guitar is so complex that the exact ‘best’ mic position needs to be very precise in relation to that of the guitar. It’s easy enough to ensure that the mic doesn’t move relative to the guitar, but how do you prevent the opposite happening? As you can see in the pictures, we built a custom rack in which the guitar is suspended (this is shown in the opening photo in this article). This is done in such a way that the instrument is still comfortable for the guitarist to play, but barely moves, ensuring a sound that is much more consistent than in a regular guitar recording. Note that the guitar is suspended with as little physical contact to the rack as possible, as contact with the vibrating parts of the guitar stops it from resonating freely.
To find and recall the ideal mic positions, we used Photo Booth on my MacBook. We moved the mic around within an imaginary vertical plane above a horizontal line at a fixed distance from the guitar, while the guitarist repeated a simple four-chord pattern. By filming the mic movement with the camera and joining the video with the audio recording in Cubase, we were able to compare many different mic positions directly, and to approximately retrieve any of them by putting the mic back where a straight line from the camera — which was not moved in the meantime — through the mic’s position in the video frame would intersect the imaginary vertical plane. The ability to compare and retrieve mic positions this way turned out to be a major factor in the quality of the final recording.
Even the slightest alterations in mic placement make a difference, especially in the perception of stereo imaging, where deviations of a few millimetres are audible. So it was crucial to fix the final positions as precisely as possible. To do this, I built small cubes made of epoxy resin with three built-in laser pointers (you can get them very cheaply online) pointing along the X, Y and Z axes. The cubes were then glued to hose clamps — a very nerve-racking task as the resin heats up when curing, which, in turn, destroys the heat-sensitive lasers! It took many attempts, and involved cooling the resin cubes with ice cubes, but in the end it worked.
I then attached these resin cubes to the mics, the guitar and the guitar stand, and marked the points where the lasers would hit the floor, walls and ceiling. This way it was possible to document the mic positions exactly, and to perfectly recreate a setup at a later date.
Each step was accompanied by extensive test recordings and blind listening tests to make sure that all decisions were made solely for sonic reasons. Of course, we also compared a lot of different microphones, but in the end, a good mic in a bad position sounds way worse than a bad mic in a good position. Also, surprisingly, the perceived sonic quality was not at all related to the price: there were some expensive mics that sounded great, but there were other, even more expensive mics that were discarded after the first round of our listening tests. Some of our favourites were vintage Gefell UM70/MV692, vintage Strässer/Schoeps CM060, Neumann KM84, Royer Labs R121, the comparably dark-sounding AEA R84 ribbon mic and the rather bright-sounding Avantone CV‑12.
It made sense to complement the character of the instrument by using a microphone of opposing characteristic, so, for example, if we found a spot that sounded great but a little too harsh, we’d try a ‘softer’ microphone such as a ribbon. If the sound was too full, I’d reach for something like the Schoeps CM060 or Avantone CV‑12, which are both a little lighter in the low frequencies. With some older microphones, variation between different examples of the same type can be quite significant, and I usually label my mics with little hand-drawn pictograms of the characteristic frequency response and use these ‘imperfections’ to the advantage of the recording.
I hardly ever used the same mic for the left and the right channel, especially as the possibilities for positioning an instrument in the mix are so much greater when working with two different mics in different spots with different sound characteristics.
Many guitarists will be familiar with the problem of new guitar strings sounding too bright in the studio. Conversely, when they’ve been used for several hours, there will be the inevitable moment where the strings get tired and need to be replaced. We experimented with many different kinds of strings and all kinds of modifications to solve that problem, and we found that small snippets of double-sided tape, carefully attached near the bridge of the guitar, can be used to simulate the ageing of the string. This can also make the high ‘E’ string sound a little more dull, round off its sound and make it ‘fit in’ with the other strings better.
In general, it makes a lot of sense to watch out for components that are too bright when recording: when these are modified before the recording, the overall recording can subsequently be brightened up with EQ without sounding harsh. Other typical examples of such modifications are sticking little bits of tape to the sides of a hi-hat, using softer mallets for vibraphones and shielding trumpets in an orchestra recording session. All these ideas serve to avoid generating high-frequency content that ‘sticks out’ and makes finding a good balance a pain afterwards. We tried many different tapes, glues and modelling materials, and the only one that didn’t cause buzzing was the double-sided carpet tape made by Tesa.
Recording sample libraries is painstaking work, but what I really like about it is that, despite the fact that we’re using state-of-the-art equipment, it becomes obvious that great sound quality isn’t something you can buy. Rather, it’s taught us the value of focusing on the right details, educating ourselves in acoustics and psychoacoustics and using our ears and brains to create inspired and musically superior results. Whether you use expensive or cheap microphones, and whether you use expensive absorbers to control the room acoustics or you hang a bunch of blankets over a couple of chair backs, there are always ways to improve the quality of your recordings that do not require you to spend a lot of money.
It is not uncommon that the two best positions from which to mic an acoustic guitar turn out to be at different distances from the instrument. Differences in distance tend to make the stereo image wider: if we don’t want this to happen, we can delay the signal from the mic that is closer to the instrument to compensate for the time difference. Alternatively, we could increase the perceived width further by delaying the mic that is further away.
There are circumstances where either decision might be appropriate. A wide stereo image makes a lot of sense in situations where there’s a single guitar accompanying another performer, for example a solo vocalist: the vocalist will most likely be in the centre of the stereo image, while the guitar will ‘wrap up’ or ‘surround’ the vocal. If the guitar is the soloist, however, it might make more sense to compensate the delay in order to create a more solid centre image.
To carry out our time adjustments we used the free Voxengo Sound Delay plug-in (www.voxengo.com/product/sounddelay), which provides the option to work in ‘dual mono’ mode, making it possible to delay each side of a stereo recording independently in very fine increments. A/B’ing is a great way to find the best setting: saving several variants and skipping through them allows you to quickly compare different options. One thing to keep in mind is that time differences can greatly affect the way a stereo recording sounds when summed to mono, so listening to the differently delayed versions in mono can also be revealing. To hear the resulting coloration more clearly, it is advisable to play back the mono downmix on a single speaker. Otherwise, you hear two speakers playing back two summed mono signals each, and that introduces phase issues related to the summing of the speakers, which will make it harder to hear the effect of the summing of the two mics. Steinberg’s Mix 6 to 2 plug-in provides all the possibilities you need, but any mono summing tool followed by a balance control, which moves the result to one speaker, delivers the same result.
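The coloration you hear when mono-summing a time-offset pair is comb filtering: complete cancellations occur at odd multiples of half the inverse of the delay. As a rough sketch (assuming two otherwise identical channels, which real two-mic recordings never quite are), the null frequencies can be computed directly:

```python
def comb_null_frequencies(delay_ms: float, max_hz: float = 20000.0):
    """Frequencies (Hz) fully cancelled when two otherwise identical
    channels, offset by delay_ms, are summed to mono. Nulls fall at
    odd multiples of 1 / (2 * delay)."""
    delay_s = delay_ms / 1000.0
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2.0 * delay_s)
        if f > max_hz:
            break
        nulls.append(f)
        k += 1
    return nulls

# A 1 ms inter-channel delay puts the first null near 500 Hz,
# with further nulls spaced 1 kHz apart above it.
print(comb_null_frequencies(1.0)[:4])
```

Halving the delay pushes the first null an octave higher, which is why small changes in the Sound Delay setting can shift the mono coloration from a midrange hollowness to a subtle high-frequency sheen.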
Finding the best spot to position a microphone is hard because our ears are very used to listening to music from less-than-perfect positions, and our brain works as an incredibly sophisticated ‘post-processor’ for everything we hear. Even if there’s too much reverb, or strong room modes, or the sound is very unbalanced, the processor in our brain evens out the frequency balance without us noticing. Listening live, we think that an instrument sounds great, but as soon as we listen to the electroacoustically reproduced version of the ‘great sound’ (for instance through speakers in a different room), our brain no longer ‘auto-corrects’ it, and we realise that the recorded sound is far from well-balanced and needs all kinds of processing to fix it.
Luckily, there’s a simple trick that can stop our brain from trying to ‘post-process’ what we hear in the room, helping us to be more objective when it comes to finding the best spot to place a mic. If you close one ear with a finger, so you hear only with the other ear, it seems to be much harder for our brain to correct the frequency balance. What you hear sounds much worse than listening with both ears, but it sounds more like what the microphone will hear, so that’s a good thing. Stick your finger into one ear and move your head around the instrument while the guitarist plays the exact same pattern, at the exact same level, over and over (they need to be patient) and look for all the desirable characteristics I’ve described. This is perhaps the most important step towards achieving what I’d consider a ‘good’ acoustic recording.
Tuning is always both a challenge and a compromise when recording guitars. When you’re recording a sample library, less compromise is acceptable, and every single chord needs to be as tuneful as possible.
We wanted the playback engine to be able to create a large variety of different chord types, so rather than record all the different voicings and extensions separately, we recorded basic chord voicings such as fifths in two different registers, then programmed the playback engine to add sequenced single notes to realise thirds, sevenths, ninths and so on. To provide the playback engine with the necessary information about which single note has to be sequenced at which exact moment in time, with which velocity and with which articulation, we developed proprietary tools which extracted the required information from the original recordings using complex measurements, and saved the required metadata about each individual strum in a large database.
As the recorded types of chords were very different from what’s normally being recorded in a regular recording session, we ended up tuning each chord individually and having the musicians perform multiple patterns on a single chord in order to save tuning time. We tuned the root note of each chord to a reference tone, piled up the fifths and octaves on top of that and tuned them individually by ear. Tuning each of these voicings by ear was perhaps the hardest part of the job — we spent almost two hours a day tuning the guitar. Multiplied by the number of days we spent recording the sample content, the time we spent tuning adds up to almost 100 hours per product.
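Tuning fifths by ear means listening for beats between coinciding harmonics: a pure fifth beats not at all, while an equal-tempered fifth beats slowly. As a rough illustration (the 110 Hz low A is my example, not a figure from our sessions, and real guitar strings are slightly inharmonic, so the numbers are approximate):

```python
def et_fifth_beat_rate(root_hz: float) -> float:
    """Beat rate between the 3rd harmonic of the root and the 2nd
    harmonic of an equal-tempered fifth above it. A pure (3:2) fifth
    beats at zero, which is what tuning 'by ear' homes in on."""
    root_third_harmonic = 3.0 * root_hz
    et_fifth = root_hz * 2.0 ** (7.0 / 12.0)  # equal-tempered fifth
    return abs(root_third_harmonic - 2.0 * et_fifth)

# Low A (110 Hz): the equal-tempered fifth beats at roughly 0.37 Hz,
# one beat every three seconds or so, slow but audible on a sustained chord.
print(et_fifth_beat_rate(110.0))
```

This is also why the guitarists’ micro-adjustments of finger pressure matter so much: a few cents of correction is the difference between a fifth that beats and one that locks.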
Two of the main factors for stable tuning are thick strings with a lot of tension, and frets that are in very good shape, fine-tuned by a skilled luthier. Probably the most important factor, though, is the technique of the guitarist, who needs to be able to precisely control the finger pressure on each string to avoid sending it sharp. A good guitarist will also intuitively bend strings slightly to tune each voicing by ear individually. For that reason, using capos did not reliably create the best results: in many cases, the guitarist’s skill in fine-tuning while playing improved the overall stability of the tuning. The pitch of the lower strings in particular also rises with higher playing velocity, so it makes a lot of sense to tune the guitar while playing at the same level at which it will be recorded.
We also experimented with different tuners and reference tones. While Guitar Rig’s built-in tuner did a decent job, finding the best reference tone isn’t so easy. Using a piano sample works OK-ish, but a sine wave does not work very well, because the only audible beating takes place at the root frequency, which is simply too slow to reveal fine deviations. Moving the reference sine wave up by one octave or using a low-pass-filtered sawtooth waveform makes things a little better, but the very best reference tone, which would reveal within a split second even the slightest tuning deviation, would be a perfectly tuned version of the note that’s currently being performed. So we added Antares’ Auto-Tune, set to its fastest retune speed, as a send effect for the monitoring signal path. This makes it possible to add a perfectly tuned version of the very note that’s being played to the headphone monitor feed, in addition to the unprocessed input signal. After testing this method on thousands of single notes, it proved to be more accurate and revealing than any other method.
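The problem with a sine reference at the root can be quantified: the beat rate between a reference and a slightly mistuned note equals their frequency difference, which at the guitar’s lowest pitches is a fraction of a hertz. The 5-cent error below is my illustrative figure, not a threshold from the article.

```python
def beat_rate_hz(f_ref: float, cents_off: float) -> float:
    """Beat frequency between a reference tone and the same note
    mistuned by the given number of cents."""
    f_detuned = f_ref * 2.0 ** (cents_off / 1200.0)
    return abs(f_detuned - f_ref)

LOW_E = 82.41  # Hz, the guitar's lowest open string
# 5 cents sharp at the root beats at only ~0.24 Hz: one beat roughly
# every four seconds, far too slow for a quick tuning judgement.
print(beat_rate_hz(LOW_E, 5.0))
# The same error an octave higher beats twice as fast, which is why
# raising the reference (or listening to upper harmonics) helps.
print(beat_rate_hz(LOW_E * 2.0, 5.0))
```

An Auto-Tuned copy of the note being played goes further still, because beating then occurs simultaneously across all the shared harmonics, not just at one frequency.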