For many people, faithful capture of an orchestral performance is the Holy Grail of recording. We look back at how innovation in engineering has been driven by the pursuit of this goal.
For several hundred years, notation was the only means of documenting orchestral music. Primarily a means of conveying instructions to musicians, it obviously didn't capture the experience of hearing a performance. Sound recording and reproduction allowed this music to be enjoyed away from the concert hall for the first time, and today we experience recorded orchestral music through media ranging from audiophile digital recordings to television, film, and video-game soundtracks. Some of those forms have, in turn, influenced the music itself, but reaching this point has required a continuous process of innovation. In this article, I'll take you through the key developments of the last century or so.
In the earliest days, the recording process was entirely acoustic. Musicians performed in front of a large, tapered horn that channelled the sound energy towards a diaphragm enclosed in a soundbox at the narrow end of the horn. The resulting vibrations of the diaphragm modulated a cutting stylus, which etched an undulating spiral groove onto the surface of a warm wax disc or cylinder. Because the groove corresponded to the diaphragm's vibrations, the sound information was captured in a physical form that could be played back via a reverse process.
You could view this as analogue recording in its purest form, but the results weren't what we'd call 'pure' today. In fact, there were many things to be refined, but the acoustic recording process suffered from two main limitations. First was the limited range of frequencies that could be captured. Even under ideal conditions, an acoustic recording of this sort was restricted to a bandwidth of roughly 250Hz to 2.5kHz. Second, and probably more significant at the time, was the extremely directional nature of the recording process. In order for their contributions to be picked up, musicians needed to play directly into the recording horn. Efforts to overcome such challenges led to five significant developments during the acoustic era.
Orchestras were rearranged, with musicians placed in unconventional seating configurations, and certain sections placed on risers so that their sound holes (or bells, depending on the instrument) would face the large opening of the recording horn. To optimise the balance in the recording, vocalists, soloists and quieter instruments would be placed closer to the opening of the horn, and louder instruments placed further away or to the side. In extreme situations, louder instruments would be pointed at the back wall, with musicians facing away from the recording horn and watching the conductor in a mirror. (Insert your favourite joke about musicians ignoring conductors here.)
Recording rooms of the time were designed to be small and reflective, to contain sound and to direct 'stray' audio energy back into the horn. Sheet music was often suspended from the ceiling by strings, rather than being placed on stands. This was most likely as much to preserve space as to avoid obscuring the path between an instrument's sound hole and the recording horn.
Because there was no means of monitoring what was being recorded, the acoustic recording process was largely one of experimentation — it was necessary to make numerous test recordings before capturing the final take.
Works were often re-orchestrated for recording, to compensate either for the limited bandwidth of the acoustic recording process or for the lack of space around the recording horn. For example, brass instruments such as tubas and French horns were picked up much better by the recording horn, and would sound louder on recordings than stringed instruments like guitars, violins and celli. Celli were often substituted or doubled by a bassoon or bass clarinet, and double basses by a tuba or contrabassoon.
An unusual-looking instrument called the Stroh violin, or Stroviol, was invented in 1899 specifically to meet the need for extreme directionality in the studio environment. Since the traditional high-strings instruments (the violin and viola) have sound holes which face upward — perpendicular to, rather than parallel with, the mouth of the recording horn — they were ideal candidates for reinvention. The Stroh violin used a horn of its own to project the instrument's sound directly toward the recording horn (or, in live performance, the audience). (See the 'Stroh Instruments' box.)
Finally, the limited dynamic range of the recording mechanism meant musicians had to play much louder than in a concert setting, so musical dynamics were deliberately flattened out during performance. While this allowed more sounds to be captured and played back, it came at a cost: it led to recordings that were mere caricatures of what had been intended by the composer.
The Stroh violin and viola were radical redesigns of their traditional counterparts by John Matthias Augustus Stroh, an electrical engineer from London, whose designs cleverly borrowed from the technology of the phonograph itself.
The instruments employed a large aluminium horn that could be pointed towards the recording horn, which made the instruments both louder and more directional. The first and second violins and viola in an orchestra would often be reinforced, and in some cases substituted entirely, by two Stroh violins and a viola. This was a standard practice by 1905 and continued until the end of the acoustic era in 1925. (Other instruments, including the cello, double bass, ukulele, mandolin and guitar, were treated to a Stroh 'makeover', but were less commonly used.)
With no means to monitor what was being recorded, the acoustic recording process required numerous test recordings to be made before the final take was captured. During the test recordings, musicians would be rearranged to establish final positions for the orchestra. This was also a time to experiment with dynamics, to get a sense of how the music should be performed to reach an acceptable level for the recording. The test recordings would later be listened to by the conductor and technicians before final recordings were made. Modern engineers often take time out of a session to experiment with mic types and positions, but in the acoustic era the testing process could last many days!
Perhaps the most fascinating aspect of the acoustic recording process was how closely tied the results were to the quality of the recording medium. Imperfections in the blank wax records frequently caused problems in acoustic recordings, sometimes leading to entire sessions being rejected. To be useful for recording, the surfaces of wax blanks needed to be soft enough to be cut by the recording stylus, but rigid enough not to become a big waxy mess as they spun beneath the stylus. To keep the blanks' surfaces ready for recording, their temperature needed to be regulated, which was achieved by storing them in special warming cabinets during the recording session.
The electric era of recording owes its birth largely to three innovations created through collaboration between Bell Laboratories and Western Electric Research (often known simply as Western Electric). The first fruit of this partnership was the capacitor or condenser microphone, developed in 1916 by Edward Christopher Wente. Initially developed for long-line telephone transmission, the condenser microphone had properties which made it ideal for recording. Even early versions could capture frequencies up to 6kHz (compared with 2.5kHz for acoustic recorders) and later in this era could manage up to 15kHz.
Another innovation came in 1914, courtesy of Western Electric engineer Harold Arnold, who developed an amplifier that significantly improved on previous designs: a high vacuum replaced gas, and the electrodes and filaments were redesigned. The result was low distortion and more linear amplification, and by the early 1920s the system was able to capture frequencies from 50Hz to 6kHz — still limited, but superior to the acoustic process.
The third was a Bell Labs invention called the 'rubber line' recorder, a new method of cutting sound waves onto the recording medium. In this system, the electrical output of the matched-impedance amplifier was fed to an electromagnet, and the stylus moved according to the changes in the electromagnet's field, inscribing the musical waveform into the wax master.
These three innovations combined to form a new electrical recording process which captured a wider bandwidth and a more realistic sound image, with reduced harmonic distortion and a lower noise floor. These improvements in technology meant that it was now possible to capture and reproduce orchestral works as they were intended to be played. In other words, there was no longer a need to re-orchestrate or to substitute instruments. It also meant that, for the first time, percussion and timpani could be included in recording sessions; both were omitted in the acoustic era, since they tended to cause the stylus to jump out of the groove, ruining the recording.
The honour of the world's first electrical recording of an orchestra belongs to Leopold Stokowski and the Philadelphia Orchestra, with their 1925 recording of Camille Saint-Saëns' Danse Macabre for the Victor label.
The next major innovation that would also have major implications for recording the symphony orchestra was the invention of stereo recording. In 1931, EMI engineer Alan Blumlein (who would go on to contribute to a number of other innovations in various fields) invented a method for recording in stereo, and successfully demonstrated it at London's Abbey Road Studios. The story goes that, after a night at the movies with his wife, Blumlein was frustrated by the fact that the sound could not follow the direction of the actors as they moved across the screen. He quickly declared that he would find a way to make the sound follow the actor, and began working on a binaural system immediately.
In 1931, he filed a patent titled 'Improvements in and relating to Sound-transmission, Sound-recording and Sound-reproducing Systems'. His patent mentioned some 70 improvements to the sound recording process, but three in particular were necessary for the creation of binaural sound, and were immediately useful in the development of the stereo recording process.
First was the Blumlein Pair, a crossed-pair stereo mic array formed of two figure-of-8 microphones angled at 90 degrees to each other and mounted in close proximity along the vertical axis. With this 'Blumlein technique', a sense of realism is created, and the listener feels as though they are in the acoustic sound field. Second was a 'shuffling' circuit that processed the recording in a way that allowed more accurate recreation of the stereo image. Last, but certainly not least, was a system that would allow a gramophone record to be cut with two grooves that could be read simultaneously. Importantly, this was a system that not only allowed playback of stereo sound, but which could also play existing mono records. One of the first-known experiments in stereo recording with a symphony orchestra was the 1934 recording of Mozart's Jupiter Symphony, conducted by Sir Thomas Beecham at Abbey Road.
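To make the intensity-difference principle behind the Blumlein Pair concrete, here's a minimal sketch in Python. It assumes an idealised figure-of-8 polar pattern (gain = cos of the angle off the mic's axis) and crossed axes at ±45 degrees; the function names are my own, purely for illustration:

```python
import math

def blumlein_gains(theta_deg):
    """Gains of a crossed pair of ideal figure-of-8 mics at +/-45 degrees.

    theta_deg: source angle in degrees; 0 = straight ahead,
    positive = rotated towards the left mic's axis.
    An ideal figure-of-8 has gain cos(angle off axis).
    """
    theta = math.radians(theta_deg)
    left = math.cos(theta - math.radians(45))
    right = math.cos(theta + math.radians(45))
    return left, right

def level_difference_db(theta_deg):
    """Interchannel level difference (dB) for a source at theta_deg."""
    left, right = blumlein_gains(theta_deg)
    return 20 * math.log10(abs(left) / abs(right))

for angle in (0, 15, 30):
    print(f"{angle:>2} deg off-centre: {level_difference_db(angle):5.2f} dB")
```

Under this idealised model, a centred source produces identical levels in both channels, while a source only 30 degrees off-centre already yields an interchannel difference of more than 11dB — which is why coincident arrays localise sources so precisely.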
On the other side of the pond in the United States, Harvey Fletcher of Bell Laboratories was also investigating techniques for stereophonic recording and reproduction. Several stereophonic test recordings, using two microphones connected to two styli cutting separate grooves on the same wax disc, were made with Leopold Stokowski and the Philadelphia Orchestra at Philadelphia's Academy of Music in March 1932. The first (made on March 12, 1932), of Scriabin's Prometheus: Poem Of Fire, is the earliest-known intentional stereo recording that survives. Unlike the Blumlein recordings, which used his newly patented Blumlein array, these were made using spaced pairs. The spaced array offered a more exaggerated stereo image, albeit one with less precise localisation and a tendency to form a 'hole in the middle' under some circumstances.
Experiments in magnetic recording go back as far as Valdemar Poulsen's early demonstrations of steel wire recordings in 1899, but wire recorders never sounded very good! While several attempts were made to improve on the quality of wire recording, the real breakthrough in magnetic recording came in 1928, when a magnetic tape recording system was developed. The recording medium was a long strip of paper coated with a magnetisable powder. By 1935, the technology had been developed well enough for it to be shown to the public: the Magnetophon K1 was unveiled at the Berlin Radio Fair in August that year.
One of the first concerts to be recorded on a Magnetophon was Mozart's 39th Symphony. Conducted by Sir Thomas Beecham and played by the London Philharmonic Orchestra during their 1936 concert tour, the performance was captured on an AEG K2 Magnetophon. Over the years, as the technology was improved, the Magnetophon eventually reached an upper limit of 10kHz, which was a major improvement on the electrical disc system. The biggest breakthrough was the use of AC bias; invented independently in several countries, it was fully developed by German engineers during the Second World War and offered greatly reduced noise and distortion.
Due to its significant advantages over the electrical process (including not only the fidelity, but also the ability to extensively edit recordings) and a much simplified reproduction process compared with wax, magnetic tape was universally adopted in the recording industry by the late 1950s. While improvements to the process would continue to be made (not least multitrack recording), magnetic recording would become a mainstay of the recording industry for the next four decades or so. The next innovations in orchestral recording, then, would come not from a new recording medium, but from new approaches to capturing the stereo image (and beyond).
The Blumlein pair, like other coincident approaches to recording, relies on intensity differences between the signals captured by the two microphones. It results in strong stability and clear articulation of the stereo image, but it's not without compromise. Its biggest drawback is a tendency to be perceived as 'dry' or 'sterile' compared with spaced pairs. But while spaced pairs create a greater sense of spaciousness, that comes at the expense of articulation across the stereo image. The next inevitable phase, then, was to find a new technique that would combine the best qualities of both.
In March 1954, engineers Roy Wallace and Arthur Haddy at Decca Studios in London came up with an ingenious method of overcoming the shortcomings of a spaced pair. Wallace assembled a T-shaped steel mount, and attached a Neumann M49 microphone at each of the three ends. The array was suspended from a large studio boom and, upon looking at the array, the two engineers joked that it "looked like a bloody Christmas tree!" So the 'Decca Tree' array was born.
Over the years, many modifications to the Decca Tree would be made (which I hope to discuss in a future article), but the main principle was that a spaced pair was joined by a third, centrally placed microphone — and that central mic remains the nucleus upon which all other variants are based. Decca Trees remain the array of choice for film, television and video-game soundtrack recordings, due to their ability to reproduce a wide, articulate stereo image.
For most classical orchestral recordings, nothing more is needed than the stereo array of choice to capture the natural sound of the musicians in the room, and many classic Deutsche Grammophon or Decca recordings would have been made with nothing more than a Decca Tree. But orchestras are used in many other genres and, for example, pop and rock music and film scoring often have very different demands, in particular requiring a more focused mid-range. Modern recording setups, from the 1970s onwards, have thus evolved beyond the Decca Tree, augmenting it with a number of spot (close) microphones to allow clearer articulation of individual sections or instruments. This may include at least a few microphones on each of the individual sections of the orchestra, which can be mixed to taste alongside the main Decca or room array by the recording/mix engineers.
The first experiments in digital recording go back as far as the late 1960s, but it was not until the 1990s that digital recording in the form of digital audio workstations became ubiquitous in recording studios and scoring stages (recording studios with large rooms purpose-built for the recording of orchestras). The advantages of digital recording for classical music are obvious, and include low noise, low distortion and greater dynamic range. And producers of pop and soundtrack music, in particular, have taken advantage of its limitless track counts and editing ability: it's common for a pop or soundtrack orchestral session to feature at least 24-32 tracks, while blockbuster film soundtracks often feature several hundred.
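The dynamic-range advantage of digital recording is easy to quantify: linear PCM offers a theoretical dynamic range of roughly 6dB per bit. A quick illustrative calculation (in Python; the function name is my own):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits),
    i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
```

This gives about 96dB for 16-bit audio and about 144dB for 24-bit audio — comfortably beyond what the analogue media discussed above could manage.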
Perhaps the most recent relevant development is the digital microphone, whereby the preamplification and A-D conversion are placed as early as possible in the signal path (in the mic body), enabling noise and transmission loss to be reduced even further, and a few other practical advantages such as better remote control.
In the century or so since it first became possible to capture and replay the performances of a symphony orchestra, then, we've moved well beyond the ability to simply document orchestral performances, to a place where we produce larger-than-life productions that would have been the stuff of dreams for late Romantic composers like Berlioz and Wagner!
The challenges that the acoustic recording process presented didn't deter enterprising record labels and adventurous orchestras from trying their hand at studio recording during this period, and many successful recordings were produced. Here are some notable recordings that are worth checking out.
Odeon Nutcracker (1909): In 1909, Odeon Records (a label founded in 1903 by Max Straus and Heinrich Zuntz of the International Talking Machine Company in Berlin, Germany) created the first recording of a large orchestral work, and what may have been the very first record album, when it released a four-disc set of Tchaikovsky's Nutcracker Suite with Hermann Finck conducting the London Palace Orchestra.
Beethoven's Fifth Symphony (1913): Another significant milestone in the history of orchestral recordings was the 1913 recording of Beethoven's Fifth Symphony by Arthur Nikisch for Gramophone. While many successful orchestral recordings had taken place prior to this, this particular recording had the distinction of being the first time a star conductor and a professional orchestra had recorded a full-length work, without the instrument substitutions or re-orchestrations that were so common in this era.
The Stokowski Acoustic Recordings (1917): Beginning in 1917, Leopold Stokowski and the Philadelphia Orchestra began what would become a more-than-seven-year adventure, resulting in over 450 acoustic recordings. Due to the extremely unpredictable nature of the acoustic recording process, a mere 16 percent of the recordings were deemed acceptable!