For many people, faithful capture of an orchestral performance is the Holy Grail of recording. We look back at how innovation in engineering has been driven by the pursuit of this goal.
For several hundred years, notation was the only means of documenting orchestral music. Primarily a means of conveying instructions to musicians, it obviously didn't capture the experience of hearing a performance. Sound recording and reproduction allowed this music to be enjoyed away from the concert hall for the first time, and today we experience recorded orchestral music through media ranging from audiophile digital recordings to television, film, and video-game soundtracks. Some of those forms have, in turn, influenced the music itself, but reaching this point has required a continuous process of innovation. In this article, I'll take you through the key developments of the last century or so.
In the earliest days, the recording process was entirely acoustic. Musicians performed in front of a large, tapered horn that channelled the sound energy towards a diaphragm enclosed in a soundbox at the narrow end of the horn. The resulting vibrations of the diaphragm modulated a cutting stylus, which etched an undulating spiral groove onto the surface of a warm wax disc or cylinder. Because the groove corresponded to the diaphragm's vibrations, the sound information was captured in a physical form that could be played back by reversing the process.
You could view this as analogue recording in its purest form, but the results weren't what we'd call 'pure' today. Much remained to be refined, but the acoustic recording process suffered from two main limitations. The first was the limited range of frequencies that could be captured: even under ideal conditions, an acoustic recording of this sort was restricted to a bandwidth of roughly 250Hz to 2.5kHz. The second, and probably more significant at the time, was the extremely directional nature of the recording process. For their contributions to be picked up, musicians needed to play directly into the recording horn. Efforts to overcome such challenges led to five significant developments during the acoustic era.
Orchestras were rearranged, with musicians placed in unconventional seating configurations and certain sections placed on risers so that their sound holes (or bells, depending on the instrument) would face the large opening of the recording horn. To optimise the balance in the recording, vocalists, soloists and quieter instruments would be placed closer to the opening of the horn, and louder instruments further away or to the side. In extreme situations, louder instruments would be pointed at the back wall, with musicians facing away from the recording horn and watching the conductor in a mirror. (Insert your favourite joke about musicians ignoring conductors here.)
Recording rooms of the time were designed to be small and reflective, to contain sound and to direct 'stray' audio energy back into the horn. Sheet music was often suspended from the ceiling by strings, rather than being placed on stands. This was most likely as much to save space as to avoid obscuring the path between an instrument's sound hole and the recording horn.
Because there was no means of monitoring what was being recorded, the acoustic recording process was largely one of trial and error: it was necessary to make numerous test recordings before capturing the final take.
Works were often re-orchestrated for recording, to compensate either for the limited bandwidth of the acoustic recording process or for the lack of space around the recording horn. For example, brass instruments such as tubas and French horns were picked up much better by the recording horn,...