The WSDG AcousticLab: Dirk Noy & Gabriel Hauser

At the Basel AcousticLab, however, auralisations are played back over a spaced array of nine loudspeakers: five are arranged in a conventional surround setup, with the remaining four above them to provide an additional height dimension. "There's this format called Auro-3D, where you have the standard 5.1 setup and then you have a second layer that is higher up," explains Gabriel. "We installed that in the Vienna Symphonic Library Synchrostage control room in Vienna, and I listened to some recordings they had made there. We switched this height channel on and off, and it was the same kind of experience as switching off the surrounds in the 5.1 setup: everything got flat, the same kind of dimensional loss. These four channels really added to the spaciousness of what you perceived. So this experience, and the fact that our acoustic model can provide 5.1 impulse response sets, led to the decision to try this kind of setup in our lab."

The Acoustic Lab in Basel, Switzerland, uses an Auro-3D spaced speaker setup, but other configurations are possible.

The position and directivity of each speaker in the lab need to correspond exactly to those of the virtual microphones in the model, so it's necessary to generate a separate impulse response for each. "The impulse response that is calculated is not only weighted with the absorption coefficient of the surface but also with the microphone directivity," explains Dirk. "So that means if sound is being reflected at the rear of the auditorium and comes back from here, it has full level for the surround microphone [corresponding to, say, the left surround speaker in a 5.0 array]. But if it hits this microphone here [he indicates the front left loudspeaker in the array], it gets attenuated via the microphone's directivity pattern, so that's kind of an additional attenuation for each of those five channels."
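The per-channel weighting Dirk describes can be sketched numerically. The following is a minimal illustration only, not WSDG's actual algorithm: it assumes cardioid virtual microphones and hypothetical channel axes, and shows how one reflection's arrival direction produces a different attenuation for each channel's impulse response.

```python
import math

def directivity_gain(mic_axis_deg, arrival_deg):
    """Cardioid polar pattern: full gain on-axis, zero at the rear.
    Angles in degrees; returns a linear amplitude factor."""
    theta = math.radians(arrival_deg - mic_axis_deg)
    return 0.5 * (1.0 + math.cos(theta))

# Hypothetical 5.0 virtual-microphone axes (degrees, 0 = front centre)
mic_axes = {"L": -30, "C": 0, "R": 30, "Ls": -110, "Rs": 110}

# A reflection arriving from the rear left of the auditorium
arrival_deg = -110
gains = {ch: directivity_gain(axis, arrival_deg) for ch, axis in mic_axes.items()}
# The left-surround channel passes this reflection at full level;
# the front channels attenuate it according to their off-axis angle.
```

In a full model this weight would be applied, reflection by reflection, on top of the surface absorption losses.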

Compared with the Basel setup, Dirk and Gabriel describe the Berlin Ambisonics system as "more precise, but less tangible". "With B-format, you have an auralisation that is basically at one point," says Gabriel. "The microphone is at a single point, so there's no time difference between left, centre and right channels as there is in 5.1. This time difference gives you a bit more added spaciousness, I would say."
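Gabriel's time-difference point can be made concrete. A small sketch, assuming hypothetical speaker positions and a listener at the sweet spot: with spaced channels, each path length gives a different propagation delay, whereas a coincident B-format capture has no inter-channel delay by construction.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def arrival_delay_ms(speaker_xy, listener_xy=(0.0, 0.0)):
    """Propagation delay in milliseconds from a speaker to the listener."""
    dx = speaker_xy[0] - listener_xy[0]
    dy = speaker_xy[1] - listener_xy[1]
    return 1000.0 * math.hypot(dx, dy) / SPEED_OF_SOUND

# Hypothetical positions in metres: centre speaker ahead, left surround behind
centre = (0.0, 2.0)
left_surround = (-1.8, -1.2)
delta_ms = arrival_delay_ms(left_surround) - arrival_delay_ms(centre)
# delta_ms is the inter-channel time difference a single-point capture lacks
```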

Translating these impulse responses into something the client can hear is straightforward: they are simply loaded into the convolver plug-in in Magix's Samplitude and applied to the source material. Dirk: "Every channel has its own room simulator with the impulse response that corresponds to whatever direction the speaker's at."
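In principle the per-channel step is just convolution. A toy sketch in pure Python (not Samplitude's convolver): the same mono anechoic source is convolved with each virtual microphone's impulse response to produce one playback channel per speaker.

```python
def convolve(signal, ir):
    """Direct-form convolution; output length = len(signal) + len(ir) - 1."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def auralise(anechoic, channel_irs):
    """One playback channel per virtual-microphone impulse response."""
    return {ch: convolve(anechoic, ir) for ch, ir in channel_irs.items()}

# Toy data: a unit click as the 'anechoic' source, two short synthetic IRs
click = [1.0, 0.0, 0.0, 0.0]
irs = {
    "L":  [1.0, 0.0, 0.5],   # direct sound plus one early reflection
    "Ls": [0.0, 0.0, 0.8],   # surround channel: a later reflection only
}
channels = auralise(click, irs)
```

Real impulse responses run to seconds of audio, so production convolvers work in the frequency domain, but the operation is the same.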

Sight & Sound

Ideally, perhaps, the AcousticLab itself would be an anechoic chamber, to ensure that real room reflections don't get intermingled with the virtual ones. In practice, however, it's located in a well-treated and reasonably spacious room on the ground floor of the WSDG offices. Listening to auralisations in the AcousticLab can be quite an uncanny experience: the relatively subdued acoustic signature of the lab itself disappears, and the virtual space takes over. It also reinforces the extent to which our senses work together. One of WSDG's example projects is a remodelled concert hall at the local music conservatoire. This is quite a lively space, and without any visual cues to go on, an anechoic piano recording heard from the perspective of a virtual listener sounds reverberant and splashy rather than truly immersive. It's hard to sustain a sense of 'being there' when you have only to open your eyes and see that the far wall is not 15 metres away!

However, the same auralisation produces a completely different psychological reaction when you listen while viewing a 3D model of the space on an Oculus Rift VR headset (see opening photo, above). The newfound integration of auditory and visual data suddenly places you in the room with utter believability; as long as you remain in the sweet spot, you can turn your head, look up, look down; and what had previously sounded like a bad recording of a piano now convincingly comes across as a real acoustical event taking place in the hall where you're seated. Even a relatively crude 3D mock-up of a railway station has the same transformative effect on the listening experience.

This audio-visual experience makes very clear the value of the AcousticLab as a tool for bringing projects to life, and presenting choices to clients in a way that allows them to make subjective judgements. "If the clients say 'That's a neat gimmick. What do we learn from it?' then it failed," says Gabriel. "But if the clients say 'Ah, OK, now I understand what you mean when you say 0.56 STI is better than 0.51,' that will be the perfect scenario."

To generate the auralisations heard in the Acoustic Lab, a single anechoic recording is separately convolved with the impulse response for each virtual microphone in the model.

Size Isn't Everything

Client feedback so far has been positive, but it's early days, and there are still some limitations on the realism with which WSDG can reproduce an acoustic environment. Partly this is down to simplifications in the models themselves. "For example, in the acoustic program that we are using, you enter absorption data in octave bands," says Gabriel. "This, of course, is very rough, because between 1kHz and 2kHz a lot can happen. Just having two different kinds of absorption coefficients for these octave frequencies is a rough approximation." Likewise, models currently assume that absorption is constant regardless of angle of incidence, which is not the case with all real materials.

From the point of view of studio design, however, the most significant limitation is probably to do with room size. In a football stadium or railway station, room modes are so low in frequency as to fall well out of the audible range, so there is no need for them to be simulated. In a small control room, by contrast, managing room modes is perhaps the most important and challenging part of the acoustician's job; but WSDG's impulse responses cannot capture this aspect of a room's behaviour. "This kind of algorithm is purely ray-based, using geometrical acoustics," says Gabriel. "It's not a wave-based acoustics simulation program, which means that as soon as we come into the region of eigenmodes instead of a statistical soundfield, the model is not accurate any more. In smaller rooms with a Schroeder frequency of 150Hz, this program is only useful at 250 or 500Hz upwards, and the low frequencies are not very accurate."
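The Schroeder frequency Gabriel mentions follows from a standard approximation, f_S ≈ 2000·√(RT60/V), which is why it climbs as rooms get smaller. A small sketch with hypothetical room figures:

```python
import math

def schroeder_frequency(rt60_s, volume_m3):
    """f_S ~ 2000 * sqrt(RT60 / V): below f_S, individual eigenmodes
    dominate; above it, the soundfield is statistical and ray methods hold."""
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

# Hypothetical figures: a small control room versus a concert hall
control_room = schroeder_frequency(0.3, 50.0)      # roughly 155 Hz
concert_hall = schroeder_frequency(2.0, 20_000.0)  # roughly 20 Hz
```

For the hall, the modal region sits below audibility, so a ray-based model covers everything that matters; for the control room it cuts right through the bass range.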

Wave Behaviour

WSDG do, of course, model the low-frequency properties of control rooms, but they use different software to do so. One challenge for the future is to integrate these separate tools, as Gabriel acknowledges. "The quality of a control room is, I would say, 60 to 80 percent about the low end. And that's exactly what we cannot, at this moment, simulate with this program, because it uses geometric acoustics and not wave-based acoustics. But we are also using a wave-based program exactly for calculation of control rooms, low-frequency behaviour, modal response, to know where we have to put what kind of treatment, to make this room sound great. And this program is actually capable of outputting an impulse response, but only sensibly up to the Schroeder frequency — so we could have some kind of crossover frequency where we use the simulation from program A and then switch over to the simulation of program B, and then get a comprehensive picture of what's going on."
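The crossover Gabriel envisages can be sketched in its simplest form. This illustrates the concept only, not either program's actual method: complementary weights blend a wave-based result below a chosen crossover frequency with a geometric result above it, summing to unity everywhere.

```python
def crossover_gains(freq_hz, xover_hz):
    """Complementary weights that sum to 1 at every frequency:
    the wave-based result below the crossover, the geometric one above."""
    w_low = 1.0 / (1.0 + (freq_hz / xover_hz) ** 2)
    return w_low, 1.0 - w_low

def blend(freq_hz, wave_response, geo_response, xover_hz=150.0):
    """Blend the two simulations' responses at one frequency point."""
    w_low, w_high = crossover_gains(freq_hz, xover_hz)
    return w_low * wave_response + w_high * geo_response

# At 30Hz the wave-based model dominates; at 1kHz the geometric one does
low = crossover_gains(30.0, 150.0)
high = crossover_gains(1000.0, 150.0)
```

A real implementation would also have to align the two models' levels and phase around the crossover, which is where the hard work lies.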

"The challenge of the tools, really, is combining geometrical acoustics and wave acoustics," agrees Dirk. "Geometrical acoustics are good for mid and high frequencies, bad for low frequencies. And wave acoustics is good for low frequencies but too much calculation — like really much — for high frequencies. So people need to kind of combine those two in one simulation world and then have a kind of a filter or crossover to manage the results for it."

Recreating a small control room within the AcousticLab would also require its own acoustics to be further optimised, as Gabriel explains. "If the reverberation time of the project is low, we run into problems with our demo room, which is not anechoic. It's just pretty much controlled. So I would say the reverberation time of the project needs to be at least twice what we have here, which means 0.4 seconds [RT60]: so that's medium-sized or large-sized control rooms, medium-sized recording rooms, and higher. But I think we're moving in the right direction with the tools that are available."

In time, most of these challenges could be overcome, and WSDG are already working on some of them. Others are possible in theory, but don't offer enough commercial benefit to justify the enormous investment in time and research. "We are not primarily a research lab!" insists Dirk. "We are not IBM or whatever. We do projects, mainly. This is kind of a support tool to facilitate dialogue between different stakeholders on a project. Sometimes the struggle as an acoustician is to explain to the world what you are doing, because it's very abstract, and the goal really is to create a dialogue enhancer for talking to people about acoustics who have no idea about acoustics."