
Deep Space

Live Surround-sound Performance By John Crossley
Published January 2015

For this ambitious project, John Crossley had a full live band play through a 16-speaker system, to create an immersive performance inspired by the Rosetta spacecraft’s journey through the solar system.

Despite being too young to have experienced Pink Floyd’s quadraphonic gigs in the 1970s, I have long been fascinated by the possibilities of using surround sound in live performance. To be honest, though, I have always been a bit disappointed by many of the ‘experimental’ multi-channel experiences on offer — the effect can be impressive, but the choice of content used to show off these systems always seems to me to be a bit too, let’s say, challenging!

Why is it that hardly any ‘normal’ pop and rock gigs are presented in surround sound? Is it because it’s too technically challenging? Too expensive? Is the audience too indifferent? I’ve made it my mission to explore these questions and to organise live performances enabling me to try out various surround approaches and see what, if anything, the audience can get out of it.

With that in mind, I set out to write a composition that was eventually performed, in surround sound, in June this year. The project was inspired by the journey of Rosetta, the ‘comet chaser’ satellite, around our solar system, and its 10-year mission to unlock the secrets of our universe. It was to be composed from a spatial perspective and presented in an innovative way. The performance was designed to be an immersive experience, both aurally and visually, thanks to the help of some projected visualisations.

During the course of my research — which was the subject of my Master’s degree at the University of Derby — I explored several systems, some of which, such as Ambisonics, Wave Field Synthesis and, of course, the venerable 5.1, will be familiar to readers. But I was particularly intrigued by a system I came across from a small company based in Cambridge, UK, called Outboard (http://outboard.co.uk), who produce and sell a system called TiMax. It is essentially a multi-channel matrixing system which allows the user to route any or all of the inputs to any or all of the outputs, with control over level and, crucially, timing delay. The basic TiMax system offers 16 ins and 16 outs, but can be scaled up to 64 x 64 I/O.

The two important aspects of the unit are the timing delays, which allow you to take advantage of the precedence effect (see box), and its programmability, which includes the ability to morph between level and timing settings. What all this means is that you can feed any number of speaker channels, with the speakers placed pretty much anywhere (they don’t even have to be symmetrical, though it’s not a bad idea), and you can harness the precedence effect to create positioning and apparent movement of sounds.
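
To make the matrix idea more concrete, here is a minimal sketch — in Python, and emphatically not Outboard’s own code — of what routing one input to a set of speaker feeds with an independent gain and delay per crosspoint involves. The channel count matches the basic 16-out TiMax frame, but the example gain and delay values are invented.

```python
import numpy as np

SAMPLE_RATE = 48000          # matches the 48kHz used for the show
NUM_OUTPUTS = 16             # basic TiMax frame: 16 outputs

def render_input(mono_in, gains_db, delays_ms):
    """Route one input to NUM_OUTPUTS feeds, each with its own gain and delay.
    gains_db and delays_ms each hold one value per output channel."""
    out_len = len(mono_in) + int(max(delays_ms) / 1000 * SAMPLE_RATE) + 1
    outputs = np.zeros((NUM_OUTPUTS, out_len))
    for ch in range(NUM_OUTPUTS):
        gain = 10 ** (gains_db[ch] / 20)                          # dB -> linear
        offset = int(round(delays_ms[ch] / 1000 * SAMPLE_RATE))   # ms -> samples
        outputs[ch, offset:offset + len(mono_in)] = mono_in * gain
    return outputs

# Example 'image': loudest and earliest from output 3, quieter and later everywhere else,
# so the precedence effect pulls the perceived position towards speaker 3.
gains = [-12.0] * NUM_OUTPUTS
delays = [15.0] * NUM_OUTPUTS
gains[3], delays[3] = 0.0, 0.0
feeds = render_input(np.random.randn(SAMPLE_RATE), gains, delays)
```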

Although the TiMax system has been used in a variety of situations it tends to be mainly used in theatre-type installations. Robin Whitaker and Dave Haydon from Outboard were very keen to support the project and kindly loaned me a system to use.

Criteria

I had several criteria before I started. I wanted the concert to consist of original music — I’ve put on concerts of covers before, but this time I wanted more of a creative challenge (although there were several times during the project when I did wonder what possessed me to make that decision!).

Although I was prepared to use some sequenced tracks, I wanted the majority of the music to be generated live by a band of musicians. Apart from the TiMax system, I also wanted to try to use equipment that’s readily available to all — I didn’t want anybody to be able to say: “That’s all well and good but we don’t have access to nuclear discombooberators,” or whatever.

A Yamaha CL5 console was used for the front-of-house mix, and this was connected directly to the Focusrite RedNet units via Ethernet.

I wanted all the audio (as far as possible) to be distributed around the theatre digitally over Ethernet. The University had the world’s first installation of a Focusrite RedNet system, which uses the Dante protocol (see box) to send multiple channels of audio around an Ethernet network. We had a variety of units which we could use, and although not strictly necessary, it made for an elegant and flexible system.

I wanted to record audio and video of the performance, to generate a live stream of the performance on the night and, most importantly of all, I wanted the audience to have a truly enveloping experience, and to demonstrate that all gigs could sound like this!

A late-stage plan showing how all the equipment would be connected.

Fortunately, I was able to call upon advice and support from colleagues at the University — particularly from the Sound and Light programme team. I also enlisted a team of students to help set up, rig and operate the sound and lighting systems and the film recording.

Kit Lane, who lectures in Sound & Light and Technical Theatre, acted as Production Manager to organise and manage the theatrical design and rigging, and he also produced seven amazing videos, one for each track. The videos were projected on to a giant screen behind the band, and they were also used as cutaways in the final edits of the video clips of the performance, which can be viewed at www.youtube.com/syncopateTV.

Planning

The planning stage included organising the technical equipment and personnel that would be needed, getting a focus for the musical composition, and applying for Arts Council funding (see box).

Outboard provided the TiMax unit and the University already had several RedNet interfaces, but getting a sufficient number of speakers was a different story. The theatre has a system comprising two D&B Q7s plus two D&B QSubs on each side of the proscenium, with a D&B E12D centre fill, so I needed to find 13 further speakers of sufficient quality and power.

The speakers were kindly loaned to the project by Simon Lewis, a colleague at the University, who runs his own PA hire company. He was able to provide a set of JBL SRX 712 wedges that were flown around the auditorium, and these were driven by Yamaha P7000 amplifiers.

Mark Payne of the SFL Group very kindly loaned us a Yamaha CL5 digital mixer for FOH, plus Dante cards to enable our Yamaha M7CL monitor console to connect to the Dante network.

The network rig for the show was put together as a dual-star topology using the Dante-enabled devices. The system’s input stage comprised Focusrite RedNet units (three RedNet 4s and a RedNet 1), racked up with a Gigabit switch. This switch was also connected to the monitor console, a Yamaha M7CL with two Dante MY cards installed. A cable ran to a second switch at front of house, which was connected to the Yamaha CL5, a laptop and the TiMax unit. A really useful feature of Dante is that the control software can run over the same network as the audio traffic, allowing the RedNet devices to be controlled remotely without creating a separate network.

In the end, the network supported 32 channels of input, routed to the two show consoles and a recording rig. The FOH console routed two broadcast feeds to the monitor console, which in turn fed the web broadcast. A ‘rock & roll’ mix and the ‘spatial’ audio were routed to the TiMax unit, as well as to the recording rig.
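
Summarised as a simple table, the signal flow described above looks something like this; it is only a schematic restatement of the paragraph (actual Dante routing is set up graphically in the Dante control software), and the labels are mine.

```python
# Schematic summary of the show's signal flow (not an actual Dante configuration format).
routing = {
    "stage inputs (32 ch)":        ["FOH CL5", "monitor M7CL", "recording rig"],
    "FOH broadcast mix (2 ch)":    ["monitor M7CL", "web stream"],
    "rock & roll mix":             ["TiMax", "recording rig"],
    "spatial audio":               ["TiMax", "recording rig"],
    "TiMax speaker outputs (16)":  ["amplifiers/speakers (analogue from here on)"],
}
```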

The programming setup, with a scaled-down version of the surround system that was to be used on the night.

In total the network was carrying well over 100 channels of audio, all without incident!

Monitor Wizard

The main issue was not to generate too much sound on stage — any sound spilling into the audience would dilute the effect of the multi-speaker system, particularly as the surround effect relies on sound levels across sets of adjacent speakers. Therefore I decided we must use in-ear monitors, and that we wouldn’t use an acoustic drum kit. As the band hadn’t had any experience in playing with IEMs it was important to spend time getting used to playing with them. So I made the decision that all rehearsals were to be done with IEMs. In fact this turned out to be very straightforward; I had a multi-channel/multi-input headphone amp in my setup (a Behringer Pro 8 HA8000), and I took a direct feed or mic from each band member (guitar, bass, vocals, drums, trumpet). Everyone brought their own ear buds or headphones and I was able to set up individual mixes. This enabled me to include the necessary loud click in drummer Ben’s mix for synchronisation, whilst the rest of the band had variations to suit.

The only electronic drum kit we had available was ancient and had solid wooden pads — Ben smiled politely with gritted teeth when I asked him to use it! Fortunately Alan Barclay from Absolute Music stepped in and kindly loaned us a Trapps electronic kit, which had proper mesh heads. I used it to trigger a set of specifically designed acoustic samples in Kontakt running on a separate Mac Mini with a Focusrite Saffire 6 USB interface; this gave me enough outputs for separate kick and snare channels, plus a stereo out for all the other drum sounds.

For the performance we had separate on-stage feeds direct from the Dante Ethernet network into the M7 (stage right) and used individual headphone amps to feed wired headphones for myself, Ben on drums, Ethan on guitar and Kieran on bass, with wireless IEM beltpack systems for Nigel on trumpet and Kay the lead vocalist. As a result our monitor mixes were superb — I did consider generating binaural versions of the surround sound mix so that the players on stage could share the same experience as the audience, but realised that not only was that adding further (unnecessary?) technical complexity, but creating separate monitoring mixes for each player in this way wasn’t really feasible.

Composition

The music was all composed over a two-month period using Apple Logic Pro X. Although mainly composed by me, there were some interesting collaborations; one song was developed with a friend in New Zealand and involved Pro Tools files being sent across the ether. The title track was specially written for the performance by a young up-and-coming songwriter, Madelaine Shepherd; and most impressively I managed to persuade two of the European Space Agency scientists to let me record interviews over Skype and use excerpts in the show.

Knowing that the performance was to be performed in multi-channel surround sound altered the composition process in some quite interesting ways. String sounds and other synth parts were often stacked so that I would have more sound to ‘fill’ the auditorium with. Noises were added and treated with the space in mind, and effects were often set up and manipulated in a similar way. Thinking about left-right delays became circular! And having the height channels opened many interesting possibilities for sounds and effects. An example of this was the multi-layered ‘take-off’ effect in the first song. This was based on the actual sounds of the Rosetta rocket taking off from the ESA’s promotional video, but augmented with about 10 tracks of rocket sounds and filtered white noise, giving plenty of scope for sound positioning and movement, which climbed up in sync with the rocket in the video!

The finished compositions were then bounced down as individual tracks and transferred into Pro Tools — both for rehearsals and the show itself. Even though I prefer writing in Logic, Pro Tools for me is a must for recording and mixing. Using these tracks I was able to organise ‘partial’ rehearsals where necessary, and produce Soundcloud versions with parts missing for the band to learn along to. It also meant I could programme and rehearse the surround-sound setup without the band being needed on every occasion.

Get With The Programme

For the programming of the surround system I needed to create a ‘working scale model’ of the speaker setup in the theatre. I had my own idea of where I would like the speakers to be positioned, but I arranged a site visit to establish the practicalities of locating and, in several cases, flying the speakers.

Derby Theatre is a typical ‘Playhouse’-style venue. The stage is about 14m wide by 10m deep with a 10m-wide proscenium opening. The auditorium is about 14m from rear wall to front of stage, and between 10m and 14m wide. The average height of the auditorium is roughly 7m.

With these measurements I was able to set up a replica in a rehearsal room at the University. This was critical in order to programme positions and movements of sounds that would translate accurately on the night. The setup included two speakers flown overhead — which I hadn’t tried before! I had used this method of creating a scaled-down version of the performance space and speaker positions with TiMax before, and images and movements made this way tended to translate pretty well — although some tweaking is usually necessary once you get into the ‘real space’.
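
To illustrate the ‘working scale model’ idea, a few lines of Python will map speaker positions from the venue into a smaller room while keeping the proportions intact. The auditorium dimensions are roughly those given above, but the rehearsal-room size and the example speaker position are invented for the purpose.

```python
# Hypothetical example: scale venue speaker positions into a smaller rehearsal room.
VENUE = (14.0, 14.0, 7.0)       # width, depth, height of the auditorium in metres (approx.)
REHEARSAL = (8.0, 8.0, 3.5)     # assumed rehearsal-room dimensions

# Use a single uniform scale factor so the proportions (and relative angles) survive.
scale = min(r / v for r, v in zip(REHEARSAL, VENUE))

def scale_position(pos):
    """Map a speaker position (x, y, z) in the venue to the rehearsal room."""
    return tuple(round(c * scale, 2) for c in pos)

# e.g. a speaker flown at the rear-left corner of the auditorium, 5m up:
print(scale_position((0.0, 14.0, 5.0)))   # -> (0.0, 7.0, 2.5)
```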

Using performances I had captured during early rehearsal and some of the sounds I generated during the composition process, I was able to mock up a close approximation of the whole gig sound, which was very helpful in making decisions about balance, positioning and movement of all the sounds.

Stage Planning

I am a great believer in diagrams, plans and lists — I’m one of those people who fires up Excel whenever I’m thinking of buying a new effects unit/car/holiday! About halfway through the project I needed to visualise how it would all connect together in the hall, how we would use RedNet and the digital desks, and also what equipment we would need and where we would be short of resources. I also like to present things visually; it helps me and it certainly helps discussions with others. The plan shown is about version five, I think, and it’s pretty close to the actual setup used on the night. It was much easier to send this as a reply to the Production Manager when he asked: “What gear are you using and where will it need to be placed?”

There were still questions that needed to be answered, however. How many inputs did we need on stage? Could we record the live show, and if so, how? Which parts of the rig would be handled digitally over Ethernet, and which would be analogue?

As far as inputs were concerned, we planned on 24, using three RedNet 4s (each with eight preamps). For the show we actually added an extra RedNet 2, which gave me potentially 40 inputs, as I ended up having a few more outputs from my rig than I anticipated.

The recording turned out to be quite straightforward; a Mac Pro with a RedNet PCIe card running Pro Tools took care of that. In the plan the recording setup is shown at the back of the hall, but in reality it was on stage with me — I felt more confident that way! Of course that’s part of the beauty of using digital audio over Ethernet — all the live feeds were available anywhere in the room.

The TiMax unit at the top of this rack was used to apply the required delays to all 16 speaker outputs.

The RedNet 3 shown in the plan at the FOH position wasn’t needed in the end, as we were fortunate to get the CL5 mixer with built-in Dante interfacing. The only analogue connections were inputs on stage from instruments, outputs from the monitor desk to the IEM transmitters, and the outputs from TiMax to the speakers. The MIDI cable connecting my on-stage Pro Tools rig to the TiMax unit to trigger cue points was extended using a couple of MIDI-to-XLR adaptors and a long XLR mic cable — and it worked beautifully. I was considering sending the MIDI wirelessly but chickened out in the end; you can have too many variables!
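
In the show the cue-triggering MIDI came straight from my Pro Tools session, but if you wanted to fire the same cues from a script instead, a minimal sketch using the Python mido library might look like this. The port name and the ‘one program change per cue’ mapping are assumptions — the actual MIDI mapping is defined when programming the TiMax show.

```python
import mido

# Hypothetical MIDI port name; list the real ones with mido.get_output_names().
PORT_NAME = "USB MIDI Interface"

def fire_cue(port, cue_number, channel=0):
    """Send one program change; this assumes each TiMax cue has been mapped
    to a program-change number in the show programming."""
    port.send(mido.Message('program_change', program=cue_number, channel=channel))

with mido.open_output(PORT_NAME) as port:
    fire_cue(port, 12)   # e.g. start a movement cue such as the 'take-off' effect
```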

Take It To TiMax

TiMax is a multi-channel programmable matrixing unit with control over levels and delays. The unit I was using had 32 channels and was equipped with Dante, as well as analogue outs for the speaker feeds. The basic idea is to use delays to take advantage of the precedence effect (see box) and so create positioning of audio sources. The unit also comes equipped with a built-in hard drive for playback cues, which could be used for backing tracks, sound effects and so on. However, I was using it in a ‘live’ mode, processing live inputs from my own playback rig and from sources on stage.

The first part of the process, once your speakers are in position, is to set up a collection of ‘images’. These are effectively snapshots of levels and delays designed to ‘position’ a sound source in a particular place. These images can then be dropped onto any input, triggered in a cue or used as a starting, ending or passing point in a movement, with the TiMax morphing between the images.

I always start by creating a complete set of images that will cover all my likely needs. I then nominate the TiMax inputs for various ‘jobs’ (although these are quite dynamic), so I might set inputs 1+2 to be a static wide-panned stereo image, inputs 3+4 to be left and right but in the middle of the auditorium, 5+6 to be wide-panned rear sounds, and so on. Then I will have several inputs reserved for dynamic movements or for specific channels or instruments; these can have their positions altered by using cues to trigger the morphing actions.
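
As a rough illustration of what ‘morphing between images’ involves — not TiMax’s actual algorithm — the following sketch reduces an image to a per-output (gain, delay) pair and interpolates linearly between two such images over the course of a move. The speaker numbers and values are invented.

```python
# Illustrative only: a TiMax 'image' reduced to per-output (gain_dB, delay_ms) pairs.
image_front_left = {out: (-60.0, 20.0) for out in range(16)}
image_front_left[0] = (0.0, 0.0)          # loudest and earliest from speaker 0

image_rear_right = {out: (-60.0, 20.0) for out in range(16)}
image_rear_right[11] = (0.0, 0.0)         # loudest and earliest from speaker 11

def morph(img_a, img_b, t):
    """Interpolate between two images; t runs from 0.0 (img_a) to 1.0 (img_b)."""
    result = {}
    for out in img_a:
        gain_a, delay_a = img_a[out]
        gain_b, delay_b = img_b[out]
        result[out] = (gain_a + (gain_b - gain_a) * t,
                       delay_a + (delay_b - delay_a) * t)
    return result

# Ten steps of a front-left to rear-right move:
snapshots = [morph(image_front_left, image_rear_right, step / 10) for step in range(11)]
```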

I was also really excited to be using a feature that appeared in the latest software release, called ‘Panspace’. This presents a two-dimensional screen onto which you first place your prepared images — you can even drop in a JPG of your actual space! So you can position your images in the venue, and then create a path around and through the space by clicking ‘hit points’ with the mouse. Note that, as well as the 14 speakers positioned roughly in a flat circle, I had two speakers flown ‘at height’ above the audience, so I was able to send sounds up overhead as part of these paths.
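
The ‘hit points’ idea can be pictured as sampling positions along a path through the room; each sampled position would then be rendered as an image like those above. The sketch below just does the path interpolation, with made-up coordinates — it illustrates the concept rather than how Panspace itself works.

```python
# Illustration of a 'path through the space': linear interpolation between hit points.
# Coordinates (x, y, z) in metres are invented; z lets the path rise towards the flown speakers.
hit_points = [(7, 1, 2), (12, 7, 4), (7, 13, 6), (2, 7, 4), (7, 1, 2)]

def path_positions(points, steps_per_leg=10):
    """Yield evenly spaced positions along the straight legs between successive hit points."""
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        for i in range(steps_per_leg):
            t = i / steps_per_leg
            yield (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, z0 + (z1 - z0) * t)

positions = list(path_positions(hit_points))   # each position becomes a level/delay image
```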

I was then able to trigger these movements using MIDI from on stage. For this to work the TiMax has to be the final piece of kit that connects to the amplifiers/speakers. This means, of course, that you need to have 100 percent confidence in the system. Having used it a few times I can say I’ve never had any problems: it seems to be very well engineered, it has a redundant Dante port and power supplies, and can operate fully without a computer connected.

Happy Landings

So did it all work, was it all worth it and what did the audience think? Well, from a sound point of view I was really happy with the outcome. Certainly it’s a big ‘wow’ to have sounds moving around the auditorium, but for me it was the actual sense of space and envelopment that impressed me the most. Having a choir or string section spread around you with delays and reverbs correctly positioned is a tremendous experience, although to be honest a lot of that can be quite subtle — until you go back to a stereo track and you ask, “Where has all the sound gone?”

In order to maintain the spatial illusion created by the FOH system, on-stage sound had to be kept to a minimum — which meant using in-ear monitors rather than stage wedges.

That’s the other big difference with a ‘standard’ rig: the whole hall is filled with sound, without blasting the audience from the front — we’ve all experienced those times when it’s too loud at the front and not really loud enough at the back! With this sort of system there’s an even sound level throughout the auditorium, and the overall sound is not unlike listening in your own living room at moderately loud levels — loud enough to be exciting but not so loud as to be uncomfortable.

It was quite a lot of work, although much of that was in the pre-production and planning. In turn this meant that the venue setup was not too different to a ‘normal’ gig — just lots more speakers and with a few extra bits of kit involved. So could any band do this easily? Well maybe not easily, but it is certainly doable, and it would be relatively easy to repeat at different venues with a few tweaks each night.

What about the audience? I talked to many of them after the show and we distributed questionnaire sheets and had online questionnaires available; the feedback I got was really gratifying. Everyone who responded really enjoyed it, and we had comments such as “This would make me go to a lot more concerts,” and “It sounded fantastic — I was really in the middle of the sounds.”

So, would I do it again? How does next week fit your schedule? This time, let’s have 24 speakers!

Precedence Effect

The precedence effect, also known as the ‘law of the first wavefront’, describes how, when two similar sounds from different locations arrive in quick succession (within about 1-5ms for simple sounds with fast transients, and up to around 40ms for more complex sounds), they fuse and the brain can’t tell them apart: the perceived location is defined by the sound that arrives first. It’s this phenomenon that helps us localise sounds in reverberant spaces. By setting different delays across several speakers, it is possible to ‘fool’ the brain as to the location and/or movement of a sound.
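
To put rough numbers on this: sound travels at about 343 metres per second, so each metre of extra path length adds roughly 2.9ms of delay, which means the arrival-time differences between speakers in a typical auditorium sit comfortably inside the fusion windows quoted above. A couple of lines make the arithmetic explicit:

```python
SPEED_OF_SOUND = 343.0   # metres per second at room temperature

def extra_delay_ms(extra_path_m):
    """Extra arrival time for a sound that travels extra_path_m further."""
    return extra_path_m / SPEED_OF_SOUND * 1000

print(extra_delay_ms(1.0))    # ~2.9 ms per metre of path difference
print(extra_delay_ms(10.0))   # ~29 ms: still within the ~40 ms window for complex sounds
```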

Dante

Dante is a trademark of Audinate (www.audinate.com), and it is a network technology that allows the transmission of very low-latency multi-channel uncompressed digital audio over standard CAT 5e or CAT 6 Ethernet cable. It has been adopted by several audio manufacturers, such as Focusrite, Yamaha, Allen & Heath and Soundcraft. It can operate in ‘Unicast’ mode (with point-to-point connection) or as a ‘Multicast’ system, which can send audio streams to several devices simultaneously.
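
Some back-of-envelope figures (ignoring packet overhead, and assuming 24-bit samples) show why a Gigabit network takes this in its stride: the show’s 32 input channels amount to well under 40Mbit/s of raw audio, and the 5ms device latency mentioned below corresponds to just 240 samples at 48kHz.

```python
# Back-of-envelope Dante figures (ignores packet/header overhead; 24-bit assumed).
SAMPLE_RATE = 48000        # Hz, as used for the show
BIT_DEPTH = 24             # bits per sample (assumption)
CHANNELS = 32              # stage input channels
LATENCY_S = 0.005          # 5 ms device latency setting

raw_bitrate_mbps = CHANNELS * SAMPLE_RATE * BIT_DEPTH / 1e6
latency_samples = LATENCY_S * SAMPLE_RATE

print(f"~{raw_bitrate_mbps:.1f} Mbit/s of raw audio")   # ~36.9 Mbit/s on a 1000 Mbit/s link
print(f"{latency_samples:.0f} samples of latency")      # 240 samples
```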

Simon Durbridge (FOH) talks about the Dante setup for the show: “All signal routing for this network was determined by use of the Dante network control software. All devices were instantly recognised by the network controller, and the network configuration process was hassle free. All devices were set to run at a uniform implied latency of 5ms, and at a sample rate of 48kHz. Dante works via a method of instant recognition, where devices connected to the network determine addressing information automatically, and devices are automatically configured in line with the rest of the network.

A screenshot of the TiMax control software, showing panning ‘hit points’ being used to create an elaborate back-to-front ping-pong effect.

“After a discussion with Will Hoult at Focusrite, the RedNet units were configured to output in Multicast mode, which reduced network traffic. The units were controlled remotely using a laptop and control software from the front-of-house position. The units were easy to use and flexible, and the control software allowed me to see what was going into the stage boxes remotely, which was very useful.

“The Yamaha consoles were set up to be configured by the Dante master unit, which was the control software run on the laptop. After some tweaking, the network ran smoothly. The most surprising thing about the adventure of orchestrating so much data transfer is that, more or less, everything just worked. Not only that, everything kept working. The TiMax unit was a very impressive and powerful piece of kit.”

Probe Flight

By the time you read this, the European Space Agency should hopefully have successfully landed a small probe on the surface of the comet Churyumov-Gerasimenko. I first became conscious of the Rosetta satellite early this year when it was due to be woken from hibernation. The more I read about the 10-year mission the more amazed I was, and I realised it would make a great inspiration for this project. It had all the ingredients: technology, space and a dash of derring-do! The fact that I managed to get two of ESA’s top scientists involved was an extra bonus. The songs and music loosely tell the story of the satellite, its mission and what may happen when it finally disappears around the far side of the sun.

Arts Council Funding

There are a few opportunities for funding arts activities if you search hard enough! My first port of call was the PRS for Music Foundation (www.prsformusicfoundation.com), which has a variety of grants available for music-related activities. I wasn’t successful this time, as it’s quite competitive, but it’s certainly worth checking out — your project might be just what they’re looking for.

I then approached the Arts Council and their Grants for the Arts programme (www.artscouncil.org.uk/funding/grants-arts), and this time I was successful, securing a grant that helped cover marketing costs and so on. It also meant that the performers could all get paid!

I have to say that putting together a grant application is not a walk in the park. There are criteria you have to hit, targets you have to achieve and reports to write. Be prepared to do a lot of research and several re-writes.

See (& Hear) For Yourself!

On the night, we had students operating three cameras feeding into a TriCaster vision mixer for a live video feed, which was streamed on the Internet. In order to give the audience out there a taste of what was happening in the theatre, we also set up a binaural dummy head, whose output was offered as an alternative feed that listeners could enjoy on headphones.

You can watch videos of the performance with both stereo and binaural soundtracks, as well as a short documentary about the show, at www.youtube.com/user/SyncopateTV.