John Chowning

Pioneer Of Electronic Music & Digital Synthesis

By Gordon Reid
Published September 2015

John Chowning at the Vox Festival. Photo courtesy Histeria.

A visionary in the field of electronic music, John Chowning invented FM synthesis and set up one of the world’s most influential research centres.

Although its name is reasonably well known in the UK, I suspect that few people on this side of the pond know precisely what the Berklee College of Music is, or what it does. But perhaps that’s about to change as it continues to extend its reach beyond its ancestral stomping ground in Boston (Massachusetts, not Lincolnshire), and moves into Europe.

Founded in 1945, Berklee was the first school in the USA to teach the popular music of the day (jazz) and, in 1954, was perhaps the first to recognise the electric guitar as a serious instrument. The current name was adopted in 1970 and further innovations followed, including the world’s first degree in scoring for film (1979) and the world’s first college-level degree ‘major’ in music synthesis (1984). No doubt an affront to the conservatives of the music world, the college also has two record labels that promote its rock, pop and jazz musicians, runs a major in music therapy, and includes hip-hop within its curriculum. More recently, in 2012, Berklee opened a campus in Valencia (Spain, not California), and in June 2015, the Valencia campus inaugurated the Vox Festival to celebrate the marriage of music and technology.

Keynote speaker and performer at this year’s inaugural Vox Festival was John Chowning. Sound On Sound readers will know Chowning as the discoverer of FM synthesis, but he is also a renowned pioneer in the field of electronic music, and his musical and technical achievements extend well beyond his most famous work. Indeed, the Center for Computer Research in Music and Acoustics (CCRMA) that he founded at Stanford University has been a hotbed of innovation for more than four decades.

The Samson Box

Two days after the Vox Festival, I talked to Chowning. We began by discussing the early days of electronic music and some of the systems that were used for developing it. He told me: “The photo shows the five of us who founded CCRMA in 1974 — Leland Smith, John Grey, Andy Moorer, Loren Rush and myself. We also worked with Pierre Boulez, who consulted us regarding the creation of IRCAM in Paris in 1977, and even installed the same computer as we were using at that time [a DEC PDP10] so that the two centres could share software.”

The founders of Stanford University’s Center for Computer Research in Music and Acoustics. From left: Leland Smith, John Grey (standing), James Moorer (sitting), John Chowning, Loren Rush. This photo was taken in 1975, the year after the Center’s foundation. Photo: Patte Wood

Pierre Boulez with the CCRMA team in 1975. Photo: J Mercado

Another computer that proved to be instrumental in the development of digital music was the Systems Concepts Digital Synthesizer, the so-called ‘Samson Box’ developed by MIT graduate Peter Samson and his team. “It was a major cost for us”, explained Chowning, “but we decided that this would have the broadest impact on the future of our work at that time. We started using it in 1977, and finally shut it down when Apple developed their Unix system based on the Motorola 68000 processor. It probably produced more minutes of music than any digital synthesiser before the DX7 appeared, although the Yamaha was a quite different device and wasn’t programmable to the same degree as the Samson Box.

“Some wonderful music was done on the Box. Mike McNabb composed his ‘Invisible Cities’ on it, and Bill Schottstaedt (one of our composers and research staff at CCRMA), David Jaffe and I all produced a number of pieces that became pretty prominent at that time. We also took some pieces from earlier years, such as my composition ‘Turenas’, rewrote them in Samson Box code, and re-recorded them through the higher-quality DACs that we now had, although we still had to record them to analogue tape because we didn’t have access to digital recording.”

Get In The Queue!

The way in which digital music was created at that time was a far cry from the modern world. “Although the Box was a computer highly optimised for digital signal processing, we didn’t control it in real time because we decided to make it accessible to everyone, and ran a time-sharing environment so that most of the time in composition was spent in preparing the command files for the device. Once those files were written, the music — four channels of audio with integrated reverberation — could be produced in real time and recorded to analogue tape. The Box then became available to the next user in the queue. Running it as an assignable device like a computer printer avoided the problems that would have occurred if we had run it in a studio in which one user could tie it up for hours on end.

“Interestingly, about six years ago, Bill Schottstaedt decided to build an emulator on a modern Unix system, so we can now run the command files to re-record the pieces that were created (or recreated) on the Samson Box but with 32-bit, floating-point precision.”

I asked him whether there was any truth in the common belief that the technical limitations of the original system had contributed in some way to the character of the music, or whether the new recordings were superior. “There was a remarkable clarity and tonality in the new recordings that wasn’t present in the originals, and of course a lack of tape hiss, but we heard artefacts that hadn’t appeared in the originals, so we had to compensate for these. For example, I had a low-frequency tone in ‘Turenas’ that began with an instantaneous attack. In the emulated version, we heard a click that wasn’t supposed to be there, so we modified the data to create a little slope to make it go away. So we have to manage aspects of the modern technology so that the listener still hears what the composer intended.”
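To make the fix concrete: a tone that begins at full amplitude contains a step discontinuity, which the ear hears as a broadband click, and a few milliseconds of fade-in removes it. Here is a minimal sketch in Python; the frequency and ramp length are illustrative, not values taken from ‘Turenas’:

```python
import numpy as np

SR = 48000  # sample rate in Hz

def low_tone(freq=55.0, seconds=1.0, ramp_ms=0.0):
    """Generate a sine tone, optionally easing in over ramp_ms milliseconds."""
    t = np.arange(int(SR * seconds)) / SR
    tone = np.sin(2 * np.pi * freq * t)
    ramp_samples = int(SR * ramp_ms / 1000)
    if ramp_samples > 0:
        # A short linear slope replaces the instantaneous attack,
        # removing the step discontinuity that produces the click.
        tone[:ramp_samples] *= np.linspace(0.0, 1.0, ramp_samples)
    return tone

clicky = low_tone(ramp_ms=0.0)   # instantaneous attack: audible click
smooth = low_tone(ramp_ms=5.0)   # 5ms slope: click gone
```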

The Next Step

Steve Jobs’ NeXT computer was perhaps the first to be manufactured with high-quality audio intrinsic to its design, and Jobs asked several people from CCRMA — Julius Smith, David Jaffe and Mike McNabb, amongst others — to work on aspects of its signal processing. “We had gained a great deal of experience in synthesis and sound processing using the Samson Box, and those ideas were recruited into the NeXT computer,” explained Chowning. “When it was launched, it briefly became a new standard, and Perry Cook wrote a wonderful program for it called Sheila, which was a simulation of the female singing voice. This helped to propel the NeXT into the mainstream of the computer music world for a while. There was some lovely music composed on it but, soon after that, general computers became so powerful that, in the ’90s, there was no longer a requirement for special systems. It’s a different world these days.”

I asked whether he felt that the constraints of the early hardware and software had aided or inhibited creativity. “It aided it, there’s no doubt about that. There was a noticeable difference in the way people worked back then. Today, people tend just to ‘do things’, often with not a great amount of thought — more like children in the playground. The idea of careful calculation in composition has now become much less important because the cost of a mistake is negligible, whereas the cost of synthesis on a big time-shared computer, and therefore the cost of a mistake, could be enormous. So the amount of care one applied to the building of ideas was very much greater in the past, and there was probably something lost when we moved to real-time computation. But, on the other hand, the iteration of ideas and their rapid development is now much quicker, so that’s another kind of benefit, and I don’t know how to weigh those two against each other.

“But I still work with modern computers much as if I were submitting a job in the 1960s or 1970s; I tend to think through what I’m doing as if the cost of a mistake would be a 24-hour delay before I would have the chance to hear the intended sound again. This makes composition quite slow, but there’s a payoff in being able to experiment in a complicated software environment that I can control myself. As Max Mathews once said, ‘I like to do my own programming because I then know to whom to go if there’s a problem.’ I really enjoy coding, and am now working in Max/MSP. I’m not a great programmer, but I can get the computer to do the job that I ask it to. So when I started composing the piece that we performed at the Vox Festival on Thursday, I started building my own patches out of processing objects and arithmetic units in Max/MSP as I wrote. It was a very rich experience for me. I felt that Max/MSP became part of the poetry of the writing process — I never felt detached from the making of the music.”

Code & Poetry

I asked whether the tools and structures of Max/MSP had therefore caused his music to take different directions than might otherwise have happened. “Yes. When I was writing ‘Stria’, I discovered quite by accident that a procedure could be recursive. I didn’t have the mathematical background to know quite what this meant, but when I discussed it with one of the programmers in the AI lab, I realised that it had direct musical application. So I used recursive pitch structures in ‘Stria’, which was a very powerful and musical idea. For me, Max/MSP is comfortable and, in some sense, cuddly. As I said, I’m not a super-competent programmer, but I’m able to get done what I need to get done, and it’s all poetry to me.”
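Chowning has documented elsewhere that ‘Stria’ builds its frequency space on powers of the golden ratio. The sketch below is only a loose illustration of a recursive pitch structure in that spirit, not a reconstruction of his algorithm; the ratio, seed frequency and depth are all illustrative:

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio, widely reported as the basis of 'Stria'

def pitch_tree(base_hz, depth):
    """Recursively expand a frequency into golden-ratio-related partials.

    Each frequency spawns two children, one ratio above and one below,
    so the same proportional structure recurs at every level.
    """
    if depth == 0:
        return [base_hz]
    return ([base_hz]
            + pitch_tree(base_hz * PHI, depth - 1)
            + pitch_tree(base_hz / PHI, depth - 1))

# Three levels of recursion from a 261.6 Hz (middle C) seed:
print(sorted(round(f, 1) for f in pitch_tree(261.6, 3)))
```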

Hearing this, I suggested to Chowning that his image as a technologist might be at odds with how he sees himself as a musician and composer. “Exactly. I love working on a piece. I probably worked for 18 hours a day for six months to get ‘Voices’ ready for its first performance but, if I had been solving the same sorts of problems for an insurance company, I would have quit in an instant. The fact that the result was music made it a deeply passionate endeavour.

“So it was with FM synthesis, which was altogether a discovery of the ear, not one of mathematics. I was working on spatialisation, so I needed sounds that would localise, sounds that had some form of dynamism so that they could be distinguished from the reverberant field. Pitch-modulation seemed to be the most salient feature of the sound that would allow me to do that, so experimenting with vibrato was an obvious thing to do, and I just kept going until I realised that I was no longer hearing changes in the time domain, but rather I was hearing changes in the frequency domain. So everything in my work has been driven by my ear to a musical end.”

Preparations for a concert at CCRMA, 1981. Photo: C Painter
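The transition Chowning describes, from vibrato heard as a wobbling pitch to modulation heard as timbre, is easy to reproduce. A minimal Python sketch, with illustrative rates and depths:

```python
import numpy as np

SR = 48000  # sample rate in Hz

def modulated_sine(carrier_hz, mod_hz, depth_hz, seconds=2.0):
    """A sine wave whose frequency is modulated by another sine.

    At low mod_hz/depth_hz this is ordinary vibrato; push both up and
    the modulation crosses into the audio band, where the ear hears
    a new spectrum (sidebands) rather than a moving pitch.
    """
    t = np.arange(int(SR * seconds)) / SR
    index = depth_hz / mod_hz  # modulation index I = peak deviation / f_m
    # Phase-modulation form of FM: carrier phase plus scaled modulator.
    return np.sin(2 * np.pi * carrier_hz * t
                  + index * np.sin(2 * np.pi * mod_hz * t))

vibrato = modulated_sine(440.0, 6.0, 10.0)     # heard as a singer's vibrato
fm_tone = modulated_sine(440.0, 440.0, 880.0)  # heard as a new, brassy timbre
```

In the second call the modulator sits at the carrier frequency itself; whenever the carrier-to-modulator ratio is a simple integer the resulting spectrum is harmonic, which is the property Chowning went on to exploit musically.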

Bring Me MUSIC

Of all John Chowning’s many achievements, it’s his role in developing FM synthesis that will perhaps be best known to SOS readers. The story of FM synthesis began at the Bell Telephone Laboratories in the ’50s, where a gentleman by the name of Max Mathews began experimenting with digital computers to see whether they could become a viable means of generating audio signals. Mathews was far ahead of his time, if only because he realised that, unlike the analogue signal generators of the day, computer-generated audio could be consistent and controllable. So, in 1957, he wrote MUSIC I, a program coded in assembly language for an IBM 704 valve computer. MUSIC I had a single triangle-wave oscillator and was capable of generating only the most basic sounds so, in 1958, Mathews wrote MUSIC II, which incorporated four of these oscillators. MUSIC III soon followed, and then MUSIC IV (1962), MUSIC IVF (written by Arthur Roberts in 1965) and MUSIC IVBF (1966/67).

Meanwhile, beginning in 1964, John Chowning and Leland Smith — two researchers at Stanford University’s computer department — were working on a new version that, with gruesome logic, they called MUSIC V. At the same time, Chowning was researching the localisation of sounds and applying vibrato to the signals generated by his digital oscillators. Legend has it that he accidentally programmed a modulation that was larger and faster than he had intended, and discovered that the result was not vibrato, but a new tone unlike anything he had heard before. Apparently, Chowning was unaware that he had stumbled across a technique used to broadcast radio transmissions and, by modulating a signal in the audio band, he was the first person to hear what we now call FM synthesis. He soon discovered that this was a powerful way to create new sounds and, in 1966, became the first person to compose and perform a piece of music (‘Sabelithe’) using FM as the sound generator.
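The mathematics behind what he heard is compact. In the form Chowning later set out in his 1973 paper, ‘The Synthesis of Complex Audio Spectra by Means of Frequency Modulation’, a simple FM voice is

\[ y(t) = A \sin\bigl(2\pi f_c t + I \sin(2\pi f_m t)\bigr) \]

where $f_c$ is the carrier frequency, $f_m$ the modulating frequency, and $I$ the modulation index (the peak frequency deviation divided by $f_m$). The result is energy at the sideband frequencies $f_c \pm k f_m$ for integer $k$, with amplitudes governed by the Bessel functions $J_k(I)$, so sweeping $I$ over the course of a note reshapes the whole spectrum; that is why so much timbral variety falls out of so little computation.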

Chowning continued to develop FM, adding functions that allowed him to control the evolution of the sounds he created and, in 1971, Mathews’ colleague John R Pierce suggested that he should create a range of conventional sounds such as organs and brass to demonstrate that it could provide the basis of a commercial product. Chowning did so, and persuaded Stanford’s Office of Technology Licensing to approach companies for him. How do you think Hammond and Wurlitzer feel now, knowing that they turned down FM? Pretty stupid, I imagine, but turn it down they did, as did the other American manufacturers that the university approached. So Stanford contacted the Californian office of a well-known Japanese manufacturer of motorbikes, powerboat engines and construction equipment. Yamaha duly despatched a young engineer named Ichimura who, after a brief evaluation, recommended that the company investigate Chowning’s system further. Consequently, the company negotiated a one-year licence that it believed would be sufficient to enable it to decide whether the technology was commercially viable. And so it was that, in 1973, Yamaha’s organ division began the development of a prototype FM monosynth.

Meanwhile, Chowning had been working on MUSIC 10 (yet another version, this time for the PDP10), but Stanford failed to see the value of this and, after a parting of the ways, Chowning moved to Europe to continue his research. This later proved to be a significant embarrassment to the university because, when Yamaha approached it to negotiate an exclusive commercial licence for FM, Chowning was no longer a member of the faculty. Happily, Stanford knew when to eat humble pie, and reinstated Chowning as a Research Associate at the Center for Computer Research in Music and Acoustics that he had helped found. Chowning then assigned the rights in FM to the university, which duly agreed a licence with Yamaha.

Chowning later received royalties on the sale of all of Yamaha’s FM synthesisers, and the university is rumoured to have collected a substantial income in fees. Whatever the precise figures may be, it’s perhaps no coincidence that CCRMA was later re-housed in its own, expensive, purpose-built facility.

John Chowning (left) with Patte Wood and Jonathan Berger, 1984. Photo: photographer unknown

Universal Computing

Having helped found CCRMA more than two decades before, Chowning retired from it in 1996 and was made Professor Emeritus at Stanford University. He was suffering from hearing problems and realised that he could no longer be an effective critical listener for his students. He had also fought and won what he has since described as “an exhausting battle” with Stanford regarding staffing levels, so he decided to allow the Center to pass to the next generation. It’s evident that he’s very proud of the job his successors have done, because CCRMA is now regarded as a prime example of the success of a multi-disciplinary teaching environment.

Interestingly, he admits that this occurred organically rather than as the result of a long-term plan. “At the start, I had access to Stanford’s AI lab, which included philosophers, engineers, computer scientists, mathematicians and linguists, all working in an incredibly rich environment. So when I needed engineers, when I needed computer science skills, when I needed knowledge about the auditory system and how the brain processes music, I found people who could guide me and teach me. Those who have followed me at CCRMA have perpetuated this, and alongside the likes of computer scientists and engineers they’ve added people such as a neuroscientist who is the modern equivalent of the medical scientist who answered my questions nearly 50 years ago. The students at CCRMA have access to this rich resource and they profit enormously.”

This model has now been adopted more widely within Stanford University. Twenty years ago, the degree course with the most students was history, but today it’s computer science. Consequently, departments such as classics and languages have been declining, so the university created new degrees designated CS-x, where the CS is computer science and each student decides what the ‘x’ is. As a result, you can now take courses in which computer science is combined with subjects such as classics, or medicine, or music or history. The CS element teaches the student how to deal with large amounts of data and how to apply processing skills within the traditional fields and, as a consequence, the other departments are beginning to flourish again, and are doing things that are new and surprising both to the faculty and to the students. Clearly, CCRMA’s legacy extends well beyond electronic music.

One of the highlights of the Festival was a performance by John and Maureen Chowning of his work ‘Voices’. Photo: Histeria

Coming Together

As well as being a notable inventor and researcher, John Chowning is primarily a composer, and the Vox Festival concluded with an evening concert of four Chowning compositions — ‘Turenas’ (1972), ‘Stria’ (1977), ‘Phonê’ (1981) and ‘Voices v3’ (2011) — reproduced through a 4.1 quadraphonic sound system. I asked him whether he felt that these were relevant to a young audience that may have been more likely to know the music of Nick Cave than that of John Cage. “As you know”, he replied, “I’ve been very much interested in how the auditory system responds to stimuli, and what kinds of sounds light up the pleasure centres of the brain. I’m speaking of the surfaces of the sounds themselves, not the structures and compositional aspects in which we use them; what we tend to accept, and what the ear tends to reject. Of course, that’s partly a cultural thing, but the different strands of electronic music — club music, dance music and the more traditional electronic art music — have to some extent come together in recent years, and I think that that’s healthy. So, when Maureen and I perform ‘Voices’, the surface allure of the piece is such that it’s accessible to people who are not familiar with this type of music. At CCRMA now, there’s no great division between what people are interested in, and the influences flow both ways. It’s a very interesting time for music, and the tools that we all use are now basically the same: Max/MSP, or Ableton Live, or any number of other platforms that are available. This means that there’s an interesting relationship between the various musical subcultures, which makes things very lively in the academic world, as you can imagine.”

The Vox Festival & The Avant Garde

As keynote speaker at the Vox Festival, John Chowning was central to events. Ben Houge explained: “We organised three workshops for the morning, and John’s lecture and demonstration, ‘Composing From The Inside Out’, began the afternoon’s events. He talked about the history of computing as it relates to music, and described a lot of the areas of his research — not just conventional FM synthesis, but using FM for modelling formants, and things such as locating sounds in a virtual space. He also described the software that he had used to compose the piece ‘Voices’ for his wife, Maureen, and discussed the elaborate cueing and rehearsing system he had set up for this. While he was talking about it he said, ‘If there were a soprano in the audience we could demonstrate this,’ and then Maureen started singing. It was really cool!”

However, the Festival also provided a talking shop and showcase for many other cutting-edge developments in electronic music. “We’re always thinking about things like networked music, real-time sample manipulation, and on-the-fly score manipulation at the school,” explained Houge, “so I founded the ‘App Choir’ ensemble last year to explore these ideas in an environment that also teaches students programming skills. At the festival, we premiered my piece based on Elisa Gabbert’s ‘Ornithological Blogpoem’, performing this using a choir of eight students reading from mobile devices. These devices were networked together, and the timings were controlled by a Web application that told each person when to sing the next phrase. I had a separate app that controlled the piece’s progression, and audience members could visit a Web page that allowed me to play back the singers’ voices on their phones. The original poem is a surreal account of chirping birds, and the apps on the audience’s devices cranked the performers’ voices higher and higher in pitch until eventually they filled the hall with a sound like chirping birds.”

Lori Forsyth performs as part of Berklee’s App Choir. Photo: Histeria
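As far as I know the App Choir’s code isn’t public, so the following is a purely hypothetical sketch of the central idea Houge describes: a conductor process cueing each networked singer’s next phrase at loosely randomised times. The names, counts and timings are all invented:

```python
import random
import time

SINGERS = [f"singer_{n}" for n in range(1, 9)]  # eight networked devices
PHRASES = ["phrase_a", "phrase_b", "phrase_c", "phrase_d"]

def send_cue(singer, phrase):
    """Stand-in for the web push that tells a device to display a cue."""
    print(f"{time.strftime('%H:%M:%S')}  cue {singer}: sing {phrase}")

def run_section(duration_s=20.0, min_gap=1.0, max_gap=4.0):
    """Cue singers one at a time, at loosely randomised intervals,
    the way a conductor app might pace an aleatoric section."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        send_cue(random.choice(SINGERS), random.choice(PHRASES))
        time.sleep(random.uniform(min_gap, max_gap))

run_section(duration_s=10.0)
```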

Clearly the avant garde is alive and, if not exactly kicking, then flapping. When I asked where these ideas had come from, Houge’s answer arrived from an unexpected direction. “I guess that much of my thinking is related to two areas: the history of electronic music and my video-game career. The early paradigm for computer-based games was to take a piece of music and loop it. Later on, if a scene changed, we would fade one sample out and replace it with another but, for most of my career, I’ve been asking myself, ‘What can we do that’s more interesting than that?’ Then I realised that there’s so much knowledge we can borrow from other areas of music, whether it’s traditional algorithmic music from the early computer music domain, or aleatoric music from the likes of Stockhausen, John Cage and Pierre Boulez.

“In video games, you have to make maximum use of your resources. You don’t want to take a whole CD of sound and load it into your computer’s memory because that’s going to be inefficient. So you devise new ways to get the most mileage out of sounds. For example, if you can take a sound and play it at different pitches or combine it with other layers, you can generate music that lasts a lot longer and remains more interesting. So now think of an open-form composition such as Stockhausen’s Klavierstück XI. This piece is a bunch of different cells, and you can play them in any order. But when you play any one of the cells for the third time, the piece ends. Stockhausen realised that this was invisible to the audience so, as far as they were concerned, he might as well have scored each performance as a different but conventionally linear piece of music. So he suggested that the piece should be performed multiple times in each concert programme, allowing people to appreciate the concept and the variations that occurred from performance to performance. It was a ridiculous request — although far from the most ridiculous he ever made! — but he had realised that he had developed a system and had no practical use for it. It was a solution looking for a problem. Much later, I realised that this is one solution to the problem in video games, where we need music that can generate different variations of itself in different contexts, and respond to different types of events in different ways. So I see Stockhausen’s avant garde work as a practical precursor to modern game composition, and no longer an abstract exercise.”
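The rule Houge summarises is simple enough to state as code. A minimal sketch of Klavierstück XI’s open form as he describes it (the cell names are placeholders, though the real score does contain 19 fragments):

```python
import random

CELLS = [f"cell_{n}" for n in range(1, 20)]  # Klavierstück XI has 19 fragments

def perform():
    """Play cells in random order; the piece ends the moment any one
    cell has been played for the third time (Houge's summary of the rule)."""
    counts = {cell: 0 for cell in CELLS}
    order = []
    while True:
        cell = random.choice(CELLS)
        counts[cell] += 1
        order.append(cell)
        if counts[cell] == 3:
            return order  # third repetition of any cell ends the piece

# Two 'performances' of the same score produce different linear pieces:
print(len(perform()), len(perform()))
```

Run twice, the same score yields two different linear realisations of the same material, which is precisely the property that adaptive game scores need.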

Oh, Valencia!

Why would a music college from the East Coast of the USA want to set up the home of its first master’s programmes in a coastal city just across the water from Ibiza? I asked one of its senior academics, Professor Ben Houge, who explained: “Having sites in multiple countries is something that a number of universities are now exploring. For example, the last time I was in Shanghai, I visited the new NYU campus. Education is now a kind of a franchise; New York University has an official campus in Abu Dhabi as well as the one in China, and I guess that reflects the increasingly global nature of education. Some years ago, Berklee decided that it wanted to have a physical presence in Europe, and I think that Spain was chosen in part because of its language, which links it to Latin and South America. This is important because we have a lot of students who come from that part of the world. Perhaps more importantly, it’s a city with a rich culture and musical heritage, and it’s on the Mediterranean, which is a geographic and cultural crossroads for Europe, North Africa, the Middle East, and South America. The campus started with three programmes: contemporary performance, global entertainment and music business (GEMB), and scoring for film, TV and video games. A year later it added a fourth — music production technology and innovation (MPTI) — and, although most of my career has been involved with developing audio for video games, I was brought over from Boston to help get the MPTI programme off the ground.”

Ben Houge (right) with John Chowning. Photo: Histeria

Berklee’s Valencia campus is housed in a futuristic building that is oddly reminiscent of the stormtroopers’ helmets from Star Wars. As you’d expect, the campus is equipped with state-of-the-art recording and control rooms.

John Chowning was clearly impressed with Berklee’s Valencia campus: “It’s in a huge arts centre, and it’s the most amazing place. The College has taken a sizeable part of a building that also includes a concert hall and large film-scoring spaces, and the facility itself is extremely well done; the sound rooms, the equipment and the isolation are all first-class. Its graduates should be quite at home walking into any recording or sound-design studio.”

Ben Houge concluded, “We see the campus as a hub to launch the careers of our most musically talented international students, and the Vox Festival highlighted lots of important parts of our college programme — traditional studio production, recording and mixing, an interest in electronic dance music, as well as the weird stuff that is my area of expertise, Max/MSP and things like that. But more than that, it was fun, a real passion project, and it was really cool to bring John back to Valencia, and have a good party. We’ve talked about repeating it as an annual event, and I already have some ideas for next year’s festival. If we do it, who knows what might happen?”