PAUL HODGSON: Music Is All In The Mind

Sounding Off
Published October 1998

It's all in the mind... music, that is. A computer can't begin to make music without a human being... or can it? Paul Hodgson explains the current way of thinking at the cutting edge of research into music cognition.

You may have seen Courtney Pine playing with a jazz improvisation program on Tomorrow's World recently. The program played fluid jazz lines in the style of Charlie Parker over a tune composed 10 years after Parker died, and Pine described it as 'bloody brilliant' and the best he had ever heard.

Does this mean, then, that computers will make musicians redundant? Will we, in future, press a button to generate a new composition in a particular style instantaneously, then sit back and listen? Will we dispense with people and improvise with machines? Will machines be creative? I wrote the jazz program, so I'd like to answer these questions and to explain why I wrote it.

The main motivation for writing the program was a lifelong fascination with the way some musicians can improvise music without recourse to any notation. Before the development of written notation, music was learnt and assimilated by ear, with no intermediate stage between hearing the music and playing it. Musicians vary widely in this ability, and a few have a natural facility for improvisation far beyond the ordinary. The main question for anyone interested in how the mind works is: how do they do it?

If we are to understand how the mind works we have three options: we can cut brains up and do neuroscience, we can observe behaviour and do experimental psychology, or we can build models in computers, as I have tried to do. If a computer model does yield some understanding of how the mind works, we can then explore possible applications, to see whether it can be employed in ways that benefit musicians. Could the model hold different cognitive representations of musical structure according to the needs of the user, whether for learning an instrument, generating ideas for a new composition, or performing live?

My program is based upon an approach to music cognition which assumes that melodic structure is learnt in chunks, rather than note by note. In other words, when we hear a melody we like, we segment the tune into selected chunks according to our own preferences and experience. When we learn a new musical language such as Be-bop, we select phrases that we particularly like and reinforce the pathways to those phrases. The program rests on this idea of preferential selection of specific chunks, which are then used in the construction of new phrases. The difficult part is the selection of the phrases and the way in which they are recombined to create new material; brilliant musicians do this in a very organic way that unfolds into a beautifully balanced solo.
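By way of illustration, here is a minimal sketch of the idea in Python. It is emphatically not the actual program: every name in it is invented for this example, fixed-length chunking stands in for proper phrase segmentation, and bare MIDI pitch numbers stand in for real melodic material.

```python
import random

# Toy sketch of chunk-based improvisation: learn weighted melodic chunks,
# then recombine them with a bias towards melodic continuity.
# Pitches are MIDI note numbers; all names here are illustrative only.

def segment(melody, size=4):
    # Fixed-length chunking for simplicity; a real system would cut at
    # perceptual phrase boundaries instead.
    return [tuple(melody[i:i + size])
            for i in range(0, len(melody) - size + 1, size)]

class ChunkLibrary:
    def __init__(self):
        self.weights = {}  # chunk -> preference weight

    def learn(self, melody):
        # Hearing a phrase again reinforces the 'pathway' to its chunks.
        for chunk in segment(melody):
            self.weights[chunk] = self.weights.get(chunk, 0.0) + 1.0

    def improvise(self, chunks=4):
        # Start anywhere, then prefer chunks whose first note lies close
        # to the last note played, so the line connects.
        phrase = list(random.choice(list(self.weights)))
        for _ in range(chunks - 1):
            last = phrase[-1]
            options = list(self.weights.items())
            bias = [w / (1 + abs(c[0] - last)) for c, w in options]
            phrase.extend(random.choices([c for c, _ in options], bias)[0])
        return phrase

lib = ChunkLibrary()
lib.learn([70, 73, 75, 76, 77, 76, 75, 73])  # two invented fragments
lib.learn([77, 76, 73, 70, 68, 70, 73, 75])
print(lib.improvise())
```

Even this crude version produces lines that hang together, because stored preference and pitch proximity do the selecting. What it cannot do, of course, is shape a whole solo the way a great player does.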

Many people would argue that music is something that cannot be reduced to simple phrases, and they would most certainly be right. Charlie Parker believed that music could not be separated from life itself: "Music is your own experience, your own thoughts, your wisdom. If you don't live it, it won't come out of your horn. They teach you there's a boundary line to music. But, man, there's no boundary line to art."

We cannot model the complexity of human experience, however. Current thinking in Artificial Intelligence research is moving towards the view that intelligence can only be understood in terms of a complete organism in its environment. On this view, abstracting music out of a person is far too reductionist and simplistic, given the overall complexity of a person's being in the world; what is needed is a much more holistic approach that tries to understand the complex web of interaction between the person and their world. In fact, some people would go even further and argue that it's simply not possible to 'understand' the world at all; it's much better to meditate and just 'be' with it.

It is certainly true that a relatively simple computer simulation has nothing like the complexity of a human being, let alone a genius musician. It has no imagination or consciousness, no spirituality, no feelings for itself or others, and, more importantly, no ability to develop any of these attributes. It is merely a set of rules for acting on a set of data. What's more, it cannot modify itself of its own volition and adapt to a new environment. If we tried to program it to do this, the program would become so complex that it could never respond in time. And if we took a more biological approach to overcome some of these problems, we would very quickly reach a complexity threshold above which, even if we could produce a good model, we would not understand how it worked. My program does, however, capture something of the music of Charlie Parker, and on some level it can therefore help us understand how the mind carries out musical improvisation.

Beyond gaining theoretical insight into the workings of the human mind, though, does a computer model of human musical creativity have any other uses? Does such a model offer anything to people interested in the use and development of technology for making music?

Musicians have always been both limited and enabled by the technology of their day, be it simple drums, nose flutes or virtual synthesizers. The model can be used to generate new material that might not otherwise have been thought of, in ways that might not have been considered. Future multimedia systems will increasingly build cognitive representations that adapt to the user, developing alongside them as their abilities grow. This does not mean a push-button world where a 'creative machine' does everything and we become progressively more obsolete. It means using technology as a high-level interface for exploring and developing new ways of learning and creating.

Paul Hodgson is a jazz musician and programmer, currently working on new interfaces for making and learning music using modern technology. He can be contacted via SOS.

If you'd like to air your views in this column, please send your ideas to: Sounding Off, Sound On Sound, Media House, Trafalgar Way, Bar Hill, Cambridge CB3 8SQ, UK. Any comments on the contents of previous columns are also welcome, and should be sent to the Editor at the same address.

Email: sos.feedback@soundonsound.com