We are very used to computers running fixed programs such as DAWs and plug-ins to accomplish specific recording and processing tasks, but in the wider world we’re seeing the rise of ‘Machine Learning’. This is the advanced pattern-recognition technology that underpins facial recognition, junk mail detection, credit card fraud detection, speech recognition, predicting customer interests and, of course, navigating self-driving cars — though I’ll only be convinced by those when they can safely tow a caravan around the narrow lanes of Cornwall while avoiding oncoming tractors and rocks sticking out of the hedge. There are also projects to create virtual doctors capable of accurate diagnosis, based on learning from the data provided by past case histories.
Where Machine Learning differs from conventional computing is that the software is designed to modify its own behaviour based on the analysis of large amounts of data, which then enables it to predict future events with greater accuracy. There are already some music programs and products that dangle a toe into the Machine Learning world, not least Jam Origin’s MIDI Guitar software, though translating guitar playing into reliable MIDI data could get even better if the device or software package could update its approach based on how you play each time you use it. That would be the MIDI guitar equivalent of speech recognition systems that you have to ‘teach’ by reading phrases into them. I suspect that many of the high-end audio restoration product designers are already exploring Machine Learning, but where else might the technology take us in the future?
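To make that distinction concrete, here is a deliberately tiny sketch (not any real product’s algorithm) of software that modifies its own behaviour from data: a toy model that predicts a player’s next note timing, and refines that prediction with every note it hears. The player, intervals and starting guess are all invented for illustration.

```python
# Toy illustration of 'learning from data': the model's behaviour
# (its prediction) changes as it observes more of the player.

def make_timing_model():
    state = {"estimate": 0.5, "count": 0}  # neutral starting guess, in seconds

    def predict():
        return state["estimate"]

    def learn(observed_interval):
        # Incremental mean update: each observation nudges the model,
        # so its future predictions reflect the data it has seen.
        state["count"] += 1
        state["estimate"] += (observed_interval - state["estimate"]) / state["count"]

    return predict, learn

# A hypothetical player who tends to leave ~0.4s between notes:
player_intervals = [0.41, 0.39, 0.40, 0.42, 0.38, 0.40]

predict, learn = make_timing_model()
errors = []
for interval in player_intervals:
    errors.append(abs(predict() - interval))  # how wrong were we before learning?
    learn(interval)                           # now update from the data

# Predictions settle close to the player's habit (around 0.4s),
# and the early prediction errors shrink as the model adapts.
print(round(predict(), 3))
```

A real system would model far more than average timing, of course, but the principle is the same: the program’s behaviour is shaped by the data you feed it, rather than being fixed in advance.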
So far it seems that Machine Learning is most effective when confined to a fairly narrow and specialised range of tasks, so I don’t think we’re looking at a Skynet-style machine takeover as long as we remain in control of the mains plug, but I can see further potential applications in music production. For example, how about a machine that has learned the playing style of a specific musician or composer, with a view to helping you nudge your own performances in the same direction? After digesting as many examples as possible from the target musician or composer, the machine would then need to learn how you play — and the more you play into it, the more it would learn about your musical approach.
I can imagine a scenario where you might record some fairly basic chords and perhaps a one-line melody, then the computer sets about re-interpreting it in the way it thinks Mozart might have done. Or you record a blues guitar solo, then the computer modifies your timing, bends and vibrato to match what Gary Moore might have done. Further in the future we might even see vocal processors that can transform your own vocal to capture both the overall sound and technique of existing singers — though we may have to wait some time before we can feed in Bob Dylan and get out a credible Adele!