AlphaGo versus computer playback of music
We all know that recently a computer beat one of the best human Go players. Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines is an interesting read about how this might change our economy. This post is not about economics and mathematics; for that we have mathbabe, who does a far better job than I ever could.
No, I would like to raise a question that came up while reading that article:
Can we apply machine learning techniques to computer playback of digital music scores?
After all, I think one of the reasons why chess and Go are so susceptible to techniques from artificial intelligence is that there is a very rigid system behind them, which humans approach not by brute force, but by pattern recognition. It is precisely because AlphaGo tries to mimic this that it was so successful.
I wonder whether playing music might be of a similar nature: there is a very rigid system for music notation (just like the rules of chess), and humans are very good at turning this rigidity into something more pleasing than your average MIDI playback. What would happen if machine learning techniques were applied to digitised scores and recordings of classical music (which would also need to be aligned, another non-trivial problem)?
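That alignment problem, by the way, is classically attacked with dynamic time warping (DTW). Just to make the idea concrete, here is a toy sketch: the "score" and "performance" below are hypothetical one-dimensional feature sequences (note onset times, in this example), invented for illustration; a real system would align much richer features, such as chroma vectors extracted from audio.

```python
def dtw(score, performance):
    """Align two feature sequences with dynamic time warping.

    Returns the total alignment cost and an optimal alignment path
    as a list of (score_index, performance_index) pairs.
    """
    n, m = len(score), len(performance)
    INF = float("inf")
    # cost[i][j] = minimal cost of aligning score[:i] with performance[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(score[i - 1] - performance[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a score event
                                 cost[i][j - 1],      # skip a performance event
                                 cost[i - 1][j - 1])  # match the two events
    # Backtrack through the cost matrix to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, (i, j) = min((cost[i - 1][j - 1], (i - 1, j - 1)),
                        (cost[i - 1][j], (i - 1, j)),
                        (cost[i][j - 1], (i, j - 1)))
    return cost[n][m], path[::-1]

# Mechanical score onsets versus a performance that "rushes" slightly:
score_onsets = [0.0, 1.0, 2.0, 3.0]
performance_onsets = [0.0, 0.9, 1.8, 2.9]
total_cost, alignment = dtw(score_onsets, performance_onsets)
```

On this tiny example the optimal path simply matches the events one-to-one, and the residual cost measures how far the performance's timing deviates from the score; it is exactly those deviations a learning system would have to model to produce an expressive, rather than mechanical, playback.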
Would it become possible to write a new piece of music (let's say my favourite musical form: a fugue), and let a computer generate a playback for this in the way Glenn Gould would play it? Including humming?
Computer composition is a different question altogether, one that I didn't want to address here.