1. 

Also has a nice definition of what recurrent NNs are.

  1. 

    This is a nicely written post. The main thing I’m left wondering is: do RNNs, as a specific method, buy you anything here? The basic generative setup he’s using is sequential, note-by-note (or group-by-group), data-driven probabilistic music generation. The main choice in such a setup is which method you use to build the probabilistic model. A simple and common way is with Markov chains; here RNNs are used instead. But the end results sound, to my ears, a lot like a typical Markov generator, despite the method being far more complex.

    In some domains RNN-based generators seem to reproduce structure that Markov-chain generators don’t. But here I’m not seeing it so far.
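The Markov-chain baseline being compared against is concrete enough to sketch. A minimal order-1 note-by-note generator might look like this (the melody data and function names here are hypothetical, purely for illustration — the post's actual RNN setup is more complex):

```python
import random
from collections import defaultdict

def train_markov(notes, order=1):
    # Map each length-`order` context to the list of notes that followed it
    table = defaultdict(list)
    for i in range(len(notes) - order):
        context = tuple(notes[i:i + order])
        table[context].append(notes[i + order])
    return table

def generate(table, seed, length, rng=None):
    # Repeatedly sample a next note conditioned on the last `order` notes
    rng = rng or random.Random(0)
    out = list(seed)
    order = len(seed)
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:  # context never seen in training: stop early
            break
        out.append(rng.choice(followers))
    return out

# Toy training melody as MIDI pitch numbers (hypothetical data)
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
table = train_markov(melody, order=1)
print(generate(table, seed=[60], length=8))
```

The contrast with an RNN is that the Markov table conditions only on a fixed-length context; an RNN's hidden state can in principle carry longer-range structure, which is exactly what the comment says it is not hearing in the output.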