This project uses a sequence-to-sequence LSTM neural network to infer and generate pulse from raw audio of solo jazz drumset recordings. The end-to-end model first converts the recordings into a MIDI representation using a pre-trained RNN for automatic drum transcription; the transcribed data is then fed into a sequence-to-sequence LSTM network to create a real-time generative rhythm device.
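The transcription step ultimately yields symbolic drum events that must be aligned to a temporal grid before they can serve as LSTM input. As a minimal sketch (not the project's actual pipeline), the snippet below quantizes detected onset times onto a step grid, producing a binary sequence of the kind a sequence model could consume; the tempo, grid resolution, and function name are illustrative assumptions.

```python
import numpy as np

def quantize_onsets(onset_times, tempo_bpm=120.0, steps_per_beat=4, n_steps=16):
    """Snap detected drum onsets (in seconds) onto a step grid,
    yielding a binary sequence suitable as sequence-model input.
    Hypothetical helper for illustration only."""
    step_dur = 60.0 / tempo_bpm / steps_per_beat  # seconds per grid step
    grid = np.zeros(n_steps, dtype=int)
    for t in onset_times:
        idx = int(round(t / step_dur))
        if 0 <= idx < n_steps:
            grid[idx] = 1  # mark the nearest grid step as an onset
    return grid

# e.g. hits on beats 2 and 4 of a 120 BPM bar land on steps 4 and 12
print(quantize_onsets([0.5, 1.5]))
```

In a full system each drum voice (kick, snare, hi-hat) would get its own row of such a grid, giving a piano-roll-like matrix per bar.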
By analyzing audio recordings of a particular jazz drummer, this LSTM RNN learns a representation of that drummer's performance style, taking as input the beat locations inferred by a dynamic beat tracking algorithm.
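The dynamic beat tracker itself is not specified here; as a hedged stand-in, the sketch below estimates a beat period from an onset-strength envelope by autocorrelation, which is the core idea behind many tempo/beat trackers. The frame rate, BPM bounds, and function name are assumptions for illustration.

```python
import numpy as np

def estimate_beat_period(onset_env, frame_rate=100, min_bpm=60, max_bpm=180):
    """Estimate the beat period (in frames) from an onset-strength envelope
    via autocorrelation -- a simple stand-in for a dynamic beat tracker."""
    env = onset_env - onset_env.mean()
    # keep only non-negative lags of the full autocorrelation
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    lo = int(frame_rate * 60 / max_bpm)   # shortest plausible beat period
    hi = int(frame_rate * 60 / min_bpm)   # longest plausible beat period
    lag = lo + int(np.argmax(ac[lo:hi + 1]))
    return lag  # frames per beat; tempo in BPM = 60 * frame_rate / lag

# synthetic envelope with a pulse every 50 frames (120 BPM at 100 fps)
env = np.zeros(400)
env[::50] = 1.0
print(estimate_beat_period(env))
```

The beat times recovered this way would then index the drummer's onsets relative to the pulse, which is the representation the LSTM trains on.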
This experimental study examines how well participants can detect an isochronous beat (rhythmic synchrony or congruence) in competing auditory events generated by a coupled-oscillator model. The mathematical model, a network of Kuramoto oscillators, was parameterized by varying its phase coherence, allowing observation of how well participants inferred synchrony from a sequence of auditory stimuli.