Musical instruments that play along: ML for music generation, interaction, and performance



Predictive interactions between a performer, an instrument, and sound.


Research on AI/ML models that generate music and sound has expanded dramatically in recent years. However, despite media attention towards musical AI research, music involving AI is rarely heard in concerts apart from a few special research events. This is partly due to a lack of musical ML systems designed for music performers.
In this project, you will help to change this by developing a new ML model that can interact with a human in live performance. This model could connect directly to existing music technology components such as digital audio workstations (DAWs), or be a self-contained computer musical instrument, touchscreen app, or custom sensor-based device for musical expression.
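To give a flavour of the predict-and-respond loop such a model might implement, here is a minimal sketch in Python. All names are hypothetical and the model is deliberately simple: a first-order Markov model that learns pitch transitions from a performer's recent notes and proposes a short response phrase. A real project would likely use a deep learning model and connect to a DAW or instrument via MIDI or OSC; this only illustrates the interaction pattern.

```python
import random
from collections import defaultdict

class NotePredictor:
    """Hypothetical sketch: learns pitch transitions and generates replies."""

    def __init__(self):
        # Maps each MIDI pitch to the list of pitches observed after it.
        self.transitions = defaultdict(list)

    def listen(self, notes):
        """Update transition counts from a sequence of MIDI pitch numbers."""
        for a, b in zip(notes, notes[1:]):
            self.transitions[a].append(b)

    def respond(self, last_note, length=4, rng=random):
        """Generate a short phrase by sampling learned transitions."""
        phrase, current = [], last_note
        for _ in range(length):
            options = self.transitions.get(current)
            if not options:
                break  # nothing learned from this pitch yet
            current = rng.choice(options)
            phrase.append(current)
        return phrase

# A performer plays a C-major fragment; the model listens, then replies.
performer = [60, 62, 64, 62, 60, 64, 65, 64]
model = NotePredictor()
model.listen(performer)
print(model.respond(performer[-1]))
```

In a live setting, `listen` would be called continuously as notes arrive from the performer's instrument, and `respond` would be triggered when the system detects a gap in the performance, with the generated pitches sent back out as sound.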


Requirements
  • Completed coursework or experience in machine learning, AI, or data science. Knowledge/experience in deep learning would be a plus.
  • Experience with Python.
  • Strong interest in music, performance, and creativity.

Keywords: music, performance, creativity, machine learning, artificial intelligence

Updated: 10 August 2021 / Responsible Officer: Head of School / Page Contact: CECS Webmaster