Inspiration
We share ABRSM's goal of making music education more accessible to learners and underrepresented groups across the world. Examiners are often overwhelmed, and students practising alone are often unguided, so we wanted to build a tool that gives feedback grounded in objective analysis, similar to the feedback one would get from an ABRSM examiner, in a form that is easy for users to check and understand.
What it does
It analyses the raw audio, extracts metadata about the song and its individual notes, feeds that data to our models, and returns a score and a review comment.
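To illustrate the first step, here is a toy sketch of per-note pitch extraction. Our prototype uses librosa for this; the version below is a deliberately simplified stand-in (a synthetic tone and a zero-crossing pitch estimate, names and numbers our own) so the idea is visible without any audio files or dependencies.

```python
import math

SR = 22050  # sample rate in Hz (librosa's default)

def sine(freq, dur, sr=SR):
    """Synthesise a pure tone -- stands in for audio loaded from a recording."""
    return [math.sin(2 * math.pi * freq * n / sr) for n in range(int(sr * dur))]

def estimate_pitch(samples, sr=SR):
    """Crude fundamental-frequency estimate via positive-going zero crossings.
    (A real pipeline would use a proper pitch tracker such as librosa's pyin;
    this toy version only works for clean single tones.)"""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    duration = len(samples) / sr
    return crossings / duration  # upward crossings per second ~ f0

note = sine(440.0, 1.0)            # one second of A4
print(round(estimate_pitch(note)))  # close to 440
```

In the real system, estimates like this (pitch, onset time, duration) are gathered per note and become the metadata passed downstream to the models.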
How we built it
We used primarily Python for its wealth of audio and music libraries (librosa, music21, etc.) to run these analyses. The extracted metadata is sent to a regression model trained on our dataset of 300+ scores to predict a mark, and then that mark, together with the more detailed note-by-note analysis, is sent to a trained LLM to generate the ABRSM-style feedback.
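The scoring step can be sketched as a simple regression from audio-derived features to an examiner-style mark. This is a minimal stand-in, not our actual model: it fits one hypothetical feature (fraction of notes on pitch) to a mark with closed-form least squares, and the training pairs below are made up for illustration, not taken from our 300+ score dataset.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical (pitch-accuracy, mark) training pairs -- invented numbers.
train = [(0.60, 18), (0.75, 21), (0.85, 24), (0.95, 27)]
a, b = fit_line([x for x, _ in train], [y for _, y in train])

def predict(pitch_accuracy):
    """Predicted mark for a new performance."""
    return a + b * pitch_accuracy

print(round(predict(0.9), 1))
```

The real model uses more features than one, but the shape of the pipeline is the same: numeric features in, predicted mark out, with the mark and the note-by-note detail then handed to the LLM for the written comment.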
Challenges we ran into
We didn't have much experience with audio tools, so we had to learn them from scratch: how to analyse raw audio data, and then how to train models on that data, all in the space of 24 hours.
Accomplishments that we're proud of
We are proud that we built a functional prototype implementing the bulk of our ideas, and that we have taken real steps towards democratising musical learning across the globe.
What we learned
How to work with unfamiliar audio tools, and which models work best with different types of data.
What's next for Trainstrike
Making our models more robust, improving the user experience, and developing less computationally intensive models that users can easily run on their personal computers.