Challenge: ABRSM - Generating Feedback on Music Performances

  1. The Problem: The "Feedback Gap" in Music Education

For millions of students worldwide, ABRSM exams represent a significant milestone. The feedback from a graded exam is invaluable, but it's a single snapshot in time. Between lessons and exams, students practice for hours, often without specific, expert guidance on the nuances of their performance.

This creates a "Feedback Gap":

Students aren't always sure if they are correcting mistakes effectively or interpreting the music as intended.

Teachers have limited time to address every technical detail during a lesson.

Examiners provide expert assessment, but this feedback isn't available on-demand during a student's daily practice.

The Core Question: How can we provide every music student with access to scalable, consistent, and ABRSM-aligned feedback, anytime they practice?

  2. Our Solution: Aura AI

Aura AI is a proof-of-concept tool that acts as a virtual music examiner. It analyzes an audio recording of a performance against its digital score to provide instant, detailed, and constructive feedback in the trusted language of an ABRSM examiner.

It bridges the gap between raw practice and expert assessment.

Value Proposition:

For Students: Empowers effective practice with immediate, objective feedback on pitch, rhythm, and dynamics.

For Teachers: Provides data-driven insights into student progress, allowing lesson time to be focused on musicality and interpretation.

For ABRSM: Presents a scalable, AI-powered tool to support examiners and enhance the learning ecosystem for its global community.

  3. How It Works: The Technology Pipeline

Aura AI uses a Music Information Retrieval (MIR) pipeline to transform an audio file into a structured feedback report.

Input: The user provides two files:

An audio recording of their performance (.wav, .mp3).

The corresponding musical score in a digital format (.musicxml).

Audio-to-Score Alignment: This is our core technical innovation. We use Dynamic Time Warping (DTW) to create a precise mapping between the performed audio events and the notes in the MusicXML score. Every single note played is synchronized with its written counterpart.

Performance Feature Extraction: Once aligned, we conduct a multi-faceted analysis of the audio:

Pitch Analysis: We extract the fundamental frequency (f0) of each note using a pitch detection algorithm and compare it to the score's pitch.

Rhythm Analysis: We analyze the onset (start time) and duration of each performed note, comparing it to the score's rhythmic values to detect rushing, dragging, or incorrect note lengths.

Dynamics Analysis: We calculate the Root Mean Square (RMS) energy of the audio signal to quantify loudness. This allows us to check if dynamic markings (p, f, cresc, dim) were followed.

LLM-Powered Feedback Generation: The raw analytical data (e.g., "Pitch accuracy: 92%", "Measures 5-8 tempo deviation: +8%", "Dynamic contrast ratio: 1.5:1") is fed into a Large Language Model (LLM). We use a carefully engineered prompt that instructs the model to act as an ABRSM examiner, translating the quantitative data into qualitative, constructive feedback.
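The prompt-engineering step can be sketched as a small builder function. The wording, metric keys, and the helper name `build_examiner_prompt` are illustrative, not the exact prompt used in our prototype.

```python
def build_examiner_prompt(metrics):
    """Turn raw analysis metrics into an examiner-style instruction."""
    lines = "\n".join(f"- {k}: {v}" for k, v in metrics.items())
    return (
        "You are an ABRSM piano examiner. Using the measurements below, write a "
        "short, constructive assessment in the tone of an official mark sheet. "
        "Comment on pitch, rhythm, and dynamics, and suggest one practice focus.\n\n"
        + lines
    )

metrics = {
    "pitch accuracy": "92%",
    "tempo deviation (mm. 5-8)": "+8%",
    "dynamic contrast ratio": "1.5:1",
}
prompt = build_examiner_prompt(metrics)

# The prompt is then passed to an instruction-tuned model, e.g. via
# transformers.pipeline("text-generation", ...)(prompt, max_new_tokens=200)
```

Keeping the quantitative data in a structured list at the end of the prompt makes it easy for the model to ground each comment in a specific measurement.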

  4. Addressing the Judging Criteria

Creativity & Innovation (25%)

Aura AI's innovation lies in the synthesis of high-precision performance analysis with domain-specific language generation. While note-checkers exist, our tool is unique because it:

Analyzes Musicality: It goes beyond right/wrong notes to evaluate expressive elements like dynamics and tempo, which are central to ABRSM's criteria.

Speaks the Right Language: The LLM generates feedback that mirrors the tone, terminology, and constructive nature of a real ABRSM report, making it immediately useful.

Creates a Holistic Tool: It bridges the gap between objective signal processing and subjective, human-centric musical assessment.

Technical Execution

Our proof-of-concept is built on a solid, functional Python foundation, using industry-standard libraries.

Reliable & Robust: We use librosa for audio processing, partitura for parsing and working with MusicXML scores, and the transformers library for our LLM.

End-to-End Functionality: The provided code successfully demonstrates the entire pipeline: loading data, performing analysis (simulated for speed), and generating a high-quality feedback report.

Well-Structured Code: The script is modular, well-commented, and clearly demonstrates the technical viability of our approach.

Impact & Usefulness

Aura AI directly addresses a real-world need in music education with massive potential for impact.

Scalability: The automated nature of the tool means it can provide feedback to thousands of students simultaneously, something human examiners cannot do.

Accessibility: It democratizes access to expert feedback. Any student with a device to record themselves can benefit.

Practical Value: It helps students prepare more effectively for exams, leading to better outcomes and a more rewarding learning journey. For ABRSM, it's a prototype for a tool that could support examiners and standardize preliminary technical assessments.

Presentation

Our concept is communicated clearly and effectively. Our live demo will showcase:

The Use Case: We'll upload an audio file of a student playing a Grade 1 piano piece and its score.

The Analysis: We'll show the raw analytical output from our script.

The "Aura" Report: We will present the final, LLM-generated feedback report, highlighting how the technical data was transformed into a professional, easy-to-understand assessment.

  5. Future Roadmap

This hackathon prototype is the first step.

Real-Time Feedback: Evolve the tool into a web app that provides feedback as the student plays.

Multi-Instrument Support: Expand the analysis model to support a wider range of instruments, including strings and woodwinds.

Teacher/Examiner Dashboard: Create a portal for educators to track student progress and for examiners to review AI-assisted pre-assessments.

Thank you.

Built With

  • ai
  • audio
  • hugging-face
  • librosa
  • partitura
  • python
  • transformers