One passion our group shares is music. Each member of our team enjoys playing an instrument in their leisure time. We discussed how, outside of the time we get with our music instructors, it is difficult to know how well we are playing when we practice. We can't tell if we're playing notes off key, playing off beat, or, simply put, whether we're reinforcing the wrong habits. By building an application that gives users feedback on their singing, we learned a variety of skills: how to integrate machine learning libraries into web applications, and how to use Django to connect the backend with the frontend. Overall, we very much enjoyed this experience of building something that our whole team is passionate about.

What it Does

Users can choose any song on YouTube and sing along with the background music. The app then reports at which timestamps the user was off key and by how much their pitch deviated.

What We Used

The website is built with React. We integrated the YouTube Data API from Google Cloud Platform so users can choose YouTube videos on our website, and we built an audio recorder so users can record themselves singing along to their favourite pieces.
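As one concrete sketch of how the video lookup could work server-side, the snippet below builds a YouTube Data API v3 search request using only Python's standard library. The query and API key are placeholders, and sending the request is left to the caller; this only illustrates the endpoint and parameters involved, not our exact code.

```python
from urllib.parse import urlencode

YOUTUBE_SEARCH_ENDPOINT = "https://www.googleapis.com/youtube/v3/search"

def build_search_url(query: str, api_key: str, max_results: int = 5) -> str:
    """Build a YouTube Data API v3 search URL for a song query."""
    params = {
        "part": "snippet",        # return title, channel, thumbnails, etc.
        "q": query,               # the song the user typed in
        "type": "video",          # only videos, not channels or playlists
        "maxResults": max_results,
        "key": api_key,           # placeholder: your Google Cloud API key
    }
    return f"{YOUTUBE_SEARCH_ENDPOINT}?{urlencode(params)}"
```

The resulting URL can then be fetched with any HTTP client, and the returned video IDs fed to the embedded player.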

To analyze the selected song's audio data, we start by converting the YouTube video to MP3. We then use Python machine learning libraries for feature extraction to separate the vocals from the instrumentals.
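The conversion and separation steps can be sketched as shell commands assembled in Python. `ffmpeg` is a standard way to extract audio, and Spleeter's two-stem model is one commonly used vocal separator, but the specific tools and flags here are our illustration of the pipeline rather than the project's fixed recipe; the commands are only constructed, not executed.

```python
def extract_audio_cmd(video_path: str, mp3_path: str) -> list[str]:
    """ffmpeg command: drop the video stream (-vn) and encode the audio as MP3."""
    return ["ffmpeg", "-i", video_path, "-vn", "-ab", "192k", mp3_path]

def separate_stems_cmd(mp3_path: str, out_dir: str) -> list[str]:
    """Spleeter command: split the track into vocals and accompaniment stems."""
    return ["spleeter", "separate", "-p", "spleeter:2stems", "-o", out_dir, mp3_path]

# Each command could then be run with subprocess.run(cmd, check=True).
```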

To connect the data analytics with the front end, Django was used to bridge Python and React, synchronizing the backend analytics with the front-end development. Django was also important because it let us integrate Python code into the web application, which enabled us to use Python's powerful machine learning libraries to further our analysis.
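To make the bridge concrete, here is a hedged sketch of the kind of payload a Django endpoint could hand back to React. The field names are our invention for illustration, not the project's actual schema; in a real Django view this dict would simply be wrapped in django.http.JsonResponse.

```python
def feedback_payload(timestamps, artist_notes, user_notes) -> dict:
    """Assemble per-timestamp pitch feedback for the frontend to render.

    Field names are illustrative; a Django view would return this
    dict via JsonResponse(payload).
    """
    return {
        "feedback": [
            {"time": t, "artist_note": a, "user_note": u, "off_pitch": a != u}
            for t, a, u in zip(timestamps, artist_notes, user_notes)
        ]
    }
```

Keeping the analysis behind a JSON endpoint like this is what lets the React frontend stay independent of the Python machine learning code.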

To analyze the user’s voice recording, we built a feature that lets the user record on the web application while the background music of their chosen song plays. A machine learning model then predicts the frequency values of both the artist’s and the user’s vocals. The frequencies are converted to pitch notes, and the difference between the two is examined. This analysis of pitch and frequency yields a rating of whether the user was off pitch at each specific time.
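The frequency-to-note conversion itself is standard equal-temperament arithmetic. The sketch below is our reconstruction of that step, not the project's exact code: it maps a frequency to the nearest note name (A4 = 440 Hz) and measures how many semitones apart two frequencies are.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_note(freq_hz: float) -> str:
    """Name of the equal-temperament note nearest to freq_hz (A4 = 440 Hz)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))  # 69 is MIDI A4
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

def semitone_offset(user_hz: float, artist_hz: float) -> float:
    """Signed distance in semitones between the user's and the artist's pitch."""
    return 12 * math.log2(user_hz / artist_hz)

# freq_to_note(440.0) -> "A4"; a user singing 466.16 Hz against a 440 Hz
# reference is about +1 semitone sharp.
```

Comparing the two note sequences timestamp by timestamp is what produces the per-moment off-key feedback described above.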

Finally, a comparison of the pitch notes of the artist’s vocals and the user’s vocals is presented back to the user as musical notes, so the user can see their shortcomings and improve on them.
