Inspiration

Every member of this team is not only a computer scientist but also a musician. We have a desire to push innovation, and we saw an opportunity to create something truly useful in a field we are all passionate about.

What it does

M-Net analyzes a picture of a piece of sheet music, detects each note and rest on the page, and generates a MIDI file that plays back the written music.

How I built it

Utilizing TensorFlow and tflearn, we implemented a version of a Fast R-CNN for object detection and a CNN for note value determination. The output of the note-value classifier was then passed to mingus (a Python MIDI library), which generated the MIDI file itself.
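The final mingus step relies on standard MIDI note numbering. The library-free sketch below (all names and the detection format are our own illustration, not M-Net's actual code) shows how classified note labels map to MIDI note numbers before a MIDI library assembles the file.

```python
# Illustrative only: map classified note names to MIDI note numbers.
# MIDI convention: middle C (C4) is note number 60.
NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def to_midi_number(name: str, octave: int) -> int:
    """number = 12 * (octave + 1) + semitone offset, so C4 -> 60."""
    return 12 * (octave + 1) + NOTE_OFFSETS[name]

# Hypothetical classifier output for one staff: (name, octave, beats).
detections = [("C", 4, 1.0), ("E", 4, 1.0), ("G", 4, 2.0)]

events = [(to_midi_number(n, o), d) for n, o, d in detections]
print(events)  # [(60, 1.0), (64, 1.0), (67, 2.0)]
```

A library such as mingus then turns events like these into actual MIDI file bytes.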

Challenges I ran into

We were unable to automate the data hand-off between components and thus had to pass it manually. We also attempted a second CNN for note duration, but ultimately did not have enough training data; this prompted an update to the Fast R-CNN instead.
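Automating that hand-off could be as simple as serializing each stage's output. This is a hypothetical sketch, not M-Net's actual format; the field names are our own illustration.

```python
# Illustrative only: serialize detector output so the note-value
# classifier can consume it without a human copying data between stages.
import json

detections = [
    {"box": [12, 40, 28, 64], "label": "quarter_note"},
    {"box": [34, 40, 50, 64], "label": "half_rest"},
]

# Stage 1 writes its results as JSON ...
payload = json.dumps(detections)

# ... and stage 2 reads them back unchanged.
received = json.loads(payload)
assert received == detections
```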

Accomplishments that I'm proud of

We were able to generate sufficient training data and implement a Fast R-CNN in less than 24 hours, with little to no prior experience in machine learning.

What I learned

We learned how frustrating not having enough training data can be. We also learned how to take a Python module written for an earlier version of the language and update it to be compatible with the current version.
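Porting a module like that typically means fixing a handful of recurring incompatibilities. A small example of the kind of change involved (not the actual edits we made):

```python
# Python 2 code commonly found in older modules:
#   print "value:", x
#   result = x / 2        # integer division in Python 2
# Python 3 equivalents:
x = 7
print("value:", x)        # print is a function in Python 3
result = x // 2           # // keeps the old floor-division behavior
assert result == 3
```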

What's next for M-Net

We hope to fix the automation issues, with an eventual upgrade to support handwritten music. Imagine being a musician, having an idea, and being able to scribble it on some paper and hear it within seconds. That's where we hope to take M-Net.
