Musinify is built with the aim of bringing people together from different parts of the world on a common platform where they can have fun, connect, and compose music at the same time. As we all know, the Covid-19 pandemic had a drastic effect on the music industry, and at this moment it is hard to compose songs together with your team.
What it does
On our platform, musicians can get together, choose from a given set of instruments, and compose songs together on a video call, playing entirely through gesture controls for a completely new experience. This solves a real problem for professional music composers and also gives struggling musicians a platform to showcase their talent and share their work on services like Spotify, iTunes, and Amazon Music.
How I built it
The major component of our software is the video-streaming feature, which is built with Agora. Our core feature is gesture control: you play each instrument using the same physical gestures you would use to play it in real life, detected efficiently with OpenCV. There is also an option to change your background, for the effect of a real band on stage, using the DeepLab-v3 segmentation model. The entire frontend is built in HTML/CSS, and all the algorithms and AI implementations are written in Python.
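The gesture triggers can be sketched roughly as follows: compare successive webcam frames and fire a note when the motion inside an instrument's hotspot region crosses a threshold. Our real pipeline runs on OpenCV frames; this numpy-only version, and names like `motion_in_region` and `drum_hotspot`, are illustrative assumptions, not the production code.

```python
import numpy as np

def motion_in_region(prev_frame, frame, region, threshold=30):
    """Return True if the mean absolute pixel change inside `region`
    (y0, y1, x0, x1) exceeds `threshold`. Frames are grayscale uint8."""
    y0, y1, x0, x1 = region
    diff = np.abs(frame[y0:y1, x0:x1].astype(np.int16)
                  - prev_frame[y0:y1, x0:x1].astype(np.int16))
    return diff.mean() > threshold

# Synthetic grayscale frames: a bright "hand" enters the drum hotspot.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:80, 60:100] = 255          # blob where the gesture happens

drum_hotspot = (30, 90, 50, 110)    # (y0, y1, x0, x1)
if motion_in_region(prev_frame, frame, drum_hotspot):
    print("hit: play drum sample")
```

The same check runs once per hotspot per frame, so each instrument simply maps a region of the camera view to a sound sample.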
Challenges I ran into
1) Improving the video quality of the OpenCV/webcam feed.
2) Implementing the virtual background system.
3) Integrating the frontend with the agora.io engine.
4) Building the AI-based autotune system.
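For the virtual background, DeepLab-v3 produces a per-pixel "person" mask for each frame; once you have that mask, compositing the person onto a new background is a simple masked blend. This is a minimal sketch of that final step, with illustrative shapes and a hand-written mask standing in for the model's output.

```python
import numpy as np

def replace_background(frame, mask, background):
    """Keep `frame` pixels where mask == 1 (person), else use `background`.
    frame/background: (H, W, 3) uint8 images; mask: (H, W) array of 0/1."""
    mask3 = mask[..., None].astype(bool)        # broadcast mask to 3 channels
    return np.where(mask3, frame, background)

# Tiny synthetic example: 2x2 frame with the "person" in the left column.
frame = np.full((2, 2, 3), 200, dtype=np.uint8)     # webcam frame
background = np.zeros((2, 2, 3), dtype=np.uint8)    # "stage" backdrop
mask = np.array([[1, 0], [1, 0]])                   # 1 = person pixel

out = replace_background(frame, mask, background)
print(out[:, 0].tolist())   # left column kept from the original frame
```

In the app this blend runs per frame before the video is handed to the streaming layer, which is why challenge 2 above was mostly about doing it fast enough.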
Accomplishments that I'm proud of
We were able to integrate all the models with our frontend, along with the video-calling functionality, to build this application. Every teammate learned a lot from this project; none of us had built anything like it before. Sitting up whole nights figuring out how to make things work, especially while collaborating remotely, was a big task in itself, and one our entire team is proud of.
What I learned
1) Computer vision, artificial intelligence, and deep learning algorithms such as human segmentation.
2) Audio signal processing and tuning for efficient transfer of sound.
3) How to implement CameraX in our application to capture frames.
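The tuning idea behind the autotune work can be illustrated with pitch shifting by resampling: playing a signal back at ratio r raises its pitch by 12·log2(r) semitones. This numpy-only sketch is an assumption-laden simplification (it also shortens the signal; real autotune preserves duration with phase-vocoder-style techniques), not our exact implementation.

```python
import numpy as np

def pitch_shift(signal, semitones):
    """Resample a 1-D float signal to shift its pitch by `semitones`.
    Note: this naive method also changes the signal's duration."""
    ratio = 2.0 ** (semitones / 12.0)       # +12 semitones -> ratio 2.0
    n_out = int(len(signal) / ratio)
    idx = np.arange(n_out) * ratio          # fractional sample positions
    return np.interp(idx, np.arange(len(signal)), signal)

# A 440 Hz tone (A4) shifted up one octave becomes an 880 Hz tone (A5).
sr = 8000                                   # sample rate in Hz
t = np.arange(sr) / sr                      # one second of samples
a4 = np.sin(2 * np.pi * 440 * t)
a5 = pitch_shift(a4, 12)                    # half the samples, double the pitch
print(len(a4), len(a5))
```

Correcting a singer's pitch is then a matter of estimating the sung frequency, computing the semitone offset to the nearest in-key note, and applying a shift like this.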
What's next for Musinify
Right now our platform supports one-to-one calls; we plan to extend it to group calls as well. We're also planning to add more instruments, such as the flute, saxophone, guitar, and violin. For beginners, we will provide courses through which they can learn these instruments and start their journey as musicians.