“The hills are alive with the sound of music, with songs they have sung for a thousand years.” —“The Sound of Music,” sung by Maria
Inspiration
While doing our research, we came across a YouTube video of someone using plants' bioelectrical signals to make music. One of our team members is really into gardening and many others are into music, so this idea immediately stood out to us. But of course, we couldn't just copy the exact same thing (not that we could have even if we wanted to; we don't have the hardware). Luckily, just like us humans, plants speak many languages too. And just maybe, if we try hard enough, we can understand them.
What it does
Our app interprets one of the many languages that plants speak by translating the vein structure of their leaves into music. The user simply takes a photo of a leaf and hears the music generated from its vein structure.
How we built it
We discovered a library that extracts graph data from a leaf's vein structure. Unfortunately, it was outdated and did not work for us, but it was enough to point us in the right direction. We converted the vein structure into a graph and then traversed it to generate music, assigning different instruments to the edges. We used Python for the backend (graph interpretation and music generation), React Native for the frontend, and, well, our main heroes, the leaves, for the music itself.
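In outline, the generation step walks the graph edge by edge and turns each vein segment into a note. The sketch below is illustrative rather than our actual code: the tiny graph, the instrument palette, and the `edge_to_note` mapping are all made-up placeholders for the real thing.

```python
import networkx as nx

# A tiny hypothetical vein graph: nodes are branching points,
# edges are vein segments (length ~ segment length in pixels).
G = nx.Graph()
G.add_edge(0, 1, length=40)
G.add_edge(1, 2, length=15)
G.add_edge(1, 3, length=25)

INSTRUMENTS = ["piano", "flute", "strings"]  # illustrative palette

def edge_to_note(u, v, data):
    """Map one vein segment to a (pitch, duration, instrument) triple.

    Pitch comes from the node ids, duration from the segment length,
    and the instrument cycles over a fixed palette -- a simplified
    stand-in for the real mapping.
    """
    pitch = 60 + (u + v) % 12            # MIDI note near middle C
    duration = data["length"] / 20.0     # longer veins -> longer notes
    instrument = INSTRUMENTS[(u + v) % len(INSTRUMENTS)]
    return pitch, duration, instrument

def graph_to_notes(G, start=0):
    """Traverse the graph depth-first from the leaf's base node and
    emit one note per vein segment, in traversal order."""
    return [edge_to_note(u, v, G[u][v]) for u, v in nx.dfs_edges(G, start)]

notes = graph_to_notes(G)
```

Because the traversal is depth-first, each branch of the leaf plays out as a contiguous phrase before the melody returns to the junction and follows the next branch.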
Detailed Explanation
When an image is input, GrabCut first cuts the leaf out from its background. The image is then converted to grayscale and Canny edge detection highlights all edges, including both the leaf boundary and the veins inside it. GrabCut runs a second time to isolate just the leaf boundary, which is then subtracted from the Canny edges to leave only the internal vein network. A region growing pass fills gaps in that network, and a Kruskal-style algorithm connects any disconnected fragments into one single connected graph. The vein network is then skeletonised down to 1 pixel wide, and finally sknw converts the skeleton into a NetworkX graph where branching points are nodes and vein segments are edges. That graph is then used for music generation.
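That last step (skeleton to graph) can be illustrated with a toy stand-in for sknw using only NumPy and NetworkX: skeleton pixels with one neighbour become endpoint nodes, pixels with three or more become junction nodes, and the chains of degree-2 pixels between them become edges. This is a simplified sketch, not sknw's actual implementation.

```python
import numpy as np
import networkx as nx

# A tiny 1-pixel-wide "skeleton": 1s mark vein pixels, forming a Y shape.
skeleton = np.array([
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=np.uint8)

def neighbors(img, y, x):
    """8-connected skeleton neighbours of pixel (y, x)."""
    out = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            ny, nx_ = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx_ < img.shape[1] and img[ny, nx_]:
                out.append((ny, nx_))
    return out

def skeleton_to_graph(img):
    """Endpoints (1 neighbour) and junctions (3+ neighbours) become nodes;
    each chain of degree-2 pixels between two nodes becomes one edge
    weighted by the chain's length in pixels."""
    pixels = {(int(y), int(x)) for y, x in zip(*np.nonzero(img))}
    nodes = {p for p in pixels if len(neighbors(img, *p)) != 2}
    G = nx.Graph()
    G.add_nodes_from(nodes)
    for start in nodes:
        for step in neighbors(img, *start):
            prev, cur, length = start, step, 1
            while cur not in nodes:  # walk along the degree-2 chain
                nxt = [p for p in neighbors(img, *cur) if p != prev]
                prev, cur = cur, nxt[0]
                length += 1
            if start != cur:
                old = G.get_edge_data(start, cur, {}).get("length", 0)
                G.add_edge(start, cur, length=max(old, length))
    return G
```

Running this on the Y-shaped skeleton yields four nodes (three tips and one junction) and three edges, one per branch.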
Challenges we ran into
Backend: The library that we found online to generate a graph from the vein structure did not work, so we had to create our own during the hackathon. The second biggest issue we ran into was making the music sound good. While the randomness is what makes it beautiful, we did not want to torture our listeners. Quantizing the notes helped, and so did choosing better instruments and scales. But we still kept the music as authentic as possible and did not change what the plant is saying to us. Plants speak a different language from ours, and sometimes the music may sound alien to us, but it is important that we respect their language.
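Quantization here means snapping each raw pitch to the nearest note of a chosen scale, so the plant's "sentence" keeps its shape while every note lands somewhere musical. A minimal sketch follows; the pentatonic scale is an illustrative choice, not necessarily the one we shipped.

```python
# Snap raw MIDI pitches to the nearest note of a scale.
# C-major pentatonic is used here purely as an example scale.
PENTATONIC = [0, 2, 4, 7, 9]  # pitch classes: C D E G A

def quantize(pitch, scale=PENTATONIC):
    """Return the scale note closest to `pitch` (a MIDI note number)."""
    octave = pitch // 12
    # Consider candidate notes in this octave and the adjacent ones,
    # so pitches near an octave boundary can snap across it.
    candidates = [12 * (octave + o) + s for o in (-1, 0, 1) for s in scale]
    return min(candidates, key=lambda c: abs(c - pitch))
```

For instance, a C#4 (MIDI 61) that falls between scale tones gets pulled onto a neighbouring pentatonic note, while notes already in the scale pass through unchanged.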
“We don't make mistakes, just happy little accidents.” —Bob Ross
Frontend: We wanted to make the UI as calming as possible to go along with the app, so we went with a hand-drawn art style. This meant that we had to draw all our assets. Luckily, we had a wonderful artist on our team, so everything went fine.
Accomplishments that we're proud of
Everything. No seriously, all of it. This idea was exciting to us and we are all glad we have a working final product.
“This song is new to me, but I am honored to be a part of it.” —Solanum, from the game Outer Wilds
What we learned
We learned technical skills: the different tech stacks we knew and the ones that were unknown to us before this. We learned how to work in a team, manage a project, deal with conflicts, and share ideas. And a lot else that I cannot put into words. Or maybe I'm just getting tired of writing this. Anyway, we had fun.
What's next for GreenMusic
Our app isn't scalable at the moment, so improving scalability is a priority. Our code isn't very clean (which is understandable; 48 hours isn't a lot), and that causes optimization issues. Lastly, we can try different graph and music generation algorithms to make the output sound more beautiful.
“I go to the hills when my heart is lonely / I know I will hear what I've heard before / My heart will be blessed with the sound of music / And I'll sing once more” —“The Sound of Music,” sung by Maria
Built With
- chatgpt
- claude
- copilot
- expo.io
- github
- networkx
- node.js
- opencv
- python
- react-native
- typescript
- vscode