Inspiration

We were inspired by the prospects of computer vision and the chance to bridge the gap between what a camera sees and the sounds of music. If we could connect these ideas, we could convert vision into vibe.

What it does

Using hand gestures, we capture an image and convert it to a text description, which is mapped to a select few genres using the OpenAI API. The program then sends those genres to the Spotify API and gets the best recommendations that Spotify has to offer.
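To make the description-to-genre step concrete, here is a minimal sketch of that mapping. In the actual project the OpenAI API performs this step; the keyword table and function name below are hypothetical stand-ins so the flow is easy to follow.

```python
# Hypothetical sketch: map an image description to a few seed genres.
# The real project delegates this to the OpenAI API; a keyword lookup
# stands in here for illustration only.

GENRE_KEYWORDS = {
    "beach": "tropical house",
    "rain": "lo-fi",
    "city": "hip-hop",
    "forest": "ambient",
    "party": "dance pop",
}

def description_to_genres(description: str, limit: int = 3) -> list[str]:
    """Return up to `limit` genres matching keywords in the description."""
    description = description.lower()
    genres = [g for word, g in GENRE_KEYWORDS.items() if word in description]
    return genres[:limit] or ["pop"]  # fall back to a default genre
```

For example, a description like "a rainy city street at night" would map to lo-fi and hip-hop under this toy table.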

How we built it

A gesture-activated smart webcam captures your scene. With the help of AI and LLMs, we describe the image in text form and map that description to genres using OpenAI's API. We then serve a curated set of tracks found through the Spotify API: the perfect music match for your vibe.
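The last step above queries Spotify's recommendations endpoint with the chosen genres as seeds. A minimal sketch of assembling that request URL is below; authentication (the OAuth bearer token) is omitted, and the helper name is our own.

```python
from urllib.parse import urlencode

# Spotify's recommendations endpoint; requests also need an OAuth
# bearer token in the Authorization header (not shown here).
SPOTIFY_RECS_URL = "https://api.spotify.com/v1/recommendations"

def build_recommendations_url(genres: list[str], limit: int = 10) -> str:
    """Build a recommendations request URL from seed genres.

    Spotify accepts at most 5 seeds, so any extras are dropped.
    """
    params = {
        "seed_genres": ",".join(genres[:5]),
        "limit": limit,
    }
    return f"{SPOTIFY_RECS_URL}?{urlencode(params)}"
```

The returned URL can then be fetched with any HTTP client to get the track list back as JSON.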

Challenges we ran into

We weren't able to train our model from scratch as we had planned, so we switched routes after that hurdle.

Accomplishments that we're proud of

Computer vision was a major hurdle we faced, but we were able to make the webcam take a picture when a specific gesture is made. We were also proud of how we intertwined the APIs to complete the project.
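One subtlety in gesture-triggered capture is firing the camera once per gesture rather than on every frame the gesture is held. The gesture detection itself (e.g. via a hand-tracking library) is assumed here; this sketch shows only the edge-triggering logic, with names of our own choosing.

```python
class GestureTrigger:
    """Fire a capture only on the frame where the gesture first
    appears (a rising edge), not on every frame it is held."""

    def __init__(self) -> None:
        self._was_active = False

    def update(self, gesture_detected: bool) -> bool:
        """Feed one frame's detection result; return True to capture."""
        fire = gesture_detected and not self._was_active
        self._was_active = gesture_detected
        return fire
```

Holding the gesture for several frames then yields exactly one capture, and releasing and re-making it yields another.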

What we learned

We learned about computer vision and gesture recognition. We also learned how to build our own CNN and use datasets to train such a model.

What's next for Vybe Fynd3r

We hope to turn this idea into a wearable device that uses its own model, trained to convert images to genre descriptions, and to keep improving it.

Built With
