Inspiration

Current music suggestion services are verbose and force the user to think in computer-centric categories. Other approaches require harvesting large amounts of data before they become effective. I've decided on a new approach. I've always felt that I often don't consciously realize what I want to listen to - so let's tap into our subconscious.

What it does

The user is shown pairs of images flashed in quick succession and is encouraged to respond as quickly as possible (F for the right image, J for the left). The idea is that whichever picture they instinctively pick reveals a preference they may not consciously realize.
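
A minimal sketch of how such a capture loop might look in browser JavaScript (the key handling and the 800 ms window are assumptions on my part, not taken from the repo):

```js
// Hypothetical client-side capture for one image pair.
// A timeout records 0 (no response); F = right image (1), J = left (-1),
// matching the encoding described below.
const RESPONSE_WINDOW_MS = 800; // assumed time limit, not from the source

function captureChoice() {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      document.removeEventListener('keydown', onKey);
      resolve(0); // ran out of time: non-response
    }, RESPONSE_WINDOW_MS);

    function onKey(e) {
      const key = e.key.toLowerCase();
      if (key === 'f' || key === 'j') {
        clearTimeout(timer);
        document.removeEventListener('keydown', onKey);
        resolve(key === 'f' ? 1 : -1);
      }
    }
    document.addEventListener('keydown', onKey);
  });
}
```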

The backend uses cosine similarity to measure how alike two sets of preferences are (a minimal sketch follows the list below). This has a few advantages:

  1. A non-response can easily be modeled as 0 (with -1 for the left picture and 1 for the right), so time-constrained image selection still produces valid data.
  2. It scales well with a variable number of pictures.
  3. It gives a rigorous, objective answer to the question of how close two sets of preferences are.
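
As a minimal sketch, assuming each user's answers are stored as a plain array in this encoding, the similarity computation could look like:

```js
// Each preference vector holds one entry per image pair:
// -1 = left picture, 1 = right picture, 0 = no response in time.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  // A user with no responses at all has an all-zero vector,
  // for which cosine similarity is undefined; treat it as 0.
  if (normA === 0 || normB === 0) return 0;
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Two users who agreed on two of three pairs, one timeout for the second:
console.log(cosineSimilarity([1, -1, 1], [1, 0, 1])); // ≈ 0.816
```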

The database is built up with each user: every session contributes new preference data, so I do not have to curate or create much data myself; it grows organically.

How I built it

I used Mongo, Node, and Express.
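
A minimal sketch of how the three pieces might be wired together (the schema, route, and connection string here are hypothetical, not taken from the repo):

```js
const express = require('express');
const mongoose = require('mongoose');

// One document per completed session: the user's -1/0/1 choices.
const Preference = mongoose.model('Preference', new mongoose.Schema({
  userId: String,
  choices: [Number],
}));

const app = express();
app.use(express.json());

// Store a finished session for later similarity matching.
app.post('/preferences', async (req, res) => {
  const doc = await Preference.create({
    userId: req.body.userId,
    choices: req.body.choices,
  });
  res.status(201).json(doc);
});

mongoose.connect('mongodb://localhost:27017/musicallyme')
  .then(() => app.listen(3000));
```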

Challenges I ran into

JavaScript is a great language!

Accomplishments that I'm proud of

Getting it to work

What I learned

JavaScript is a great language!

What's next for MusicallyMe

A much better UI/UX, since the current flow has many pain points for users, and better-optimized backend data structures; the app currently runs on default Mongo settings.
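
For example, one simple improvement beyond the defaults (assuming the hypothetical schema sketched above) would be indexing preference documents by user in the mongo shell:

```js
// Hypothetical: speeds up per-user lookups during similarity matching.
db.preferences.createIndex({ userId: 1 });
```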

https://github.com/jacsonding/MusicallyMe
