Inspiration

There are so many times I'm scrolling through a website or watching a video and see a cute dress or a skirt I like. Later, I either forget about it or have to tediously search for similar items across different websites. What if that could be made easier?

What it does

When you see something you like, just click the Chrome extension and select that region of the screen. If the area is well defined, it will try to suggest similar items from different sources.

How I built it

The Chrome extension captures a screenshot of the selected area. The screenshot is stored on a file server, and its location is sent to a Python API. The API first detects human figures using image processing, then runs contour detection and removes the background as well as possible. Finally, the image is sent to the Google Vision API, which returns label annotations and image properties such as color and type of dress. For color, a routine finds the closest named match to the RGB values of the dominant colors, and this is added to the properties returned by Google. All of these are combined into a query string and passed to the Google Custom Search API. The top four search results are parsed and their context URLs are opened.
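As a rough sketch of the color-matching and query-building steps: the idea is to map a dominant RGB color (as returned by the Vision API's image properties) to the nearest named color, then join that name with the label annotations into a search query. The palette and function names here are illustrative assumptions, not the project's actual code.

```python
import math

# Hypothetical named-color palette; the project's actual color list is not shown.
PALETTE = {
    "red": (255, 0, 0),
    "blue": (0, 0, 255),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "pink": (255, 192, 203),
    "navy": (0, 0, 128),
}

def closest_color(rgb):
    """Return the palette name nearest to rgb by Euclidean distance."""
    return min(PALETTE, key=lambda name: math.dist(PALETTE[name], rgb))

def build_query(labels, rgb):
    """Combine the matched color name with Vision API labels into a query string."""
    return " ".join([closest_color(rgb)] + labels)

# A dominant color near pink plus labels like "dress" and "floral":
print(build_query(["dress", "floral"], (250, 180, 195)))  # -> "pink dress floral"
```

The query string produced this way would then be sent to the Google Custom Search API, whose JSON response can be parsed for the top results' context URLs.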

Challenges I ran into

The challenges were to get the proper screen capture to begin with and then to do the image preprocessing.

Accomplishments that I'm proud of

This is my first experience using most of these technologies: building extensions, developing a Python API server, and using Google APIs.

What I learned

I learned a lot about building an end-to-end solution from scratch.

What's next for SnapnShop

Improve the similarity matching and the search results it produces, for example by using a feature extractor trained specifically on clothing to extract relevant features from the images, such as patterns, shades, and cuts.
