As part of their customary April Fools' prank, Google made a device that would tell a person about their style! We actually liked the idea of detecting style, but we wanted to use it to make people look cool!
What it does
So, it is a mirror that can project images onto its own surface. We use this capability to virtually project your clothes onto your body.
The mirror knows all the clothes you have in your wardrobe, be it T-shirts, shirts, jackets, or lowers. It then uses our custom algorithm to suggest an outfit for the day!
The mirror is capable of matching different colors: our algorithm can distinguish which colors look good together and which don't. It also has the added advantage of checking the current temperature and weather conditions to recommend the ideal type of clothes to wear.
How I built it
We started by building the hardware of the mirror. We took an old LCD monitor and mounted a two-way mirror on top of it, which lets the viewer see their reflection along with a partial image from the display behind it.
Then we started the software implementation. Our software uses the Google Cloud Vision API to detect the "Upper Body" and "Lower Body" and give us the coordinates for each. We use these coordinates to mask images of the clothes recommended by our algorithm onto the viewer's body. The coordinates from the Google Cloud Vision API are passed to Unity, which enables optimal placement of the image on the body.
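A minimal sketch of the coordinate hand-off might look like the following. The function names and the annotation format (label plus normalized vertices, as returned by the Vision API's object localization) are our assumptions for illustration, not the project's exact code.

```python
# Sketch: convert normalized bounding polygons (as produced by Google
# Cloud Vision object localization) into pixel rectangles that the
# Unity side can use for overlay placement. Names are illustrative.

def to_pixel_rect(normalized_vertices, frame_w, frame_h):
    """Convert (x, y) normalized vertices in [0, 1] into an integer
    pixel rectangle (left, top, width, height)."""
    xs = [v[0] * frame_w for v in normalized_vertices]
    ys = [v[1] * frame_h for v in normalized_vertices]
    left, top = int(min(xs)), int(min(ys))
    return (left, top, int(max(xs)) - left, int(max(ys)) - top)

def overlay_rects(annotations, frame_w, frame_h):
    """Map detected body regions to pixel rects, keyed by label.
    `annotations` is a list of (label, normalized_vertices) pairs,
    e.g. extracted from the API's localized object annotations."""
    return {label: to_pixel_rect(verts, frame_w, frame_h)
            for label, verts in annotations}

# Example with mock detections on a 640x480 camera frame:
detections = [
    ("Upper Body", [(0.25, 0.10), (0.75, 0.10), (0.75, 0.50), (0.25, 0.50)]),
    ("Lower Body", [(0.30, 0.50), (0.70, 0.50), (0.70, 0.95), (0.30, 0.95)]),
]
rects = overlay_rects(detections, 640, 480)
print(rects["Upper Body"])  # (160, 48, 320, 192)
```

The rectangles can then be serialized and sent to Unity, which positions the clothing sprites over the corresponding body regions.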
Then we started the implementation of our algorithm which suggests clothes from the wardrobe. Currently, the algorithm uses two methods to suggest a combination:
Color Matching - We match color combinations against presets defined from the data on EffortlessGent.com
Weather Prediction - We use the OpenWeatherMap API to get the predicted temperature. If it exceeds a threshold, thinner clothes are suggested
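The two methods above could be sketched roughly as follows. The color presets, the temperature threshold, and the wardrobe schema are all placeholder assumptions, not the actual data from EffortlessGent.com or our production code.

```python
# Illustrative sketch of the two suggestion methods. The presets and
# the threshold below are assumed placeholder values.

# Top/bottom color pairs considered to match (assumed presets).
COLOR_PRESETS = {
    "navy": ["khaki", "grey", "white"],
    "white": ["navy", "black", "olive"],
    "grey": ["black", "navy", "maroon"],
}

THIN_CLOTHES_THRESHOLD_C = 25  # assumed cutoff, degrees Celsius

def colors_match(top_color, bottom_color):
    """True if the top/bottom color pair appears in the presets."""
    return bottom_color in COLOR_PRESETS.get(top_color, [])

def clothing_weight(forecast_temp_c):
    """Thin clothes above the threshold, thick ones below. In the
    real system the forecast comes from the OpenWeatherMap API."""
    return "thin" if forecast_temp_c > THIN_CLOTHES_THRESHOLD_C else "thick"

def suggest(wardrobe, forecast_temp_c):
    """Return the first top/bottom pair whose colors match and whose
    weight suits the forecast. `wardrobe` is a list of dicts with
    'type', 'color', and 'weight' keys (assumed schema)."""
    weight = clothing_weight(forecast_temp_c)
    tops = [c for c in wardrobe if c["type"] == "top" and c["weight"] == weight]
    bottoms = [c for c in wardrobe if c["type"] == "bottom" and c["weight"] == weight]
    for top in tops:
        for bottom in bottoms:
            if colors_match(top["color"], bottom["color"]):
                return top, bottom
    return None
```

Keeping the two checks as independent functions means either can be swapped out, for instance replacing the preset table with a learned color model, without touching the other.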
Finally, we implemented a basic clothes recommendation and transaction system: the mirror suggests clothes for the user to buy, and the resulting purchases are verified through Capital One's Purchase API.
Challenges I ran into
Recognizing and segregating the human body into upper and lower halves so that two different images could be imposed was a major challenge; the Google Cloud Vision API helped a lot there. Integrating it with OpenCV and Unity for real-time detection was another challenge.
Deciding which colors look good together and which combinations can be used was also a significant challenge. In the end, we went with one of the most widely accepted patterns from EffortlessGent.
What's next for TRYOUR - 'A Mirror Which Suggests You Styles'
We had a lot of ideas for improving our algorithm's efficiency, but due to time constraints we were not able to pull them off!
In the future, we can use Pinterest and Tag-Walk to scrape the latest designs and trends on the market and suggest something similar.
Also, TRYOUR can be developed into a complete platform where the mirror suggests clothes the user can buy, enriching the experience and keeping them up to date with current fashion trends. With a single gesture, the user can place an order for the clothes, which are then automatically added to their digital wardrobe.