Inspiration & pitch
Consider the following scenario: you buy new clothes and cut away the laundry labels (typically located at the neck or near the waist, where they are uncomfortable). You have a vague feeling that you will need to remember how to wash the clothes, but looking at the symbols you realize you understand absolutely nothing.
Discouraged, you just throw the labels away and risk damaging your clothes the next time you do laundry.
With wshhh, your entire wardrobe (and the associated laundry labels) is easy to navigate and understand! It's as easy as 1-2:
- Take a picture of the clothing itself
- Take a picture of the label
Our computer vision algorithm takes care of extracting the relevant laundry information from the label, and makes it available through a convenient interface.
How we built it
On the frontend, we built an HTML5 app that uses the WebRTC API to access the device's camera (whether on a smartphone or a computer). The interface is built with the Materialize library, which ensures that the app scales well to diverse screen sizes and has a modern look and feel. On the backend, we process and analyze the pictures with OpenCV. The pipeline includes histogram equalization, template matching with normalized cross-correlation, and image alignment with SIFT features. We match the label against a set of common symbols and store the results in a database.
Challenges we ran into
Matching symbols was harder than we expected: laundry labels are very diverse and usually quite small, which makes it difficult to get good pictures of them, and there is no off-the-shelf computer vision solution for this task.
Accomplishments that we're proud of
We managed to build a working proof of concept!
What's next for wshhh
Features we'd like to implement:
- improve the laundry symbol matching pipeline
- filter your clothes by label and color (e.g. find all dark clothes that need to be washed with the delicate program)
- connect nearby users who share common laundry patterns ;-)
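The filtering feature above could look something like this sketch. The `Garment` data model, the field names, and the wash-program labels are all assumptions for illustration, not the app's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Garment:
    name: str
    color: str          # e.g. "black", "white", "navy"
    wash_program: str   # extracted from the label, e.g. "normal", "delicate"

# Hypothetical notion of "dark" colors for the example query
DARK_COLORS = {"black", "navy", "dark grey"}

def find_garments(wardrobe, program=None, dark_only=False):
    """Filter garments by wash program and, optionally, dark colors."""
    return [g for g in wardrobe
            if (program is None or g.wash_program == program)
            and (not dark_only or g.color in DARK_COLORS)]

wardrobe = [
    Garment("wool sweater", "navy", "delicate"),
    Garment("t-shirt", "white", "normal"),
    Garment("jeans", "black", "delicate"),
]

# The example query from the feature list: dark clothes on the delicate program
delicate_darks = find_garments(wardrobe, program="delicate", dark_only=True)
print([g.name for g in delicate_darks])  # prints ['wool sweater', 'jeans']
```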
Code is open source :-)
Check it out on GitHub.