Inspiration

We wanted to revolutionize the way people shop for clothing online, without having to visit a store. Many consumers have ordered from online retailers only to find that the size and look of an item are drastically different from the pictures showcasing it. By using computer vision, we let people virtually try on clothes and see how they would actually look on their bodies.

What it does

Our website lets the user visualize how various items of clothing, including necklaces, watches, and t-shirts, would look on them. Our custom computer vision pipeline uses Haar cascade face detection to position each item of clothing on the user's body. Because the result is live-streamed over the user's body with minimal lag, the user can see how the clothing flows as they move around.
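
As a rough illustration of the detection step, OpenCV ships a frontal-face Haar cascade that can anchor the overlays. The sketch below is not our exact code; the camera index and cascade file are assumptions, and it simply draws the face box that clothing would be positioned against.

```python
# Minimal sketch: detect the face each frame so clothing overlays
# can be anchored relative to the face box.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # assumed default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # (x, y, w, h) becomes the anchor: a t-shirt sits below the chin,
        # a necklace just under the face box, a watch relative to a wrist.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```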

How we built it

We used OpenCV to estimate where key points of the body, such as the joints and the waist, are located. We originally used the OpenPose library to locate these joints through its pose estimation, but found it nearly impossible to live-stream the results without significant lag. Instead, we projected the clothing data over the user, using the face detected by the Haar cascade as an anchor for the rest of the display. Each piece of clothing data is either a 2D portrait or a 2D slice of a 3D object, which gives us additional flexibility in scaling.
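
The overlay step amounts to alpha-blending a transparent clothing image onto the frame at a position and scale derived from the face box. The sketch below is illustrative rather than our exact implementation; the width ratio and vertical offset are assumed placeholder values.

```python
# Illustrative sketch: paste an RGBA clothing image onto a BGR frame,
# positioned and scaled from the detected face box.
import cv2
import numpy as np

def overlay_clothing(frame, item_rgba, face_box, width_ratio=2.8, y_offset=1.4):
    """Blend a transparent clothing image onto the frame, anchored to the face."""
    x, y, w, h = face_box
    new_w = int(w * width_ratio)                 # shirt wider than the face
    scale = new_w / item_rgba.shape[1]
    new_h = int(item_rgba.shape[0] * scale)
    item = cv2.resize(item_rgba, (new_w, new_h))

    # Top-left corner of the overlay: centred on the face, below the chin.
    x0 = x + w // 2 - new_w // 2
    y0 = y + int(h * y_offset)

    # Clip the overlay to the frame boundaries.
    x1, y1 = max(x0, 0), max(y0, 0)
    x2 = min(x0 + new_w, frame.shape[1])
    y2 = min(y0 + new_h, frame.shape[0])
    if x1 >= x2 or y1 >= y2:
        return frame

    roi = frame[y1:y2, x1:x2]
    item_crop = item[y1 - y0:y2 - y0, x1 - x0:x2 - x0]
    alpha = item_crop[:, :, 3:4] / 255.0         # per-pixel transparency
    roi[:] = (alpha * item_crop[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)
    return frame
```

The clothing image would be loaded with cv2.imread(path, cv2.IMREAD_UNCHANGED) so its alpha channel is preserved for the blend.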

Challenges we ran into

We had to work with many facets of computer vision, the most difficult of which was pose estimation. After estimating the angle the user is facing and the locations of their joints, we had to use those relative positions to project a supposedly three-dimensional t-shirt onto a three-dimensional person, all within a 2D frame. On top of this, without external sensors such as a Kinect, full-body part recognition systems are still far from real-time.
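
One simplification that makes this projection tractable is a weak-perspective trick: compress the 2D clothing image horizontally by the cosine of the estimated yaw, so a turned body gets a narrower shirt. The snippet below is only a sketch of that idea; how the yaw itself is estimated (from the face or joint positions) is assumed.

```python
# Sketch: fake the 3D rotation of a 2D clothing image by squeezing its
# width according to the user's estimated facing angle (yaw).
import math
import cv2

def apply_yaw(item_rgba, yaw_degrees):
    """Return the clothing image horizontally compressed for a given yaw."""
    squeeze = abs(math.cos(math.radians(yaw_degrees)))
    squeeze = max(squeeze, 0.2)            # avoid collapsing to zero width
    h, w = item_rgba.shape[:2]
    return cv2.resize(item_rgba, (max(int(w * squeeze), 1), h))
```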

Accomplishments that we're proud of

We're proud of how we were able to bring to life an idea that has never been attempted before, despite never having worked with computer vision.

What we learned

We learned a lot about computer vision, the OpenCV API, and how projections work mathematically. We also learned how to present our work on the web using Flask.
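
For reference, a minimal Flask setup for this kind of live preview streams JPEG-encoded frames to the browser as an MJPEG feed. The sketch below assumes a process_frame placeholder standing in for the detection-and-overlay pipeline described above.

```python
# Minimal sketch: serve processed webcam frames as an MJPEG stream with Flask.
import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)  # assumed default webcam

def process_frame(frame):
    # Placeholder: face detection and clothing overlay would happen here.
    return frame

def generate():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = process_frame(frame)
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/video_feed")
def video_feed():
    return Response(generate(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(debug=True)
```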

What's next for Virtual Outfitter

Our next goal is to attach a QR code to a t-shirt's tag in a store, or to place one online, so that the user can scan it and immediately see how that t-shirt would look on them.

Built With

OpenCV, Flask