Our journey started when we tried to purchase some clothes at a nearby apparel shop during the COVID-19 pandemic. We noticed that footfall was significantly low; after some research we found that people are reluctant to visit shops and try on clothes for fear of catching COVID-19. We then thought of developing a mobile (Android) application that lets people try on clothes without physically wearing them. We searched YouTube and Google for similar implementations and found exactly two:
- A Virtual Try-On mirror that uses the Microsoft Kinect sensor. Customers need to stand in front of the display, where clothes can be fitted and changed using hand-gesture control. The problems with this system are that the fitting doesn't work perfectly, the hardware cost is very high, customers have to wait for their turn, and it is difficult to implement.
- A Virtual Try-On mobile application with simple drag and drop. The problems with this one are that the fitting is a disaster and the UI is not user friendly.
What it does
In simple words, it's a RESTful API deployed as a Flask application. You can upload your full-body image, then upload different upper-body clothes such as T-shirts, shirts, etc., and see how well they fit on your body.
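The upload flow above can be sketched as a minimal Flask app. Note that the route names, folder layout, and the missing model call are all hypothetical stand-ins, not the deployed API:

```python
# Minimal sketch of the two upload endpoints (hypothetical routes and paths;
# the real service runs the try-on model after the cloth upload).
import os

from flask import Flask, jsonify, request

app = Flask(__name__)
UPLOAD_DIR = "uploads"
os.makedirs(UPLOAD_DIR, exist_ok=True)


@app.route("/upload/body", methods=["POST"])
def upload_body():
    f = request.files["image"]  # customer's full-body photo
    path = os.path.join(UPLOAD_DIR, "body.jpg")
    f.save(path)
    return jsonify({"status": "ok", "path": path})


@app.route("/upload/cloth", methods=["POST"])
def upload_cloth():
    f = request.files["image"]  # upper-body cloth image (T-shirt, shirt, ...)
    path = os.path.join(UPLOAD_DIR, "cloth.jpg")
    f.save(path)
    # In the real service, the PyTorch try-on pipeline would run here
    # and the response would carry the rendered result image.
    return jsonify({"status": "ok", "path": path})
```

A client would POST the two images as multipart form data, one endpoint after the other.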
How we built it
- We first collected the dataset for the Virtual Try-On (http://22.214.171.124:9999/overview.php).
- The dataset includes different full-body images, their parsed (segmentation) images, and OpenPose key-point files (key_points.json).
- We divided the dataset into different categories and did some data preprocessing.
- We trained our network using PyTorch and saved the weights.
- Combined the GMM (Geometric Matching Module), generator_parsing, generator_app_cpvton (clothing shape- and texture-preserving VTON), and generator_face modules.
- Implemented it on Google Colab as a Flask RESTful API.
- Customers can upload their photo; OpenPose estimates the pose and writes a JSON file, and we apply instance segmentation to produce the parse image, saving the outputs to separate folders. The customer can then upload any upper-cloth image; the program first removes its background, saves the output to a separate folder, and then applies our pre-trained PyTorch model to produce the result, with the detailed warped cloth rendered on the customer image.
- The output contains two images: one with a generated (fake) face and one with the original face.
- To make the pipeline easier to follow, we include all the intermediate steps in the output image.
- It is now deployed on an AWS EC2 (p2.xlarge) GPU-powered instance.
- We also built an Android wireframe in Adobe XD and are in the process of building the app on top of the RESTful API.
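The inference stages above chain together as GMM → parsing generator → appearance generator → face generator. A sketch of that wiring, where each tiny class is a hypothetical placeholder for the real trained network (the actual architectures and input sizes come from the CP-VTON line of work):

```python
import torch
import torch.nn as nn


# Placeholder modules: each real network is a full CNN, but the pipeline
# wiring is the same regardless of what each stage computes internally.
class GMM(nn.Module):  # Geometric Matching Module: warps the cloth to the body
    def forward(self, person, cloth):
        return cloth  # placeholder: identity "warp"


class GeneratorParsing(nn.Module):  # predicts the target human parse map
    def forward(self, person, warped_cloth):
        return person


class GeneratorApp(nn.Module):  # CP-VTON-style appearance generator
    def forward(self, parse, warped_cloth):
        return parse + warped_cloth


class GeneratorFace(nn.Module):  # refines/restores the customer's face
    def forward(self, coarse, person):
        return coarse


def try_on(person, cloth, gmm, g_parse, g_app, g_face):
    """Run the full try-on chain on one person/cloth pair."""
    with torch.no_grad():  # inference only, no gradients needed
        warped = gmm(person, cloth)
        parse = g_parse(person, warped)
        coarse = g_app(parse, warped)
        return g_face(coarse, person)


person = torch.rand(1, 3, 256, 192)  # 256x192 is the usual CP-VTON input size
cloth = torch.rand(1, 3, 256, 192)
out = try_on(person, cloth, GMM(), GeneratorParsing(), GeneratorApp(), GeneratorFace())
```

In the deployed service, the `person` and `cloth` tensors would come from the uploaded and background-removed images, and `out` would be written back to the output folder.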
Challenges we ran into
- Combining the GMM, generator_parsing, generator_app_cpvton (clothing shape- and texture-preserving VTON), and generator_face modules was a very hard task, since it required a lot of code optimization.
- Deploying it as a Flask application along with the OpenPose implementation took a lot of effort, since we had to build OpenPose with Caffe and satisfy many GPU driver requirements such as CUDA and cuDNN.
- Deploying on AWS EC2 (p2.xlarge) took more time than we expected due to protobuf conflicts in the OpenPose installation (CMakeLists.txt changes and library installation issues).
- Implementing the instance segmentation algorithm also took a lot of time, because we needed cloth-specific segmentation on the human body.
Accomplishments that we're proud of
- Developing the world's first detailed virtual try-on for clothes powered by PyTorch.
- Implemented a web app for users to interact with.
- Built an Android wireframe and are in the process of completing the app.
What we learned
- We learned to integrate different PyTorch models into a single program.
- We understood the importance of PyTorch's dynamic computational graph in our project.
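Integrating separately trained PyTorch models mostly means loading each stage's saved `state_dict` and switching it to eval mode before chaining them. A small round-trip sketch (the checkpoint names and `TinyNet` module are illustrative, not our real networks):

```python
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Toy module standing in for any one of the trained stages."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        return self.fc(x)


def load_stage(module, ckpt_path, device="cpu"):
    """Load one stage's saved weights and put it in inference mode."""
    state = torch.load(ckpt_path, map_location=device)
    module.load_state_dict(state)
    return module.eval()


# Save two "trained" stages, then reload them into fresh module instances,
# the same pattern we used to combine GMM and the generators in one program.
stages = {"gmm": TinyNet(), "generator_face": TinyNet()}
for name, net in stages.items():
    torch.save(net.state_dict(), f"{name}.pth")

loaded = {name: load_stage(TinyNet(), f"{name}.pth") for name in stages}
```

Keeping each stage behind its own checkpoint file let us retrain one module without touching the others.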
What's next for Deep Virtual Try-On clothes powered by PyTorch
- An Android application where users can simply take a photo of themselves and apply virtual clothing in real time.
- Training the model on lower-body clothes such as pants and shorts, and on images of men.