Inspiration

The fashion industry has worked the same way for hundreds of years. When we go to a store, we can try on some of the items that are there, but what if we want to try something that isn't there? When we buy clothing online, we are always skeptical of whether we will like the actual product and whether it will match our other clothes. Through our platform, we are working on bridging the gap between customers and fashion through the use of AI.

What it does

In the web app, you take a photo of yourself, enter a description of any shirt or pair of pants you're interested in, and it shows you a final image of yourself wearing those clothes so you can see how they look on you. It also provides links where you can purchase similar items, along with the option to share the outfit with your friends.

How we built it

The frontend was built with React.js, and we used a Python Flask app to handle communication with the machine learning models: the model that masks the clothing in your photo, and the text-to-image Stable Diffusion model that replaces your current clothes with the ones you described. We used Google Cloud to deploy our web app and host our server.
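To make the backend flow concrete, here is a minimal sketch of how the Flask server could package a request for a diffusers-style inpainting pipeline. The helper only validates and assembles the parameters; the actual model call (shown in comments) needs downloaded weights and a GPU, so it is left out. All names here are illustrative, not our exact code.

```python
def build_inpaint_request(prompt: str,
                          strength: float = 0.8,
                          guidance_scale: float = 7.5,
                          num_inference_steps: int = 50) -> dict:
    """Validate and package the parameters for an inpainting call."""
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    if guidance_scale <= 0 or num_inference_steps <= 0:
        raise ValueError("guidance_scale and num_inference_steps must be positive")
    return {
        "prompt": prompt,
        "strength": strength,
        "guidance_scale": guidance_scale,
        "num_inference_steps": num_inference_steps,
    }

# Inside the Flask route, the request would then flow roughly like this
# (requires the diffusers library, model weights, and a GPU):
# pipe = StableDiffusionInpaintPipeline.from_pretrained(
#     "runwayml/stable-diffusion-inpainting")
# result = pipe(image=user_photo, mask_image=clothing_mask,
#               **build_inpaint_request("a red flannel shirt"))
```

Keeping the parameter validation separate from the model call made it easier to tweak defaults while experimenting.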

Challenges we ran into

The masking technique we used to let the model replace the clothes was rather hard to set up. Finding the right parameters and adjusting the ML model's iterations took a lot of trial and error. In addition, the text-to-image model required a lot of fine-tuning to improve its accuracy, so we ran many experiments with the model's prompt, prompt strength, guidance scale, and number of inference steps.
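The trial-and-error process above can be sketched as a simple parameter sweep: try every combination of prompt strength, guidance scale, and inference steps, score each output, and keep the best configuration. The scoring function below is a toy stand-in (our real runs were judged by looking at generated images), so treat this as an outline of the search, not our actual harness.

```python
from itertools import product

def sweep(score_fn, strengths, guidance_scales, steps_options):
    """Return the (strength, guidance_scale, steps) combo with the best score."""
    best_params, best_score = None, float("-inf")
    for strength, guidance, steps in product(strengths, guidance_scales, steps_options):
        score = score_fn(strength, guidance, steps)
        if score > best_score:
            best_params, best_score = (strength, guidance, steps), score
    return best_params, best_score

# Toy score that peaks at strength=0.8, guidance=7.5, steps=50:
toy = lambda s, g, n: -abs(s - 0.8) - abs(g - 7.5) / 10 - abs(n - 50) / 100
params, _ = sweep(toy, [0.6, 0.8, 1.0], [5.0, 7.5, 10.0], [25, 50])
# → params == (0.8, 7.5, 50)
```

In practice the grid stays small because each combination costs a full image-generation run.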

Accomplishments that we're proud of

We are proud of the unique user experience we were able to develop. Although there are products on the market that offer virtual try-on for online shopping, our platform is unique in that it combines trying on different outfits simply by describing them, finding similar items on the market, and easily sharing outfits with friends.

What we learned

Going into this hackathon, most of us didn't know what stable diffusion was or how image masking works. By the end of the hackathon, and after reading a lot of documentation, we were able to accomplish a lot with our models. From our first trials, we learned which parameters affect accuracy the most, which helped us fine-tune until we got reliable results.

What's next for Fash Lean

We see a ton of potential for Fash Lean. Our models need more fine-tuning to guarantee reliable results for all our customers, which also means building a better image-masking model to improve the experience with our image-generation model. In addition, part of our in-app experience is shopping for similar outfits: our goal is to partner with leading brands and e-commerce websites to monetize the product through affiliate links, keeping the platform free for our customers.

Built With

flask · google-cloud · python · react · stable-diffusion
