Inspiration

The fast fashion industry produces over 100 billion garments a year, and most of them end up in a landfill within 12 months. We're surrounded by a system designed to make us buy more than we need: trend cycles that reset every few weeks, influencer hauls, and "add to cart" friction so low that most people don't think twice. The result is closets stuffed with clothes people barely wear and a planet paying the price. We built WearWise because the antidote to overconsumption isn't telling people to buy less; it's giving them the self-awareness to realize they already have enough.

What it does

WearWise lets you scan any clothing item with your phone's camera and instantly understand its real value to your wardrobe. It identifies the fabric and scores trend alignment and style, but the core feature is the closet tracker: before you buy something new, WearWise tells you whether you actually need it. It compares the incoming piece against what you already own, flags near-duplicates, scores how much the new item genuinely adds to your wardrobe, and surfaces how often you actually reach for similar pieces. It's a reality check in your pocket.
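For a sense of how such a redundancy check can work, here is a minimal sketch that compares a candidate item's image embedding against embeddings of owned items (produced by FashionCLIP, described below). The threshold and the "adds to wardrobe" formula are illustrative assumptions, not our tuned production values.

```python
import numpy as np

REDUNDANCY_THRESHOLD = 0.85  # illustrative cutoff, not a tuned value

def redundancy_report(new_item_vec: np.ndarray, closet: dict[str, np.ndarray]) -> dict:
    """Compare a candidate item's embedding against every owned item.

    Returns near-duplicates and a 0-100 'adds to wardrobe' score: the more
    similar the closest owned piece, the less the new item adds.
    """
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = {name: cosine(new_item_vec, vec) for name, vec in closet.items()}
    near_dupes = [name for name, s in sims.items() if s >= REDUNDANCY_THRESHOLD]
    best = max(sims.values(), default=0.0)
    return {
        "near_duplicates": near_dupes,
        "adds_to_wardrobe": round((1.0 - best) * 100),
    }
```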

How we built it

We built WearWise with a Python backend that handles all the heavy lifting: clothing classification using a fine-tuned Vision Transformer from HuggingFace, fabric and garment identification via FashionCLIP, and computer vision analysis through OpenCV. Closet data is stored and exchanged as JSON, making it easy to persist a user's full closet state and pass it between the phone and the backend. The mobile app is built in React Native, which let us ship on both iOS and Android from one codebase. The phone camera feeds directly into the pipeline: you point, shoot, and the frame is sent to the Python backend, which returns a full garment analysis in seconds. Gemini 2.5 Flash powers the natural-language advice layer on top.
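As a concrete example of the FashionCLIP step, here is a sketch of zero-shot fabric identification using the public `patrickjohncyh/fashion-clip` checkpoint through HuggingFace Transformers. The label set and prompt template are illustrative assumptions, not our exact ones.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# FashionCLIP is published as a CLIP checkpoint on the HuggingFace Hub.
model = CLIPModel.from_pretrained("patrickjohncyh/fashion-clip")
processor = CLIPProcessor.from_pretrained("patrickjohncyh/fashion-clip")

FABRICS = ["cotton", "denim", "wool", "polyester", "linen", "leather"]  # illustrative

def identify_fabric(image_path: str) -> tuple[str, float]:
    """Zero-shot fabric identification: score the frame against text prompts."""
    image = Image.open(image_path)
    prompts = [f"a garment made of {fabric}" for fabric in FABRICS]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    best = int(probs.argmax())
    return FABRICS[best], float(probs[best])
```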
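And a minimal sketch of the advice layer, assuming the `google-genai` Python SDK with an API key in the environment; the prompt wording and the `advice_for` helper are illustrative, not our production code.

```python
from google import genai

client = genai.Client()  # reads the API key from the environment

def advice_for(analysis: dict) -> str:
    """Turn the structured garment analysis into plain-language buying advice."""
    prompt = (
        "You are a wardrobe advisor. Given this garment analysis and closet "
        f"comparison as JSON, advise whether to buy the item and why:\n{analysis}"
    )
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=prompt,
    )
    return response.text
```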

Challenges we ran into

Getting reliable multi-garment detection from a single camera frame was the core technical challenge. Standard image classifiers return one label for the whole image, so we built a region-splitting system in Python that evaluates the top and bottom halves independently and merges the results, with a three-level confidence fallback so the app never returns empty on a real piece of clothing. Bridging React Native's camera output to the Python backend cleanly, and keeping the JSON responses fast enough to feel instant, required careful structuring of the data pipeline on both ends.
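Here is a simplified sketch of that region-splitting fallback. The `classify` callable stands in for the fine-tuned ViT, and the two confidence thresholds are illustrative, not the hand-tuned values we shipped.

```python
import numpy as np

# Illustrative thresholds for the three-level fallback.
HIGH_CONF, LOW_CONF = 0.70, 0.40

def classify_frame(frame: np.ndarray, classify) -> list[dict]:
    """Split the frame into top/bottom halves and classify each region.

    `classify(region) -> (label, confidence)` is the garment classifier.
    The final fallback guarantees we never return an empty result.
    """
    h = frame.shape[0]
    regions = {"top": frame[: h // 2], "bottom": frame[h // 2 :]}
    results = []
    for name, region in regions.items():
        label, conf = classify(region)
        if conf >= HIGH_CONF:      # level 1: trust the per-region result
            results.append({"region": name, "label": label, "confidence": conf})
        elif conf >= LOW_CONF:     # level 2: keep it, but flag as tentative
            results.append({"region": name, "label": label,
                            "confidence": conf, "tentative": True})
    if not results:                # level 3: fall back to a whole-frame pass
        label, conf = classify(frame)
        results.append({"region": "full", "label": label, "confidence": conf})
    return results
```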

What we learned

You can extract a surprising amount of insight from a single phone camera frame with the right Python pipeline: fabric, color, trend score, and wardrobe similarity, all from one photo, with no special hardware.
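As one example of what a single frame yields, a dominant-color estimate takes only a few lines with OpenCV's k-means. This is a sketch of the general technique, not our exact pipeline; the cluster count and criteria are illustrative.

```python
import cv2
import numpy as np

def dominant_color(image_path: str, k: int = 3) -> tuple[int, int, int]:
    """Estimate a garment's dominant color by k-means clustering its pixels."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # The largest cluster is the dominant color; convert BGR -> RGB.
    counts = np.bincount(labels.flatten())
    b, g, r = centers[counts.argmax()].astype(int)
    return int(r), int(g), int(b)
```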

Built With

Python, React Native, OpenCV, HuggingFace Transformers, FashionCLIP, Gemini 2.5 Flash, JSON
