Inspiration

Solving My Own Closet Blindness

I have enough clothes. My closet is full, yet every morning I faced the same paradox: standing in front of a mountain of fabric and feeling like I had absolutely nothing to wear. I’m not someone who follows trends or spends hours on fashion blogs, but I still want to look good and feel confident.

The gap wasn't a lack of clothes; it was a lack of information and vision. I didn't know how to match a sky blue cotton tee with anything other than jeans. I needed a way to see the potential of my own wardrobe without the manual labor of trying everything on. Dressya was born from this personal frustration—to build a Stylist in my Pocket that uses the world’s most advanced AI reasoning to make me look fashionable, effortlessly.

What it does

Dressya is a personal AI stylist designed for the fashion-indifferent. It turns a cluttered physical closet into a streamlined digital studio, helping users look their best without the mental overhead of traditional styling.

Intelligent Digitization: Users snap a quick photo of their clothes. Dressya uses the ClipDrop API to strip away messy backgrounds and the Gemini 3 Pro Image model to extract granular metadata including color, fabric, silhouette, and vibe.

Contextual Outfit Generation: Instead of just random pairings, Dressya asks: Where are you going? Whether it's a Business Meeting, Casual Outing, or Formal Event, the app’s reasoning engine analyzes your entire wardrobe to suggest a cohesive look.

The Stylist’s Reasoning: Powered by Gemini 3 Pro Preview, the app doesn't just show you clothes; it provides Style Advice. It explains why the items match, using color theory and silhouette balancing to build user confidence.

Sustainable Wardrobe Management: By focusing on Shopping your own closet, Dressya increases garment utilization. It helps users rediscover forgotten items, directly combating the environmental impact of fast-fashion overconsumption.

How I built it

A Dual-Brain Architecture

Building Dressya required a sophisticated pipeline to turn messy bedroom photos into high-fidelity fashion data. I used two distinct Gemini 3 models to handle the heavy lifting:

The Vision (Gemini 3 Pro Image): I integrated the ClipDrop API as my primary pre-processor to remove backgrounds and isolate the garment. This clean image is then fed into Gemini 3 Pro Image (internal codename Nano Banana Pro). This model acts as the eyes, extracting over 10 points of granular metadata, from silhouette to fabric weave.
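The extraction step can be sketched as a small parser over the model's structured reply. The `GarmentMetadata` fields and the `parseGarmentMetadata` helper below are my illustration of the idea, not Dressya's actual schema:

```typescript
// Sketch: parse the JSON metadata the vision model returns for a
// garment photo. Field names are illustrative; the real schema
// covers over 10 points, from silhouette to fabric weave.
interface GarmentMetadata {
  category: string;   // e.g. "t-shirt", "chinos"
  color: string;      // dominant color, e.g. "sky blue"
  fabric: string;     // e.g. "cotton", "denim"
  silhouette: string; // e.g. "slim", "boxy"
  vibe: string;       // e.g. "casual", "business"
}

function parseGarmentMetadata(modelReply: string): GarmentMetadata {
  // Models often wrap JSON in markdown fences; strip them first.
  const cleaned = modelReply.replace(/`{3}(json)?/g, "").trim();
  const data = JSON.parse(cleaned);
  for (const field of ["category", "color", "fabric", "silhouette", "vibe"]) {
    if (typeof data[field] !== "string") {
      throw new Error(`Missing or invalid field: ${field}`);
    }
  }
  return data as GarmentMetadata;
}
```

Validating each field before writing to the database keeps one bad model reply from corrupting the wardrobe data downstream.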

The Brain (Gemini 3 Pro Preview): Once the data is in Firestore, I use Gemini 3 Pro Preview for the high-level reasoning. It analyzes the entire wardrobe array to suggest outfits based on specific Event Types.
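The reasoning call can be sketched as a prompt builder over the wardrobe array read from Firestore. The `WardrobeItem` shape and the `buildOutfitPrompt` helper are illustrative assumptions, not the app's actual prompt:

```typescript
// Sketch: turn the Firestore wardrobe array plus an event type into
// a single reasoning prompt for the text model.
interface WardrobeItem {
  id: string;
  category: string;
  color: string;
  vibe: string;
}

function buildOutfitPrompt(wardrobe: WardrobeItem[], eventType: string): string {
  // Flatten the wardrobe into a compact inventory the model can scan.
  const inventory = wardrobe
    .map((item) => `- [${item.id}] ${item.color} ${item.category} (${item.vibe})`)
    .join("\n");
  return [
    "You are a personal stylist.",
    `Event: ${eventType}`,
    "Wardrobe:",
    inventory,
    "Pick a cohesive outfit using only the items above.",
    "Explain the match using color theory and silhouette balance.",
    "Reply with the chosen item ids followed by your reasoning.",
  ].join("\n");
}
```

Constraining the model to the listed item ids is what keeps suggestions grounded in clothes the user actually owns.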

Challenges I ran into

The Hidden Billing Trap: One of the biggest hurdles was managing a $100 cloud bill that appeared even while using the Free Tier API. I learned that while the AI tokens were free, the always-on infrastructure (Cloud Function minInstances) and Artifact Registry storage were draining my budget. I had to optimize my index.ts and switch to a serverless on-demand model. Unfortunately, the billing issue eventually stalled my progress: financial constraints meant I could not deploy some of the fixes I had already written.
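A minimal sketch of the fix, assuming Cloud Functions v2 (the option names come from the public `firebase-functions` API; the handler body is a placeholder):

```typescript
import { onRequest } from "firebase-functions/v2/https";

// minInstances: 0 means no always-warm instance, so no idle billing.
// The trade-off is cold-start latency on the first request after a
// quiet period, which is acceptable for a personal styling app.
export const analyzeGarment = onRequest(
  { minInstances: 0, maxInstances: 2, timeoutSeconds: 60 },
  async (req, res) => {
    // ... ClipDrop + Gemini pipeline runs here on demand ...
    res.status(200).send("ok");
  },
);
```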

Base64 Bloat: The front end was sending massive Base64 data strings. This increased latency and costs. I learned to optimize the data flow to ensure the ClipDrop-to-Gemini pipeline remained snappy and cost-effective.
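The bloat is easy to quantify: Base64 encodes every 3 raw bytes as 4 ASCII characters, so an image payload grows by roughly a third before it even leaves the device. A tiny sketch (the helper name is my own):

```typescript
// Sketch: how many extra bytes Base64 adds to a payload of rawBytes.
// Output is padded up to complete 4-character groups.
function base64Overhead(rawBytes: number): number {
  const encoded = 4 * Math.ceil(rawBytes / 3);
  return encoded - rawBytes;
}

// A 3 MB photo picks up a full extra megabyte of text on the wire,
// which is why sending raw binary (or a storage URL) is cheaper.
const extra = base64Overhead(3 * 1024 * 1024); // 1 MB of overhead
```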

Prompt Engineering vs. Reasoning: Initially, my prompts were too simple. I realized that to truly look good, I needed to evolve my instructions from "Analyze this shirt" to a multi-step reasoning path that considers color theory and vibe.

Accomplishments that I'm proud of

Dual-Model Orchestration: I successfully implemented a high-performance Dual-Brain architecture. I offloaded visual identification to Gemini 3 Pro Image and high-level stylistic reasoning to Gemini 3 Pro Preview, ensuring each model played to its specific strength.

The Clean-Room Pipeline: I built a seamless pre-processing pipeline that bridges ClipDrop and Google Genkit. This ensures that the AI sees only the garment, significantly increasing the accuracy of the metadata extraction compared to raw, unedited photos.

From Fashion-Blind to Fashion-Forward: I am most proud of turning a personal pain point—the struggle to style myself—into a functional tool. I am building an app that I actually use to get ready in the morning, proving that AI can bridge the gap between technical skill and personal style.

Zero-Shot Style Accuracy: Achieving a high degree of matching accuracy using Zero-Shot Prompting. By carefully engineering the system prompts with Style Personas, I enabled Gemini 3 to act as a world-class stylist without needing a custom-trained fashion dataset.
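A hedged sketch of what such a Style Persona system prompt might look like; the wording is illustrative, not the app's actual prompt:

```typescript
// Sketch: a zero-shot "Style Persona" system prompt. No fashion
// dataset is fine-tuned; the persona text alone steers the model
// into stylist behavior.
function stylePersonaPrompt(persona: string): string {
  return [
    `You are ${persona}, a world-class personal stylist.`,
    "Reason step by step: palette first, then silhouette, then vibe.",
    "Never suggest items the user does not own.",
    "Keep advice concrete and confidence-building.",
  ].join("\n");
}
```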

What I learned

This hackathon taught me that AI Fluency is the new superpower. It’s not just about calling an API; it’s about understanding how to orchestrate multiple models to work together. I discovered that even if you don't care about fashion, you can use technology to bridge that gap and solve a problem that contributes to both personal confidence and global sustainability by reducing waste.

What's next for Dressya

For Dressya, I intend to build more robust versions that can create a 3D model of the user and fit the matched or suggested clothes onto it when they tap "Try it on". I also plan to let the app scan the user's gallery so they can add their clothing without the hurdle of photographing items one by one, provided they have added a facial image, and to capture styling preferences at the sign-up stage. I would love for Dressya to go global and help people like me and beyond.
