Inspiration

Fashion should feel good, not just look good. We were inspired by how hard it is for many people to find clothing guidance that respects comfort, mobility, sensory needs, and sustainability at the same time. Most styling tools focus on trends or shopping; we wanted a supportive wardrobe companion that feels inclusive and practical in real life.

What it does

FashionABLE is an AI-powered smart mirror + wardrobe assistant. You can talk to it by voice or text, get outfit recommendations, and generate virtual try-on images from your webcam. It includes a wardrobe browser, imported-item support (upload image + description), a tagged look gallery with metadata, and a simple marketplace flow for giving items away. The chat panel is collapsible/resizable, and the mirror preview is draggable for a smooth demo experience.

How we built it

We built FashionABLE with React, TypeScript, Vite, and Tailwind CSS.
Core AI features use:

  • Gemini AI for stylist conversation and image generation
  • Gradium for voice (TTS/STT), with browser fallback

We structured the logic into focused hooks and services (useVoice, useStylist, useWebcam, useWardrobeCatalog) and wired persistence through Firebase where available, with a localStorage fallback for reliability.
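The persistence fallback can be sketched roughly as a store wrapper that prefers the remote backend but always keeps a local copy. The names below (KeyValueStore, FallbackStore, MemoryStore) are illustrative, not the actual project code:

```typescript
// Hypothetical sketch of the persistence fallback; not FashionABLE's real code.
interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Wraps a primary store (e.g. Firebase) and a secondary store
// (e.g. localStorage): writes mirror to both, reads prefer the primary.
class FallbackStore implements KeyValueStore {
  constructor(
    private primary: KeyValueStore,
    private secondary: KeyValueStore,
  ) {}

  async get(key: string): Promise<string | null> {
    try {
      const value = await this.primary.get(key);
      if (value !== null) return value;
    } catch {
      // Primary unavailable (offline, missing config): fall through.
    }
    return this.secondary.get(key);
  }

  async set(key: string, value: string): Promise<void> {
    // Write the local copy first so a later fallback read still works.
    await this.secondary.set(key, value);
    try {
      await this.primary.set(key, value);
    } catch {
      // Swallow primary failures; the local copy keeps the demo alive.
    }
  }
}

// In-memory store standing in for localStorage in this sketch.
class MemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  async get(key: string): Promise<string | null> {
    return this.data.get(key) ?? null;
  }
  async set(key: string, value: string): Promise<void> {
    this.data.set(key, value);
  }
}
```

Writing to the local store first means a flaky network never loses the user's wardrobe state mid-demo.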

Challenges we ran into

The hardest parts were reliability and real-time UX:

  • Voice latency and turn-taking (knowing when listening starts/ends)
  • API/provider edge cases and fallback handling
  • Keeping UI dense but usable while adding many features
  • Making AI outputs deterministic enough to map into real wardrobe items
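The last challenge above, mapping AI output onto real wardrobe items, can be handled by constraining the prompt to known item names and then resolving the model's reply deterministically on the client, discarding anything that doesn't match. A hedged sketch (all names hypothetical, not the project's actual code):

```typescript
// Illustrative sketch: resolve free-form model suggestions against
// the user's actual wardrobe catalog, dropping hallucinated items.
interface WardrobeItem {
  id: string;
  name: string;
  tags: string[];
}

// Normalize text so "Blue Denim Jacket!" and "blue denim jacket"
// resolve to the same catalog key.
function normalize(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9 ]/g, "").trim();
}

// Map each AI-suggested name to a real catalog entry; suggestions
// that match nothing the user owns are silently discarded.
function resolveSuggestions(
  suggestions: string[],
  catalog: WardrobeItem[],
): WardrobeItem[] {
  const byName = new Map(catalog.map((it) => [normalize(it.name), it]));
  const resolved: WardrobeItem[] = [];
  for (const s of suggestions) {
    const hit = byName.get(normalize(s));
    if (hit) resolved.push(hit);
  }
  return resolved;
}
```

Pairing this resolver with a prompt that lists the allowed item names keeps the stylist's output grounded in clothes the user actually has.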

Accomplishments that we're proud of

We’re proud that the experience is genuinely end-to-end: voice in, styling logic, visual try-on output, and actionable wardrobe management. We also shipped meaningful UX polish (resizable panel, draggable preview, gallery filters, compact controls) and made the system resilient with fallbacks so demos keep working under imperfect conditions.

What we learned

We learned that production-grade AI apps are mostly about orchestration, state management, and failure handling. Clear user feedback (“starting mic…”, loading states, status cues) matters as much as model quality. We also learned to combine prompt design with deterministic client logic to keep the experience consistent.
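As one example of pairing prompt design with deterministic client logic, the stylist can be asked to reply in strict JSON, which the client then validates before rendering anything. A rough sketch, assuming a hypothetical response shape:

```typescript
// Hedged sketch (not the project's real code): validate model JSON
// so a malformed reply degrades gracefully instead of breaking the UI.
interface OutfitSuggestion {
  top: string;
  bottom: string;
  note: string;
}

// Parse and validate; return null rather than trusting bad output.
function parseOutfit(reply: string): OutfitSuggestion | null {
  // Models sometimes wrap JSON in markdown fences; strip those first.
  const cleaned = reply.replace(/`{3}(?:json)?/g, "").trim();
  try {
    const data = JSON.parse(cleaned);
    if (
      typeof data?.top === "string" &&
      typeof data?.bottom === "string" &&
      typeof data?.note === "string"
    ) {
      return { top: data.top, bottom: data.bottom, note: data.note };
    }
  } catch {
    // Fall through to null on invalid JSON.
  }
  return null;
}
```

A null result can trigger a retry or a status cue in the chat panel, so the user sees feedback instead of a broken render.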

What's next for FashionABLE

Next, we want to:

  • Deepen personalization (style history, comfort profiles, context-aware recommendations)
  • Improve imported-item intelligence (auto-tagging, better garment understanding)
  • Add stronger sharing/collaboration and cross-device sync
  • Expand sustainability insights (cost-per-wear, donation impact, capsule planning)
  • Continue refining guardrails so recommendations stay accurate in specific demo and user contexts

Built With

React, TypeScript, Vite, Tailwind CSS, Gemini AI, Gradium, Firebase
