Inspiration:
Our group decided to put a fun spin on a complication people deal with every day: deciding what to wear. By some estimates, 65% of Americans feel uncertain each day about what they want to wear. We set out to remove that stress in a cool, unique way, quite literally breaking the norm. We developed an application that lets you virtually try on clothes in real time, helping people lock in a decision on what to wear and even expand their taste into new styles.
Technology Used:
- Frontend: React, TypeScript, Vite, Auth0 SPA SDK, MediaPipe Holistic, CSS
- Backend: Python, Flask, Auth0 JWT validation, Node.js + npm
- Database: local SQLite
- External APIs: Cloudinary API, Fashn AI API, OpenWeatherMap API, ElevenLabs API, RemoveBg API
The Features:
FitDeck lets you generate real combined outfits using the Fashn AI API. You pick and view your outfits in a fun, Tinder-style deck controlled by neural hand tracking: swiping left discards an outfit, swiping right saves it, and balling your hand into a fist generates a photo of you wearing the selected outfit, again through Fashn AI's API (see the gesture sketch below). If thinking of which clothing items to add to FitDeck feels overwhelming, we supply a basic clothing catalog so you can test new styles and see how they look on you without going in store to try them on.

We have also integrated a beta chatbot that appears when you call it, similar to Iron Man's "Hey, Jarvis!". Say "Hey FitBot!" and it pops up to help you while you browse outfits, so you don't have to walk back to the computer every time you have a question. The chatbot runs on a Gemma model and speaks back using an ElevenLabs voice model. One example we tested: "Hey FitBot! Is this outfit appropriate given the weather in Mississauga?", and it replied accordingly. This model still needs more testing and optimization, as we explain in the "Challenges We Ran Into" section.
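As a rough illustration of the gesture logic, here is a minimal TypeScript sketch of how hand landmarks from MediaPipe's 21-point hand model could be mapped to swipe and fist gestures. The landmark indices are the standard MediaPipe ones, but the thresholds, class names, and overall structure are our own illustrative assumptions, not FitDeck's exact implementation.

```typescript
// Landmark shape as returned by MediaPipe Holistic/Hands (normalized 0..1 coordinates).
interface Landmark { x: number; y: number; z: number; }

type Gesture = "swipe-left" | "swipe-right" | "fist" | "none";

// Fingertip and knuckle indices from the standard 21-point MediaPipe hand model.
const FINGERTIPS = [8, 12, 16, 20];
const KNUCKLES = [5, 9, 13, 17];

// A fist: every fingertip sits roughly on top of its knuckle (threshold is illustrative).
function isFist(hand: Landmark[], maxDist = 0.07): boolean {
  return FINGERTIPS.every((tip, i) => {
    const dx = hand[tip].x - hand[KNUCKLES[i]].x;
    const dy = hand[tip].y - hand[KNUCKLES[i]].y;
    return Math.hypot(dx, dy) < maxDist;
  });
}

// Track the wrist's x-position over the last few frames to detect a swipe.
export class GestureDetector {
  private history: { x: number; t: number }[] = [];

  update(hand: Landmark[] | null, now = performance.now()): Gesture {
    if (!hand) { this.history = []; return "none"; }
    if (isFist(hand)) { this.history = []; return "fist"; }

    const wrist = hand[0];
    this.history.push({ x: wrist.x, t: now });
    // Keep roughly 400 ms of history.
    this.history = this.history.filter((s) => now - s.t < 400);

    const dx = wrist.x - this.history[0].x;
    // Normalized coords: moving more than ~25% of the frame width counts as a swipe.
    if (dx > 0.25) return "swipe-right";
    if (dx < -0.25) return "swipe-left";
    return "none";
  }
}
```

In practice the returned gesture would drive the deck UI: "swipe-left" discards the card, "swipe-right" saves it, and "fist" triggers the try-on generation.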
How We Built It:
We built FitDeck as a full-stack web app that combines a modern React frontend with a Flask API backend. The experience is centered around three pillars: wardrobe + outfit building, FitBot (voice-first styling assistant), and virtual try-on / gesture-based interaction.
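To show how the two halves talk to each other, here is a minimal sketch of the frontend calling the Flask API with an Auth0 access token attached so the backend can validate the JWT. The /api/wardrobe route and the WardrobeItem fields are placeholder names we are assuming for illustration, and getToken would typically wrap the Auth0 SPA SDK's getTokenSilently().

```typescript
// Hypothetical wardrobe item shape; field names are illustrative, not FitDeck's real schema.
interface WardrobeItem { id: number; name: string; imageUrl: string; category: string; }

// Fetch the user's wardrobe from the Flask API, attaching the Auth0 token
// as a Bearer header so the backend's JWT validation can accept the request.
export async function fetchWardrobe(getToken: () => Promise<string>): Promise<WardrobeItem[]> {
  const token = await getToken();
  const res = await fetch("/api/wardrobe", {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Wardrobe request failed: ${res.status}`);
  return res.json();
}
```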
Challenges We Ran Into:
Realistic try-on is GPU- and provider-dependent and introduces asynchronous workflows (polling, image upload, result hosting). As we realized during development, integrating all of this cleanly while keeping the UI feeling real-time was more than we could fully polish within the scope of this hackathon.
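To make the async workflow concrete, here is a minimal sketch of the submit-then-poll pattern described above. The /api/tryon endpoints, the job status fields, and the timings are hypothetical names we are assuming for illustration; they are not Fashn AI's actual API or FitDeck's exact backend routes.

```typescript
// Hypothetical job status shape returned by our Flask backend while it proxies the try-on provider.
interface TryOnStatus { status: "pending" | "processing" | "done" | "failed"; resultUrl?: string; }

// Submit a try-on generation job, then poll until a hosted result image is ready.
export async function generateTryOn(userPhotoUrl: string, outfitId: number): Promise<string> {
  // Kick off the job on the backend.
  const submit = await fetch("/api/tryon", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userPhotoUrl, outfitId }),
  });
  const { jobId } = await submit.json();

  // Poll every 2 seconds, up to ~1 minute, so the UI can show progress in the meantime.
  for (let attempt = 0; attempt < 30; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const res = await fetch(`/api/tryon/${jobId}`);
    const job: TryOnStatus = await res.json();
    if (job.status === "done" && job.resultUrl) return job.resultUrl;
    if (job.status === "failed") throw new Error("Try-on generation failed");
  }
  throw new Error("Try-on generation timed out");
}
```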
Accomplishments We’re Proud Of:
We built a full-stack web application with working neural-network hand tracking and augmented-reality try-on within 36 hours, and turned outfit decision-making from a stressful chore into an interactive, convenient AI tool.
What We Learned:
We learned how to use the Fashn.ai API for image generation and how to fine-tune specific image input contexts. We discovered how many options you have when building a user interface and how far interactivity can be pushed. Most of all, we practiced building something users actually feel the need to use because it solves a daily problem.
What’s Next For FitDeck:
Next, we want to turn FitDeck from a strong prototype into a polished product by tightening the loop: upload wardrobe items, build fits faster, and make recommendations feel a lot more personal. That means improving the wardrobe (better background removal, auto-tagging category/color/material), enriching each item with metadata, and adding a smarter outfit engine that learns from saves, skips, and feedback so suggestions get better over time.
On the FitBot side, we’d expand beyond “chat” into a real styling copilot: proactive suggestions based on weather, calendar/occasion, and what’s actually in your closet, with more reliable voice wake. We’d also make model selection and provider fallbacks better so changes in third‑party model availability don’t break the experience, and we’d add guardrails around latency, cost, and response quality.
Finally, for virtual try-on, the next step is accuracy and realism. We’d build a sizing/calibration flow, improve hand/pose tracking stability, and refine the try-on pipeline so results are consistent across lighting and camera angles. We’d also streamline the “generate” flow with better progress feedback, caching, and a gallery of saved try-on results, so users can compare looks, share them, and return later without re-generating.
Built With
- auth0
- cloudinary
- elevenlabs
- fashn.ai
- flask
- mediapipe
- ogl
- openweathermap
- pillow
- python
- react18
- sql
- tailwind
- three.js
- typescript
- vite
- zustand