Inspiration
Have you ever bought a cute top only to have it sit in your closet because you don’t know what to wear it with? We’ve all experienced the “I have nothing to wear” dilemma — not because we lack clothes, but because styling them is time-consuming, uncertain, and often frustrating.
What it does
wearIt helps users generate outfit ideas based on an item from their wardrobe. By uploading a photo of a clothing item, the app analyzes the image using Google Cloud Vision and Together AI to identify the garment type, color palette, and aesthetic vibe. Then, using weather data and natural language generation, it creates a smart query to search Pexels for real outfit inspirations tailored to the user’s current environment and style.
How we built it
We built wearIt as a full-stack, mobile-first styling assistant using React Native with Expo, designed for seamless cross-platform deployment. The app flow is driven by user-uploaded wardrobe photos, which are processed through a multi-stage AI pipeline: uploaded images are first sent to the Google Cloud Vision API for label detection and color analysis, and those results are then transformed into descriptive, fashion-focused prompts using Together AI’s Mistral-7B language model, letting us generate concise, context-aware outfit descriptions. We integrate real-time location and weather data from Open-Meteo so that the generated search queries reflect current conditions; the tailored outfit query is then sent to the Pexels API, which returns a curated collection of real-world outfit images matching both the item and the context.

We also implemented ReadyPlayerMe avatar generation to offer a virtual try-on experience: users can create avatars resembling themselves and preview how different outfit combinations would look, making the styling process more immersive, inclusive, and empowering. To support product discovery, we integrated the Google Shopping API via SerpAPI, returning real-time product recommendations based on the items users virtually try on. These results are fetched through Promise-based parallel API calls, then aggregated, formatted, and styled for display.

All wardrobe images are uploaded to AWS S3, navigation and app state are managed with React Navigation, and Expo’s asset management pipeline is optimized for mobile responsiveness and performance. The result blends CV, NLP, weather intelligence, virtual try-ons, and real product discovery into one cohesive mobile experience, available at your fingertips.
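The query-building step can be sketched as a small pure function that combines Vision labels, the dominant color, and Open-Meteo weather into a Pexels search string. This is an illustrative sketch, not our exact app logic: the garment keyword list, temperature thresholds, and field names like `temperatureC` are our own simplifications.

```javascript
// Sketch: turn Vision labels + dominant color + weather into a Pexels
// search query. The heuristic and field names are illustrative.
function buildOutfitQuery(visionLabels, dominantColor, weather) {
  // Keep only clothing-related labels Vision commonly returns
  const garmentTerms = ['top', 'shirt', 'dress', 'skirt', 'jeans',
                        'jacket', 'sweater', 'coat'];
  const garment = visionLabels
    .map(l => l.description.toLowerCase())
    .find(d => garmentTerms.some(t => d.includes(t))) || 'outfit';

  // Weather-aware styling hint (Open-Meteo reports temperature in °C)
  const season = weather.temperatureC < 10 ? 'winter layered'
               : weather.temperatureC < 20 ? 'light jacket'
               : 'summer casual';

  return `${dominantColor} ${garment} ${season} outfit street style`;
}

console.log(buildOutfitQuery(
  [{ description: 'Sweater', score: 0.97 }, { description: 'Sleeve', score: 0.9 }],
  'beige',
  { temperatureC: 8 }
));
// prints "beige sweater winter layered outfit street style"
```

Keeping this step as a deterministic function (rather than asking the LLM for the final query directly) made it easier to guarantee the string sent to Pexels was always well-formed.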
Challenges we ran into
- Integrating LLMs into a mobile workflow: While Together AI’s Mistral-7B model is powerful, adapting it into a real-time React Native pipeline was complex. We had to carefully engineer prompts to ensure meaningful, short-form output that could be reliably passed to other APIs without hallucinations or extra formatting noise.
- Parsing LLM outputs for downstream APIs: Since LLMs return freeform text, extracting clean, actionable data required building fallback logic to prevent broken queries or irrelevant results. We had to balance creativity with structure.
- Managing async chaining of multiple APIs: Our app relies on a sequence of dependent API calls — Google Vision → LLM prompt → weather fetch → Pexels → product recommendation. Timing, error handling, and state management were particularly tricky, especially with Expo’s asynchronous rendering and mobile performance constraints.
- Cross-platform image handling: Uploading, converting, and passing image blobs from the camera roll to AWS S3 introduced multiple layers of complexity between iOS and Android handling.
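The async-chaining and fallback challenges above can be illustrated with a small helper that wraps each dependent call, so one failing service degrades the result instead of breaking the whole flow. The step functions below are stubs standing in for our real Vision, LLM, Open-Meteo, and Pexels calls; the helper itself is a sketch of the pattern, not our exact implementation.

```javascript
// Illustrative helper: run one pipeline step, falling back to a safe
// default if the underlying API call fails.
async function runStep(name, fn, fallback) {
  try {
    return await fn();
  } catch (err) {
    console.warn(`${name} failed (${err.message}); using fallback`);
    return fallback;
  }
}

// Stub pipeline: here the weather step fails and the rest succeed.
async function demoPipeline() {
  const labels  = await runStep('vision',  async () => ['sweater'], []);
  const prompt  = await runStep('llm',     async () => `${labels[0]} outfit`,
                                'casual outfit');
  const weather = await runStep('weather', async () => { throw new Error('timeout'); },
                                { temperatureC: 20 });
  return `${prompt} for ${weather.temperatureC}°C`;
}

demoPipeline().then(console.log);
// logs a warning for the weather step, then prints "sweater outfit for 20°C"
```

Centralizing the try/catch per step also gave us one place to log which service in the chain actually failed, which was invaluable while debugging on-device.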
Accomplishments that we're proud of
- Built a full-stack, AI-powered fashion app in under 24 hours, complete with real-time image upload, wardrobe analysis, outfit generation, and product recommendations.
- Successfully integrated multiple advanced APIs including Google Cloud Vision, Together AI’s Mistral-7B, Pexels, SerpAPI for Google Shopping, and AWS S3 within a React Native mobile app.
- Used an LLM to dynamically generate smart, fashion-relevant search queries based on real wardrobe photos, enabling a pipeline from item analysis → query generation → curated outfit results.
- Made outfit suggestions context-aware by incorporating real-time location and weather data so styling suggestions actually make sense for your current day.
- Implemented avatar-based virtual try-ons using the ReadyPlayerMe API, allowing users to generate a digital model of themselves and preview how outfits might look based on their body type and style.
- Designed wearIt to be inclusive, accessible, and empowering, focusing on body positivity, self-expression, and real-life wearability, not trends or fast fashion.
- Engineered a smooth and aesthetic user flow across multiple screens in React Native, including image uploads, async fetches, AI analysis, and styled result displays, all optimized for mobile.
What we learned
- How to use LLMs (like Mistral-7B) to turn visual analysis results into meaningful, structured text outputs.
- Best practices for integrating and chaining APIs in a mobile environment.
- Working with AWS S3 and React Native's file system for image handling.
- Handling app state and navigation with Expo and React Navigation.
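One concrete S3 lesson was keeping the object key and content type derivation deterministic before handing the file off for upload. A minimal sketch, assuming a hypothetical bucket name and key scheme (both illustrative, not our production values):

```javascript
// Sketch: derive S3 upload parameters from a camera-roll URI.
// Bucket name and "wardrobe/<user>/<timestamp>" key scheme are
// illustrative assumptions.
function s3UploadParams(localUri, userId, timestamp = Date.now()) {
  const ext = localUri.split('.').pop().toLowerCase();
  const contentType = ext === 'png' ? 'image/png' : 'image/jpeg';
  const key = `wardrobe/${userId}/${timestamp}.${ext}`;
  return { Bucket: 'wearit-wardrobe', Key: key, ContentType: contentType };
}

console.log(s3UploadParams('file:///DCIM/IMG_001.PNG', 'u1', 123));
// prints { Bucket: 'wearit-wardrobe', Key: 'wardrobe/u1/123.png', ContentType: 'image/png' }
```

Computing the extension and content type up front helped smooth over the iOS/Android differences in how camera-roll URIs are reported.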
What's next for wearIt
- Adding user accounts and a persistent digital closet to save uploaded items.
- Integrating style preferences (casual, edgy, formal, etc.) for even more tailored recommendations.
- Expanding our AI model to include event-based suggestions like “picnic” or “date night.”
Built With
- axios
- expo.io
- google-cloud
- javascript
- mistral-7b-instruct-v0.1
- pexels
- react-native
- readyplayerme
- s3
