Inspiration
Fashion is a powerful form of self-expression, but choosing what to wear isn’t always easy. There are moments when you want to present yourself differently, whether it’s dressing up for a networking event, going out with friends, attending a party, or simply matching a new vibe or aesthetic. Even with a full closet, decision fatigue and uncertainty often get in the way.
We built Sensibility to help people confidently express their identity through fashion by making outfit decisions effortless, personalized, and adaptive over time.
What it does
Sensibility is an AI-powered fashion assistant that recommends outfits tailored to a user’s personal style and aesthetic.
Users can:
- Take and upload photos of their own clothing
- Choose or describe an aesthetic or occasion
- Receive outfit recommendations using items they already own
- Rate outfits so the system learns and improves over time
- View a profile page with a descriptive analysis of their style, including what they like and dislike, which the AI uses to adapt its recommendations
- Receive a wardrobe gap analysis highlighting what's missing from their closet
The more users interact with Sensibility, the better it understands their preferences, creating increasingly personalized recommendations.
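The adaptive loop above can be sketched in a few lines. This is a minimal illustration, not our production code: it assumes outfit embeddings (standing in for the CLIP embeddings our pipeline produces) and keeps a running average of positively rated outfits as the user's style profile.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class StyleProfile:
    """Toy preference model: a running mean of liked-outfit embeddings.

    In the real pipeline the embeddings would come from CLIP and the
    profile would persist in adaptive memory; here everything is in-process.
    """

    def __init__(self, dim):
        self.centroid = [0.0] * dim
        self.count = 0

    def rate(self, embedding, liked):
        # Only positively rated outfits pull the profile toward them.
        if not liked:
            return
        self.count += 1
        self.centroid = [
            c + (e - c) / self.count
            for c, e in zip(self.centroid, embedding)
        ]

    def rank(self, candidates):
        """Order candidate outfits by similarity to the learned profile."""
        return sorted(
            candidates,
            key=lambda e: cosine(self.centroid, e),
            reverse=True,
        )
```

Each rating nudges the centroid, so recommendations drift toward the user's demonstrated taste over time.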
How we built it
We built Sensibility as a full-stack, mobile-first application using:
- Frontend: React Native with Expo (TypeScript)
- Backend: FastAPI (Python)
- API layer: REST endpoints connecting the frontend and backend
- Database: Supabase
- Computer Vision: CLIP (vision-language similarity and style matching) and BLIP (image captioning and garment understanding)
- Large Language Models: OpenAI GPT-4o-mini (style labelling, explanations, recommendations) and Anthropic Claude 3 Haiku (behavioural pattern and preference analysis)
- Machine Learning: PyTorch
- AI Orchestration & Memory: Backboard.io for multi-model switching and adaptive memory
This multi-model pipeline allows us to select the best AI model for each task, from image understanding to behavioural analysis.
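At its core, per-task model selection is a routing table. The sketch below is illustrative only: the task names and handler shape are hypothetical, and in our actual pipeline the calls go through Backboard.io rather than a local dict.

```python
# Hypothetical task-to-model routing table mirroring the stack above.
# The model names are real; the routing mechanism is a simplified stand-in
# for the switching that Backboard.io performs in our pipeline.
ROUTES = {
    "caption_image": "blip",
    "match_style": "clip",
    "explain_outfit": "gpt-4o-mini",
    "analyze_preferences": "claude-3-haiku",
}

def route(task: str) -> str:
    """Return the model best suited to a given task."""
    try:
        return ROUTES[task]
    except KeyError:
        raise ValueError(f"no model registered for task {task!r}")
```

Keeping the mapping explicit makes it cheap to swap a model for one task without touching the rest of the pipeline.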
Challenges we ran into
- Learning how the CLIP model works and how to effectively use vision-language embeddings
- Building a React Native app for the first time
- Working with computer vision, which was new to our team
- Integrating the frontend and backend, including handling network and URL configuration issues
- Understanding and properly implementing Backboard.io for model switching and persistent memory
Despite these challenges, we successfully built a functioning end-to-end product.
Accomplishments that we're proud of
- Building a fully functional mobile app from scratch
- Successfully integrating multiple AI models into a single pipeline
- Implementing adaptive memory so the system learns from user interactions
- Learning and using Backboard.io effectively for AI orchestration
- Creating a real, usable product that improves with every interaction
What we learned
- React Native and mobile app development
- Computer vision fundamentals with CLIP and BLIP
- Multi-model AI orchestration using Backboard.io
- Designing AI systems that learn from behavioural data, not just static inputs
- Bridging frontend UX with backend AI systems
What's next for Sensibility
Next, we plan to expand Sensibility with deeper style analytics and long-term style evolution tracking. Our goal is for Sensibility to not just recommend outfits, but to grow alongside the user, understanding how their identity, preferences, and style change over time.
Future features include:
- Advanced style insights and wardrobe gap analysis
- Virtual try-on experiences
- Long-term trend tracking and personalized style evolution dashboards
- Online discovery of suggested items that match the user's style
Here is our video: https://youtube.com/shorts/QHlproECX-0 (mirror: https://drive.google.com/file/d/1nT3vSI0piPgZ2Xu92nDlrDeDo4sFsbJP/view?usp=sharing)
Built With
- backboard
- blip
- claude
- clip
- llm
- openai
- python
- pytorch
- react-native-expo
- restapi
- supabase
- typescript