Inspiration
Fashion decisions are inherently visual, driven by color harmony, contrast, and personal style. These elements are not universal - the same color can look striking on one person and unflattering on another, making complexion and individual features essential to meaningful styling.
Most existing fashion tools rely on manual tagging or generic recommendations and fail to truly understand clothing images or how colors interact with different individuals. We were inspired to explore how multimodal AI could visually reason about outfits the way humans do, by understanding both the person and the clothing together.
The Google Hackathon gave us the opportunity to build this using Gemini 3, enabling an intelligent and explainable styling system rather than one-size-fits-all recommendations.
What it does
LookBook AI allows users to upload clothing images and receive personalized, color-aware fashion insights. The system analyzes dominant colors, style cues, and visual compatibility directly from images to help users make better styling decisions without relying on predefined rules.
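The write-up doesn't specify how dominant colors are extracted, but the core idea can be illustrated with a minimal sketch: coarsely quantize pixel values so near-identical shades fall into the same bin, then count the bins. The pixel data and bucket size below are illustrative placeholders, not the project's actual pipeline.

```python
from collections import Counter

def dominant_colors(pixels, bucket=64, top=3):
    """Return the most common coarsely quantized RGB colors.

    pixels: iterable of (r, g, b) tuples (e.g. from a decoded image).
    bucket: quantization step; larger values merge more similar shades.
    """
    quantized = [tuple((c // bucket) * bucket for c in px) for px in pixels]
    return [color for color, _ in Counter(quantized).most_common(top)]

# Hypothetical pixel data standing in for a decoded clothing image:
sample = [(250, 10, 12), (248, 8, 15), (10, 10, 200), (12, 9, 205), (251, 12, 10)]
palette = dominant_colors(sample, top=2)  # reds and blues collapse into two bins
```

In practice the same counting step would run over real image pixels (e.g. decoded with Pillow), and the resulting palette feeds the color-compatibility reasoning.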
How we built it
We built LookBook AI using a Gemini 3–powered multimodal pipeline that processes clothing images and reasons across visual and textual features. The model analyzes colors, contrast, and clothing categories to generate real-time, explainable fashion insights focused on visual understanding.
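As a rough sketch of how such a pipeline packages an image and a text prompt together, the snippet below builds a request body in the shape the Gemini `generateContent` REST API expects, with an inline image part followed by a text part. The prompt wording is illustrative, not the project's actual prompt, and no specific model name is assumed.

```python
import base64

def build_styling_request(image_bytes: bytes, mime_type: str = "image/jpeg") -> dict:
    """Package a clothing image plus a styling prompt for generateContent.

    The request interleaves an inline-data image part with a text part,
    so the model reasons over the visual and textual input together.
    """
    return {
        "contents": [{
            "parts": [
                {"inline_data": {
                    "mime_type": mime_type,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                {"text": (  # illustrative prompt, not the project's exact wording
                    "Identify the dominant colors and style of this garment, and "
                    "explain which complexions and color pairings it flatters."
                )},
            ],
        }],
    }

# The resulting payload would be POSTed to the generateContent endpoint
# with an API key, e.g. via requests or urllib.
```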
Challenges we ran into
One of the main challenges was ensuring consistent outputs across visually similar images, since small prompt variations could significantly change the results. Another was balancing response speed against deeper multimodal reasoning while keeping the insights clear and user-friendly.
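A common way to tame this kind of run-to-run variance, noted here as a general technique rather than the exact fix we shipped, is to pin the sampling temperature to 0 and constrain the model to a fixed JSON shape via the Gemini API's structured-output support. The field names in this REST-style `generationConfig` fragment are illustrative:

```python
# Deterministic-ish sampling plus a response schema keeps outputs
# structurally identical across runs. Field names are illustrative.
generation_config = {
    "temperature": 0.0,  # greedy decoding minimizes sampling variance
    "responseMimeType": "application/json",
    "responseSchema": {
        "type": "OBJECT",
        "properties": {
            "dominant_colors": {"type": "ARRAY", "items": {"type": "STRING"}},
            "style": {"type": "STRING"},
            "advice": {"type": "STRING"},
        },
        "required": ["dominant_colors", "style", "advice"],
    },
}
```

Constraining the output shape doesn't make the model's judgments identical, but it removes formatting drift, which makes the remaining variation much easier to measure and debug.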
Accomplishments that we're proud of
We successfully built an end-to-end AI system that goes beyond basic image recognition to deliver intelligent, color-aware fashion analysis. Integrating Gemini 3 into a real, user-facing application and generating meaningful, explainable insights was a major achievement.
What we learned
Through this project, we gained hands-on experience with multimodal generative AI and image-based reasoning. We learned how Gemini 3 interprets images alongside text and how important prompt design, output consistency, and explainability are when building AI-powered applications.
What's next for LookBook AI
We plan to expand LookBook AI with advanced outfit matching, personalized style profiles, and seasonal recommendations that adapt to changing trends and contexts. Future improvements will also focus on consistency across recommendations, support for multiple image inputs, and deeper visual understanding of outfits.
We also aim to introduce an intelligent styling agent that users can interact with directly - allowing them to ask whether certain colors or clothing pieces work well together, explore creative combinations, and build cohesive looks. This agent would act as a personal stylist, helping users not only follow trends but create their own.
As social media increasingly turns everyday users into fashion influencers, LookBook AI has the potential to become a creative partner - empowering individuals to experiment confidently, express their style, and translate AI-powered fashion intelligence into real-world impact.