Inspiration
As generative AI becomes more prevalent, artists are increasingly seeing their work replicated, remixed, or mimicked without credit, consent, or visibility. While AI-generated art tools grow more powerful, the original creators whose styles inspire these systems are often left invisible. We wanted to flip that dynamic. Muse was born from the idea of giving power back to artists. Instead of using AI to replace artists, we use AI to trace style back to people. If someone encounters an artwork and feels drawn to its visual language, Muse helps answer a different question than “how do I generate more of this?”: Who actually creates work like this?
What it does
Our app helps users find artists by matching them based on visual style, not just tags or keywords. Users can discover artists in multiple ways:
- Upload reference images to find artists with similar visual styles
- Upload multiple images to build a personalized taste profile that represents what they like overall
- Describe the kind of art they want in natural language, which the system translates into visual style and subject signals
- Browse AI-generated style clusters to explore artists without writing a query
Under the hood, we model artistic style using three core visual factors: color, texture, and structure. These features are extracted directly from artwork images. The system learns which of these factors are most consistent within each artist’s portfolio and treats them as more important for defining that artist’s style. Users can then adjust sliders to control how much each factor matters to them personally. Results are re-ranked in real time, with a short explanation of why the ordering changed. When users search, the system combines:
- a learned global definition of artistic style
- a personalized taste profile derived from user uploads or descriptions
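The slider-driven re-ranking described above can be sketched as a weighted sum over the three style factors. This is a minimal illustration, not Muse's actual scoring code: the artist names, scores, and normalization scheme are all assumptions.

```python
# Hypothetical sketch of slider-based re-ranking over the three style
# factors (color, texture, structure). Names and numbers are illustrative.

def rerank(artists, slider_weights):
    """Re-rank artists by a weighted sum of per-factor style scores.

    `artists` maps artist name -> per-factor match scores in [0, 1];
    `slider_weights` holds the user's slider settings for the same factors.
    """
    total = sum(slider_weights.values()) or 1.0
    weights = {k: v / total for k, v in slider_weights.items()}  # normalize sliders
    scored = {
        name: sum(weights[f] * scores[f] for f in weights)
        for name, scores in artists.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

artists = {
    "A": {"color": 0.9, "texture": 0.2, "structure": 0.4},
    "B": {"color": 0.3, "texture": 0.8, "structure": 0.7},
}
# Emphasizing color favors A; emphasizing texture favors B.
print(rerank(artists, {"color": 1.0, "texture": 0.1, "structure": 0.1}))  # ['A', 'B']
print(rerank(artists, {"color": 0.1, "texture": 1.0, "structure": 0.5}))  # ['B', 'A']
```

Because the weighted sum is cheap to recompute, moving a slider can reorder results in real time without re-querying the vector store.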
How we built it
We built a cross-modal art discovery platform using a modern full-stack architecture that combines machine-learning embeddings, natural-language understanding, and a responsive React frontend.
Backend (FastAPI + Python)
- CLIP (ViT-B/32) generates shared image and text embeddings, enabling true cross-modal search (image ↔ text).
- Qdrant stores artwork embeddings and powers fast cosine-similarity retrieval.
- Gemini 1.5 Flash parses natural-language descriptions into structured visual attributes and generates human-readable explanations for why artists match a query.
- Multi-factor ranking combines semantic similarity, style and subject tags, object overlap, and style category matching.
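The retrieval core reduces to cosine similarity in a shared embedding space. The sketch below shows the principle with tiny 4-d vectors standing in for real 512-d CLIP ViT-B/32 embeddings; in production this ranking is what Qdrant performs, and the artwork ids are invented for illustration.

```python
import math

# Toy cosine-similarity search over artwork embeddings. The 4-d vectors
# stand in for 512-d CLIP embeddings; ids and values are made up.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query, artworks, top_k=2):
    """Return the top_k artwork ids most similar to the query embedding."""
    ranked = sorted(artworks, key=lambda art: cosine(query, art["vec"]), reverse=True)
    return [art["id"] for art in ranked[:top_k]]

artworks = [
    {"id": "impressionist-1", "vec": [0.9, 0.1, 0.0, 0.1]},
    {"id": "cubist-1",        "vec": [0.1, 0.9, 0.2, 0.0]},
    {"id": "impressionist-2", "vec": [0.8, 0.2, 0.1, 0.1]},
]
# Because CLIP embeds images and text in the same space, the query vector
# can come from either an uploaded image or an encoded text description.
print(search([1.0, 0.0, 0.0, 0.0], artworks))  # ['impressionist-1', 'impressionist-2']
```

This same-space property is what makes the search cross-modal: one index serves both image uploads and natural-language queries.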
Frontend (React + Vite)
- Supports image-based search and natural-language discovery.
- Includes a Browse Styles view for AI-discovered categories.
- Uses React Router for navigation and session storage to persist search state.
- Results update dynamically with similarity scores and Gemini-generated explanations.
Data Architecture
We initially explored building our dataset through web scraping and implemented ingestion code to collect artwork and metadata automatically. However, we found that many platforms either lacked open access, had restrictive terms, or did not provide reliable, structured data suitable for ethical use. As a result, we pivoted to a curated artist database, containing artwork samples, style and subject tags, style categories, and social metadata. This approach allowed us to ensure data quality, transparency, and ethical sourcing while enabling both semantic (embedding-based) and keyword-based matching for accurate, interpretable results.
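A curated record in this scheme bundles everything both search paths need: artwork samples for embeddings, tags for keyword matching, and metadata for attribution. The field names below are illustrative assumptions, not the actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one curated artist record; field names are
# illustrative, not Muse's real schema.

@dataclass
class ArtistRecord:
    name: str
    artwork_urls: list = field(default_factory=list)   # samples to embed with CLIP
    style_tags: list = field(default_factory=list)     # e.g. "watercolor"
    subject_tags: list = field(default_factory=list)   # e.g. "portrait"
    style_category: str = ""                           # AI-discovered style cluster
    social: dict = field(default_factory=dict)         # portfolio / social links

artist = ArtistRecord(
    name="Example Artist",
    artwork_urls=["https://example.com/art1.png"],
    style_tags=["watercolor", "soft-light"],
    subject_tags=["portrait"],
    style_category="dreamy-figurative",
    social={"instagram": "@example"},
)
print(artist.name, artist.style_tags)
```

Keeping tags alongside embeddings is what lets the ranker blend semantic similarity with interpretable keyword overlap.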
Challenges we ran into
One major challenge was designing the machine learning heuristics and ranking logic behind our recommendations. Deciding how to weight different signals—semantic similarity from embeddings, style tags, subject overlap, and visual attributes—required experimentation and intuition rather than clear-cut answers.
Small changes in weighting could significantly affect results, and there was no single “correct” configuration. Balancing accuracy, interpretability, and consistency under hackathon time constraints was especially difficult.
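That sensitivity can be shown with a toy example. Assuming a simple linear combination of signals (the real scoring logic is more involved, and the artist names and numbers here are invented), a small bump to one weight is enough to flip the ordering:

```python
# Toy illustration of weight sensitivity in multi-signal ranking.
# Signal names, scores, and the linear combination are assumptions.

def score(signals, weights):
    return sum(weights[name] * value for name, value in signals.items())

candidates = {
    "artist_x": {"semantic": 0.85, "style_tags": 0.30, "subject": 0.40},
    "artist_y": {"semantic": 0.70, "style_tags": 0.90, "subject": 0.50},
}

def rank(weights):
    return sorted(candidates, key=lambda a: score(candidates[a], weights), reverse=True)

# Weighting embedding similarity heavily puts artist_x first...
print(rank({"semantic": 1.0, "style_tags": 0.2, "subject": 0.2}))  # ['artist_x', 'artist_y']
# ...but nudging the tag weight from 0.2 to 0.5 flips the ordering.
print(rank({"semantic": 1.0, "style_tags": 0.5, "subject": 0.2}))  # ['artist_y', 'artist_x']
```

With no ground-truth labels for "feels right," choosing among such configurations came down to eyeballing results and product judgment.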
This forced us to think carefully about how machine learning systems often rely on practical heuristics and trade-offs, not just models, and how much product judgment is involved in making AI outputs feel right.
Accomplishments that we're proud of
We’re proud of building a fully functional cross-modal art discovery system end-to-end within a hackathon timeframe. The platform successfully connects natural-language descriptions and images to real artists through a unified embedding space.
We’re especially proud of reframing AI as a tool for artist attribution and discovery rather than generation, using machine learning to route attention back to creators instead of replacing them. This is a project we chose out of interest and passion, and we believe that shows in our work.
Furthermore, as relatively new hackers, we found this an extremely challenging but fulfilling project, and we are super happy to have finished it.
What we learned
We learned that building effective AI systems goes far beyond choosing the right models. Much of the real work lies in designing heuristics, trade-offs, and ranking logic that make results feel intuitive and trustworthy to users.
From a product perspective, we learned that AI features are most impactful when paired with a clear ethical intention. Framing our system around artist discovery and attribution shaped both our technical decisions and user experience.
What's next for Muse
Next, we want to expand Muse’s ability to support and empower artists directly by allowing creators to claim profiles, link their work, and control how their styles are represented. Users would be able to reach out to artists and complete actual commissions, with all interactions and transactions handled through Muse. On the technical side, we plan to scale beyond a curated dataset, ingesting larger and more diverse artist collections while improving clustering and ranking robustness. We attempted this through web scraping, but it was hard to find open-source art galleries that fit our project.

