Patina
A creative canvas that analyzes your aesthetic taste as you curate, then searches the web to surface more content for you to explore.
Inspiration
As a photographer and aspiring creative, finding references and high-quality content to consume is incredibly important to me, but interesting content online spans a wide range of modalities, which makes it hard to consolidate in one place: screenshots, links, color palettes, songs, and typefaces scattered across dozens of browser tabs. Tools like Pinterest support a subset of these modalities, but they treat your references like static bookmarks. They don't understand what ties your collections together, and they can't help you find what's missing.
We wanted to build a tool where the act of curating is itself a creative input. Drop in the things that inspire you, and the system extracts the underlying aesthetic and uses that understanding to actively help you explore and create.
Discovery shouldn't be the bottleneck; developing your taste is.
What it does
Patina is an infinite canvas mood board where every reference you add contributes to a living "vibe profile." You can drop in images, paste URLs to articles or portfolios, embed YouTube videos and Spotify tracks, preview Google Fonts, or just type raw text. Each piece is analyzed by Claude to extract its aesthetic qualities and how it contributes to the board's creative direction.
As your board grows, Patina continuously recomputes a composite vibe from all your references, weighted by proximity to the center of your canvas. This composite vibe powers several features:
- Vibe-aware discovery: An AI-powered discovery deck suggests new references — images, articles, music, typography — that complement your existing collection's aesthetic. It uses Perplexity Sonar to search across domains you wouldn't think to look in.
- Vibe-aware search: Search the web through the lens of your board's aesthetic. Ask for "brutalist architecture" and only content that suits your board is returned.
- Deep interviews: The discovery system interviews you about your creative intent, asking targeted questions to refine its understanding of what you're looking for.
- Style guide generation: Materialize your board's vibe into a concrete brand style guide (color palettes, typography pairings, CSS variables, and design tokens) to add the vibe to any real project.
- Embeddable media: YouTube, Spotify, Vimeo, and SoundCloud content plays inline on the canvas. Google Fonts render live type specimens using your vibe narrative as sample text.
- Spatially-aware inspiration: Works influence each other based on proximity. Spatial positioning adds another layer of depth to curation.
- Content remixing: Generative models let you combine multimodal content to spark new, novel inspiration.
How we built it
Patina is a Next.js application with a React Flow (XYFlow) canvas at its core. The frontend uses Zustand for state management with localStorage persistence, Framer Motion for animations, and Tailwind CSS v4 for styling.
The vibe extraction pipeline sends each reference to Claude (Sonnet 4.5), which analyzes images, URLs (including their og:image previews), and text to extract the media's colors, aesthetic, and "vibe". These per-node contributions are merged into a composite vibe using proximity-weighted blending.
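The proximity-weighted blend can be sketched as follows. The `VibeContribution` shape, the scalar axes, and the inverse-distance falloff are illustrative assumptions, not the production code:

```typescript
// Sketch of proximity-weighted vibe blending. The field names and the
// falloff function are assumptions made for illustration.
type VibeContribution = {
  x: number;      // node position on the canvas
  y: number;
  energy: number; // one scalar axis of the extracted "vibe"
  warmth: number;
};

// Inverse-distance falloff from the canvas center: closer nodes count more.
function proximityWeight(x: number, y: number, cx: number, cy: number): number {
  const dist = Math.hypot(x - cx, y - cy);
  return 1 / (1 + dist / 500); // 500px soft radius, an arbitrary choice
}

function compositeVibe(nodes: VibeContribution[], cx = 0, cy = 0) {
  let totalW = 0, energy = 0, warmth = 0;
  for (const n of nodes) {
    const w = proximityWeight(n.x, n.y, cx, cy);
    totalW += w;
    energy += w * n.energy;
    warmth += w * n.warmth;
  }
  // Weighted average; an empty board yields a neutral vibe.
  return totalW > 0
    ? { energy: energy / totalW, warmth: warmth / totalW }
    : { energy: 0, warmth: 0 };
}
```

With this scheme, dragging a reference toward the center of the canvas smoothly increases its pull on the composite, which is what makes curation spatial rather than just additive.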
The discovery engine calls Perplexity Sonar with prompts constructed from the composite vibe profile, requesting cross-domain creative references. An interview system uses Claude to generate targeted follow-up questions that refine discovery results. Style guide generation sends the full vibe profile to Claude with a detailed prompt to produce production-ready design tokens and brand guidelines.
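A minimal sketch of how a discovery prompt might be assembled from the composite profile before it is sent to Sonar. The `VibeProfile` fields and the wording are hypothetical; the actual prompt is more detailed:

```typescript
// Hypothetical shape of the composite vibe profile.
type VibeProfile = {
  moods: string[];      // e.g. ["melancholic", "analog"]
  palette: string[];    // hex colors
  influences: string[]; // e.g. ["brutalism", "Japanese city pop"]
};

// Build a cross-domain discovery prompt for a search-backed model such
// as Perplexity Sonar. The wording here is illustrative only.
function buildDiscoveryPrompt(vibe: VibeProfile, domain: string): string {
  return [
    `Find ${domain} references that match this aesthetic:`,
    `Moods: ${vibe.moods.join(", ")}.`,
    `Palette: ${vibe.palette.join(", ")}.`,
    `Influences: ${vibe.influences.join(", ")}.`,
    `Prefer unexpected sources outside the obvious ones.`,
  ].join("\n");
}
```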
URL nodes fetch metadata via Cheerio for server-side HTML parsing, extracting og:image, title, and body text. OEmbed integration handles embeddable URLs from YouTube, Spotify, Vimeo, and SoundCloud.
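The metadata step amounts to pulling `og:` tags out of the fetched HTML. A dependency-free sketch of that extraction (the real code uses Cheerio's selector API rather than a regex, which is more robust against attribute ordering and arbitrary markup):

```typescript
// Extract an og:* meta tag from raw HTML. Illustrative stand-in for
// Cheerio's $('meta[property="og:image"]').attr('content'); a regex is
// fine for a sketch but assumes property appears before content.
function extractOg(html: string, property: string): string | null {
  const re = new RegExp(
    `<meta[^>]*property=["']og:${property}["'][^>]*content=["']([^"']*)["']`,
    "i"
  );
  return re.exec(html)?.[1] ?? null;
}
```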
Challenges we faced
- Iframe embedding doesn't work for most sites — CSP and X-Frame-Options headers block it. We pivoted to using og:image as visual previews instead, which ended up looking better anyway.
- Tailwind classes not applying inside React Flow nodes — React Flow's measurement system interfered with certain Tailwind utilities like `max-h-[240px]`. We solved this with targeted inline styles where needed.
- Vibe extraction accuracy for URLs — Text-only analysis missed the visual character of a site. Including the og:image as a visual input to Claude dramatically improved color and mood extraction.
- Balancing extraction latency with UX — Vibe extraction calls take a few seconds per node. We implemented async extraction with loading states and deferred composite recomputation to keep the canvas responsive.
- Recommendation accuracy — Making sure that recommendations pushed past surface-level content was difficult but made significantly easier thanks to Perplexity's Sonar Pro model.
Accomplishments that we're proud of
The overall UI/UX came out looking really nice: everything is intuitive, quick, and feels polished. For a creative tool, this was something I wanted to make sure I got right. Because positioning and content suggestion feel seamless, users can focus on curation rather than fighting the interface.
The recommendation engine also works remarkably well: it consistently surfaced new articles, artworks, and songs that fit the vibe I was going for and were genuinely interesting in their own right.
What we learned
In the current AI age, taste is a scarce resource that deserves respect. Powerful AI applications can generate anything, but none can replace human judgment; they merely amplify one's taste. Patina works because it treats individual curation decisions as the source of truth and only offers suggestions that push the aggregated material in useful, potentially novel directions.
Aesthetic understanding is also surprisingly tractable for large language models when you give them structured output formats and visual inputs. Claude's ability to extract consistent, usable color palettes and mood descriptors from diverse inputs (photos, articles, music, typography) was better than expected.
What's next for Patina
Adding the ability to share content directly from inside any app and drop it onto the board would make the experience significantly more seamless; unfortunately, that requires many integrations I did not have time to build. A sequential node system would also be really interesting, where elements higher in a tree or chain have a stronger influence on the next set of generated content.
Collaborative boards and more tool-export utilities would be awesome (e.g. LUTs for videos, Tailwind config files, Figma design plugin, etc.).
Built with
- Next.js 16 — React framework with App Router and API routes
- React 19 — UI rendering
- XYFlow (React Flow) — Infinite canvas with draggable nodes
- Zustand — Lightweight state management
- Framer Motion — Animations and transitions
- Tailwind CSS v4 — Styling
- Claude (Anthropic API) — Vibe extraction, style guide generation, discovery interviews, vibe narratives
- Perplexity Sonar API — Cross-domain vibe-aware web search and discovery
- Cheerio — Server-side HTML parsing for URL metadata extraction
- Suno API — AI music generation
- DigitalOcean Spaces — Object storage
- TypeScript — Type safety throughout
- Vercel — Deployment