Inspiration

In Pakistan, the skincare market is a minefield. We are flooded with unregulated "whitening" creams often containing mercury and high-concentration steroids, yet international apps recommend products that aren't available in our local stores or suited for our humid, South Asian climate.

Standing in a store aisle, most people are paralyzed by choice. I wanted to solve this "Analysis Paralysis" not with another generic chatbot, but with a decision-support engine. I built SkinSight to act as a hyper-local, safety-first shopping assistant that bridges the gap between scientific ingredient safety and local market availability.

What it does

SkinSight is not just a scanner; it is a multimodal agentic workflow designed to guide a user from confusion to purchase:

  1. Visual Diagnostics (The "Eyes"): Users upload a selfie, and the Vision Agent (powered by Gemini 1.5 Flash) analyzes skin texture, oiliness, and congestion levels, creating a persistent user profile optimized for our humid climate.
  2. Ingredient Safety Layer (The "Brain"): When a user photographs a product label, the Safety Analyst Agent extracts text via OCR and cross-references ingredients against the user's specific skin profile. It doesn't just say "Safe" or "Unsafe"—it acts as a shopping assistant, flagging irritants like denatured alcohol that might trigger oily skin.
  3. Local Market Sourcing (The "Action"): Unlike generic AI assistants that hallucinate unavailable brands, SkinSight uses Google Search Grounding to find recommended alternatives in real Pakistani e-commerce stores, closing the loop between diagnosis and purchase.
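The three-stage flow above can be sketched as a simple pipeline. Everything here (the type names, the agent functions, the irritant list) is an illustrative stub standing in for the real Gemini-backed agents, not the production implementation:

```typescript
// Illustrative sketch of the three-stage SkinSight flow.
// All names (SkinProfile, visionAgent, etc.) are hypothetical stubs.

interface SkinProfile {
  skinType: "oily" | "dry" | "combination";
  congestion: number; // 0-10 scale from the Vision Agent
}

interface SafetyReport {
  flaggedIngredients: string[];
  suitable: boolean;
}

// Stage 1: the Vision Agent builds a profile from a selfie (stubbed).
function visionAgent(_selfie: Uint8Array): SkinProfile {
  return { skinType: "oily", congestion: 6 };
}

// Stage 2: the Safety Analyst flags irritants against the profile (stubbed).
function safetyAgent(ingredients: string[], profile: SkinProfile): SafetyReport {
  const irritantsForOily = ["alcohol denat", "isopropyl myristate"];
  const flagged =
    profile.skinType === "oily"
      ? ingredients.filter((i) => irritantsForOily.includes(i.toLowerCase()))
      : [];
  return { flaggedIngredients: flagged, suitable: flagged.length === 0 };
}

// Stage 3: sourcing would call Google Search Grounding; stubbed here.
function sourcingAgent(report: SafetyReport): string {
  return report.suitable
    ? "Product looks suitable - no alternative needed"
    : "Searching local stores for alternatives...";
}

function runPipeline(selfie: Uint8Array, labelIngredients: string[]): string {
  const profile = visionAgent(selfie);
  const report = safetyAgent(labelIngredients, profile);
  return sourcingAgent(report);
}
```

The key design point is that each stage only consumes the previous stage's structured output, so any single agent can be swapped or upgraded independently.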

How we built it

We adopted a "Safety-First" and "Mobile-First" design philosophy:

  • The Orchestrator: We used Gemini 1.5 Pro to handle the complex reasoning required to interpret chemical ingredient lists and differentiate between "medicinal" and "cosmetic" concerns.
  • Frontend Architecture: Built on React and Vite for speed, using a custom Tailwind CSS design system ("Gen-Z Glow") to ensure the app looks professional and consistent across all device sizes.
  • State Management: We implemented a persistent chat history using local storage so the "Shopping Assistant" remembers the user's skin context across different sessions, mimicking a real human interaction.
  • Responsible AI: We heavily engineered the System Instructions to act as strict guardrails, ensuring the AI refuses to provide medical diagnoses while remaining helpful for cosmetic shopping decisions.
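The state-management bullet above amounts to a thin persistence layer over localStorage. This is a minimal sketch under assumptions: the storage key and message shape are hypothetical, and a Storage-like object is injected so the same code runs outside the browser:

```typescript
// Minimal sketch of persistent chat history. The key name and
// message shape are assumptions, not the production schema.
interface ChatMessage {
  role: "user" | "assistant";
  text: string;
}

// Accept any Storage-like object so this also runs outside a browser.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const HISTORY_KEY = "skinsight.chatHistory"; // hypothetical key

function loadHistory(store: StorageLike): ChatMessage[] {
  const raw = store.getItem(HISTORY_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function appendMessage(store: StorageLike, msg: ChatMessage): ChatMessage[] {
  const history = [...loadHistory(store), msg];
  store.setItem(HISTORY_KEY, JSON.stringify(history));
  return history;
}
```

In the browser, `window.localStorage` satisfies `StorageLike` directly; on the next session the saved history is replayed into the model's context, which is what lets the assistant "remember" the user's skin profile.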

Challenges we ran into

The biggest challenge was balancing utility with safety. Early versions of the model tried to "diagnose" acne as a medical condition, which violates Responsible AI principles. We had to refine the System Prompts extensively, instructing the model to act as a "Personal Shopper" rather than a "Doctor." We moved from "Risk Analysis" to "Suitability Scores," which transformed the app from a scary medical tool into a helpful consumer product.
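In practice the "Personal Shopper, not Doctor" reframing comes down to the system instruction plus a last-line output filter. Both are paraphrased sketches here; the wording and regex are illustrative, not the exact production prompt:

```typescript
// Paraphrased sketch of the "Personal Shopper, not Doctor" guardrail.
// The wording below is illustrative, not the exact production prompt.
const SYSTEM_INSTRUCTION = `
You are a personal shopping assistant for cosmetics, not a doctor.
Never diagnose medical conditions or name diseases.
Express concerns as a Suitability Score (0-100) for the user's
skin profile, and always suggest a locally available alternative.
`.trim();

// Belt-and-braces output filter: if diagnosis language slips through,
// fall back to a safe, shopping-oriented reply.
const DIAGNOSIS_PATTERNS = /\b(diagnos\w*|prescri\w*|cystic acne|eczema|rosacea)\b/i;

function enforceGuardrail(modelReply: string): string {
  if (DIAGNOSIS_PATTERNS.test(modelReply)) {
    return "I can't give medical advice, but I can score how suitable this product is for your skin profile.";
  }
  return modelReply;
}
```

With the official Gemini SDK, a string like `SYSTEM_INSTRUCTION` is passed as the `systemInstruction` option when creating the model, so the guardrail applies to every turn rather than being repeated per message.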

Accomplishments that I'm proud of

I am incredibly proud of the "Local Context" engine. Getting an LLM to understand that a moisturizer popular in the US might be too heavy for Sargodha's humidity—and then suggesting a locally available alternative like Jenpharm or Vince—was a huge technical win. It turns the AI from a novelty into a genuine utility for my community.

What's next for SkinSight

The foundation is laid for a complete Agentic Commerce Platform.

  • Video Generation: I plan to integrate video generation agents to create personalized "How-To" routine guides dynamically based on the user's products.
  • Dermatologist Verification: Partnering with local doctors to create a "Verified" dataset to fine-tune the model further.
  • Affiliate Integration: Directly linking the "Find Local Price" feature to retailer APIs to create a sustainable business model.

Built With

  • React
  • Vite
  • Tailwind CSS
  • Gemini API (1.5 Pro & 1.5 Flash)
  • Google Search Grounding