About This Project

Inspiration

The idea came from the intersection of two cultural fascinations — traditional Chinese face reading (面相学) and the modern obsession with beauty standards and celebrity comparisons. Face reading has been practiced for thousands of years as a way to understand personality and fortune through facial features. I wanted to bring this ancient art into the digital age by pairing it with AI, making it accessible, fun, and available in both English and Chinese.

What It Does

The app offers two modes:

Face Reading — Upload a front-facing photo and receive a personality analysis, an MBTI prediction, and a 2026 Horse-year fortune outlook, along with reference cases of well-known figures who share similar facial characteristics.

Beauty Score — Get a candid 1–10 beauty rating with detailed reasons explaining the score, plus celebrity look-alike matches.

How I Built It

The stack is React + TypeScript on the frontend, with Tailwind CSS and shadcn/ui for a clean, Google-inspired design system. The backend runs on Express.js with a PostgreSQL database managed through Drizzle ORM.

The core intelligence comes from Google Gemini AI (gemini-3-flash-preview), accessed through Replit's AI Integrations. The model receives a base64-encoded image along with a carefully structured prompt, and returns a JSON response that the app parses into distinct result cards — personality traits, MBTI type, fortune outlook, beauty scores, and celebrity matches.
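The parsing step described above can be sketched as follows. This is a minimal sketch, not the app's actual code: the field names (`traits`, `mbti`, `fortune`, `beautyScore`, `celebrities`) are assumptions about the JSON schema the prompt asks for.

```typescript
// Illustrative result shape; the real app's fields may differ.
interface ReadingResult {
  traits: string[];
  mbti: string;
  fortune: string;
  beautyScore: number | null;
  celebrities: string[];
}

// Models sometimes wrap JSON in a ```json fence; strip it before parsing,
// and fall back to an empty result if parsing still fails.
function parseReadingResponse(raw: string): ReadingResult {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/, "")
    .replace(/```$/, "")
    .trim();
  const fallback: ReadingResult = {
    traits: [],
    mbti: "",
    fortune: "",
    beautyScore: null,
    celebrities: [],
  };
  try {
    const data = JSON.parse(cleaned);
    // Validate each field so a partial response still renders cleanly.
    return {
      traits: Array.isArray(data.traits) ? data.traits : [],
      mbti: typeof data.mbti === "string" ? data.mbti : "",
      fortune: typeof data.fortune === "string" ? data.fortune : "",
      beautyScore: typeof data.beautyScore === "number" ? data.beautyScore : null,
      celebrities: Array.isArray(data.celebrities) ? data.celebrities : [],
    };
  } catch {
    return fallback;
  }
}
```

Validating field-by-field rather than trusting `JSON.parse` wholesale is what lets a malformed or partial model response degrade into an empty card instead of a crash.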

Key architectural decisions:

Bilingual support (lang ∈ {en, zh}) is handled inline with translation objects rather than an external i18n library, keeping the bundle small.

Light/dark theming uses CSS custom properties with Tailwind's class strategy, toggled via a simple React state with localStorage persistence.

Results are displayed in separate, non-nested cards for clean visual hierarchy.

Challenges

Structured AI output — Getting Gemini to consistently return valid, parseable JSON with the exact fields needed (traits, MBTI, scores, celebrity names) required iterative prompt engineering. Edge cases like unusual photos or low-quality images needed graceful fallbacks.
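The inline translation-object approach can be sketched like this. The keys and strings are illustrative, not the app's actual copy; only the pattern (a typed object per language, no i18n dependency) reflects the decision above.

```typescript
// One object per supported language; TypeScript checks that every key
// exists in both, so a missing translation is a compile-time error.
type Lang = "en" | "zh";

const translations = {
  en: {
    uploadPrompt: "Upload a front-facing photo",
    beautyScore: "Beauty Score",
  },
  zh: {
    uploadPrompt: "上传一张正面照片",
    beautyScore: "颜值评分",
  },
} as const;

type TranslationKey = keyof (typeof translations)["en"];

// Look up a string for the current language.
function t(lang: Lang, key: TranslationKey): string {
  return translations[lang][key];
}
```

Because the lookup is a plain object access, there is no runtime library and no async loading; the tradeoff is that all strings ship in the bundle, which is acceptable for a small app with two languages.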

Bilingual prompting — The AI needs to respond in the user's selected language, which means the prompt itself must adapt. Balancing prompt clarity in both English and Chinese while maintaining consistent output structure was tricky.
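One way to keep the output structure stable while the instruction language changes is to split the prompt into a language-specific half and a language-independent schema half. A minimal sketch, with illustrative wording rather than the app's real prompt:

```typescript
type Lang = "en" | "zh";

// Build a prompt whose instruction adapts to the user's language but whose
// structural requirement (JSON-only output with fixed fields) never changes,
// so the same parser handles both languages.
function buildPrompt(lang: Lang): string {
  const instruction =
    lang === "zh"
      ? "请用中文分析这张面相照片。"
      : "Analyze this face photo and respond in English.";
  const schema =
    'Respond with JSON only: {"traits": string[], "mbti": string, "fortune": string}';
  return `${instruction}\n${schema}`;
}
```

Keeping the schema line in English for both languages is a deliberate simplification here: the model reads it fine either way, and it guarantees both code paths produce identically shaped JSON.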

Responsive design without overlap — Ensuring the upload areas, result cards, and header controls all fit cleanly on screens from 375px mobile to full desktop, in both languages (Chinese text has different line-length characteristics), required careful layout testing.

Dark mode consistency — Every surface, border, and text color needed to work across both themes. Using semantic design tokens from shadcn/ui helped, but custom elements like the upload area's dashed border and icon backgrounds required manual dark-mode tuning.

What I Learned

Multimodal AI is remarkably capable at facial analysis when given well-structured prompts, but prompt design is as important as code design. Building for two languages from day one is far easier than retrofitting — it forces you to think about text length, layout flexibility, and cultural context early. Keeping the component tree flat (no nested Cards, minimal wrapper divs) leads to both better performance and a cleaner visual result.

Built With

  • replit