Inspiration

Every year, millions of pets wait in shelters for a home. The adoption process is often overwhelming: adopters scroll through endless listings without knowing which pet truly fits their lifestyle. PetMatch was born from the idea that technology and empathy can work together. Instead of filtering by checkboxes, what if you could just describe your ideal companion in plain language and let AI do the rest? The project was also inspired by the challenge of building a real, production-grade full-stack system, from database design to CI/CD, as a learning milestone.

What it does

PetMatch revolutionizes pet adoption by leveraging AI to understand adopters' preferences through natural language. Users can describe their ideal pet in plain English or even speak their requirements using voice input. The system then:

  • Extracts preferences using Groq's LLaMA 3.3 70B model to parse species, breed, age, gender, and personality traits
  • Scores and ranks available pets using a weighted algorithm that considers multiple criteria
  • Displays matches with percentage compatibility scores and detailed pet profiles
  • Facilitates adoption through a streamlined application process with favorites, applications, and admin management
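
To make the extraction step concrete, the parsed preferences might be modelled roughly like this. The field list (species, breed, age, gender, personality traits) comes from the description above; the exact names and shapes are illustrative assumptions, not the project's actual schema.

```typescript
// Illustrative shape of the preferences extracted from free text.
// Field names and the age-range structure are assumptions.
interface ExtractedPreferences {
  species?: string;                     // e.g. "dog"
  breed?: string;                       // e.g. "labrador"
  age?: { min?: number; max?: number }; // in years, if the user gave a range
  gender?: "male" | "female";
  traits?: string[];                    // e.g. ["calm", "good with kids"]
}

// A description like "a calm dog that's good with kids" could map to:
const example: ExtractedPreferences = {
  species: "dog",
  traits: ["calm", "good with kids"],
};
```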

Shelter administrators can manage their pet listings, review adoption applications, and track adoption statistics through a dedicated dashboard.

How we built it

Architecture Overview

graph TD
    User["Browser / User"]
    FE["Frontend\nReact Router 7 + TypeScript\n(port 3000)"]
    BE["Backend\nLaravel 12 API\n(port 8000)"]
    DB["MySQL 8\n(port 3307)"]
    Groq["Groq Cloud API\nLLaMA 3.3 70B"]

    User --> FE
    FE -->|"REST / Axios"| BE
    BE -->|"Eloquent ORM"| DB
    BE -->|"HTTP (preference extraction)"| Groq

Backend (backend/)

  • Framework: Laravel 12 (PHP 8.2)
  • Auth: Laravel Sanctum (token-based)
  • Database: MySQL 8 via Eloquent ORM
  • AI Client: Groq REST API (llama-3.3-70b-versatile)
  • Testing: PHPUnit 11 (Feature + Unit)
  • Container: Docker + Nginx

Key controllers:

  • AuthController: register, login, logout, profile update
  • PetController: CRUD for pets (admin-only write, public read)
  • PetMatchController: orchestrates AI extraction → scoring → response
  • UserPreferenceController: calls the Groq API and normalises the JSON output
  • FavoriteController: user favourites (toggle)
  • AdoptionApplicationController: submit, view, cancel, admin status update

Frontend (frontend/)

  • Framework: React 19 + React Router 7
  • Language: TypeScript 5
  • Styling: Tailwind CSS 4
  • Animation: Framer Motion
  • HTTP: Axios + TanStack Query
  • Forms: React Hook Form
  • Testing: Vitest + Testing Library + Cypress
  • Container: Docker (Node server)

Key routes:

  • /welcome-user: AI matching landing page with voice input
  • /match-results: scored pet cards from the AI response
  • /pets-list: browsable pet catalogue with filters
  • /admin/dashboard: shelter admin panel (stats, pet management, applications)
  • /profile: user profile with avatar selection

Data Flow: AI Matching

sequenceDiagram
    participant U as User
    participant FE as Frontend
    participant BE as Laravel API
    participant Groq as Groq LLaMA

    U->>FE: Types or speaks description
    FE->>BE: POST /api/match-pets {user_message}
    BE->>Groq: POST /openai/v1/chat/completions (structured prompt)
    Groq-->>BE: Raw JSON preferences
    BE->>BE: Normalise & validate JSON
    BE->>BE: Score all available pets
    BE-->>FE: Sorted pets with match %
    FE-->>U: Animated match results page
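
The frontend leg of this sequence can be sketched as a small typed helper. The endpoint and payload key (POST /api/match-pets with user_message) come from the diagram; the response shape and the injectable fetchImpl parameter are assumptions made for illustration and testability.

```typescript
// Minimal sketch of the FE -> BE call in the sequence above.
// The MatchedPet shape is assumed, not the project's actual schema.
interface MatchedPet {
  id: number;
  name: string;
  match_percentage: number;
}

async function fetchMatches(
  apiBase: string,
  userMessage: string,
  fetchImpl: typeof fetch = fetch // injectable so tests can stub the network
): Promise<MatchedPet[]> {
  const res = await fetchImpl(`${apiBase}/api/match-pets`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ user_message: userMessage }),
  });
  if (!res.ok) throw new Error(`Match request failed: ${res.status}`);
  return res.json();
}
```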

Challenges we ran into

1. Prompt Engineering for Reliable JSON

Getting the LLM to return only valid JSON (no markdown fences, no explanations) across multiple languages required many iterations. The final prompt uses explicit RESPONSE RULES and a regex fallback (preg_match('/\{.*\}/s', ...)) to extract the JSON even if the model adds noise.
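
The PHP fallback above translates roughly into this TypeScript sketch: grab the outermost {...} span from a noisy reply, then attempt a strict parse. It is illustrative only; the real code lives in the Laravel controller.

```typescript
// Extract the first-to-last brace span from a noisy LLM reply and parse it.
// Mirrors the preg_match('/\{.*\}/s', ...) fallback described above.
function extractJson(raw: string): Record<string, unknown> | null {
  // [\s\S] stands in for PCRE's /s modifier (dot matches newlines)
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]) as Record<string, unknown>;
  } catch {
    return null; // braces found, but the content was not valid JSON
  }
}

// Handles replies wrapped in markdown fences or prose:
const noisy = 'Sure! Here you go:\n```json\n{"species": "cat"}\n```';
const prefs = extractJson(noisy); // yields an object with species "cat"
```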

2. Scoring Fairness: The Age Problem

Early versions of the scoring algorithm always included age in the maximum possible score, which meant pets were penalised for not matching an age range the user never specified. The fix was to conditionally include age in $maxPossibleScore only when the user actually provided an age constraint.
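
The fix can be sketched like this; the weights, field names, and the two-criterion setup are illustrative stand-ins, not the project's actual values.

```typescript
// Weighted scoring where age counts toward the maximum only when the
// user actually constrained it -- the fairness fix described above.
interface Prefs { species?: string; ageMin?: number; ageMax?: number }
interface Pet { species: string; age: number }

function matchScore(pet: Pet, prefs: Prefs): number {
  const W = { species: 50, age: 30 }; // illustrative weights
  let score = 0;
  let maxPossible = 0;

  if (prefs.species) {
    maxPossible += W.species;
    if (pet.species === prefs.species) score += W.species;
  }
  // Only add age to the denominator when the user gave a range
  if (prefs.ageMin !== undefined || prefs.ageMax !== undefined) {
    maxPossible += W.age;
    const okMin = prefs.ageMin === undefined || pet.age >= prefs.ageMin;
    const okMax = prefs.ageMax === undefined || pet.age <= prefs.ageMax;
    if (okMin && okMax) score += W.age;
  }
  return maxPossible === 0 ? 0 : Math.round((score / maxPossible) * 100);
}
```

With these sample weights, a ten-year-old dog matching only on species scores 100% when no age was requested, instead of being capped at roughly 63% by a criterion the user never mentioned.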

3. CORS & Sanctum: Cookie vs. Token Auth

Configuring Laravel Sanctum for a decoupled SPA (different origins in Docker) required careful tuning of config/cors.php and config/sanctum.php. The final approach uses Bearer token auth (not cookie-based SPA auth) to avoid cross-origin cookie issues.

4. Docker Networking

Making the frontend container reach the backend container by hostname (backend) while the browser also needs to reach localhost:8000 required environment-variable separation: VITE_API_URL for the browser vs. internal Docker DNS for SSR.
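
The split can be sketched as a single selection function. In the real app the server check would be typeof window === "undefined" and the browser value would come from import.meta.env.VITE_API_URL; both are passed in here so the sketch stays self-contained, and the INTERNAL_API_URL name and http://backend:8000 default are assumptions.

```typescript
// Choose the API base URL for the current runtime environment.
interface ApiEnv {
  VITE_API_URL?: string;     // public URL the browser can reach
  INTERNAL_API_URL?: string; // assumed override for the SSR side
}

function apiBaseUrl(env: ApiEnv, isServer: boolean): string {
  if (isServer) {
    // Inside the Docker network, the compose service name resolves via DNS
    return env.INTERNAL_API_URL ?? "http://backend:8000";
  }
  // The browser sits outside the Docker network, so use the host port mapping
  return env.VITE_API_URL ?? "http://localhost:8000";
}
```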

5. CI with SQLite vs. MySQL

The production database is MySQL, but the CI pipeline uses SQLite (in-memory) for speed. Some MySQL-specific migration syntax had to be made compatible with SQLite, and the phpunit.xml environment overrides had to be set carefully.

6. Voice Input Browser Compatibility

The Web Speech API (SpeechRecognition) is not available in all browsers, nor in the test environment. The useVoiceInput hook degrades gracefully (isSupported = false), and the UI hides the microphone button when unsupported.
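
The degradation check might look like the following. This is a simplified sketch of what a hook like useVoiceInput could do for feature detection; the real hook's internals are not shown in this write-up.

```typescript
// Feature-detect the Web Speech API so the UI can hide the mic button
// where SpeechRecognition is unavailable (e.g. some browsers, test envs).
function speechRecognitionCtor(globalObj: any): unknown | null {
  if (!globalObj) return null;
  // Chrome ships the prefixed webkitSpeechRecognition constructor
  return globalObj.SpeechRecognition ?? globalObj.webkitSpeechRecognition ?? null;
}

function isVoiceSupported(
  globalObj: any = typeof window !== "undefined" ? window : undefined
): boolean {
  return speechRecognitionCtor(globalObj) !== null;
}
```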

What we learned

Backend & API Design

  • Designing a RESTful API with Laravel Sanctum token authentication and role-based access control (user / admin roles via CheckRole middleware)
  • Writing Eloquent relationships across five models: User, Pet, Shelter, Favorite, AdoptionApplication
  • Implementing email verification and password reset flows with custom Laravel Notifications
  • Writing PHPUnit Feature and Unit tests with SQLite in-memory database

AI & NLP Integration

  • Calling the Groq API (LLaMA 3.3 70B Versatile) from a Laravel controller via Http::withHeaders()
  • Prompt engineering: crafting a strict, multi-language prompt that forces the LLM to output a clean JSON object with no markdown noise
  • Building a weighted scoring algorithm on top of the LLM output; the math behind it is explained in §6

Frontend

  • Building a full SPA with React Router v7 (file-based routing, SSR-ready)
  • Managing global state with React Context (AuthContext, UserContext, ThemeContext)
  • Integrating the Web Speech API for voice input on the AI search form
  • Writing unit tests with Vitest + Testing Library and E2E tests with Cypress

DevOps & Tooling

  • Containerising both services with Docker (multi-stage builds, Nginx for the backend, Node server for the frontend)
  • Orchestrating with docker-compose (three services: backend, frontend, mysql)
  • Setting up a GitHub Actions CI pipeline that runs PHPUnit (with Codecov coverage upload) and Vitest on every push to main / devFinal
