PawScout 🐾: Reconnecting Lost Pets with Our Matching Algorithm
Inspiration
It all started when I lost my cat. I wandered the neighborhood for days taping paper posters to telephone poles, relying entirely on blurry photos and a phone number, hoping a stranger would spot her and actually call. After weeks of that helpless waiting game, I knew there had to be a way to use technology to connect finders and owners instantly, so no one else would have to go through it.
What it does
PawScout is an intelligent lost-and-found platform that uses Google Gemini AI to automatically analyze images of pets.
**Smart Tagging:** When a user uploads a photo of a lost or found pet, our system doesn't just store the image; it sees it. It automatically extracts key features: species, specific breed, primary colors, and distinguishing marks.
**Automated Matching:** The backend runs a matching algorithm that compares these AI-generated tags against our database.
**Instant Notifications:** If a match is found (e.g., a "Found" Golden Retriever matches a "Lost" Golden Retriever report), the original owner is instantly notified with details and a location.
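At its simplest, the match-and-notify flow might look like the following sketch (the report shapes and function names are illustrative, not PawScout's actual code):

```python
# Illustrative sketch: check a newly uploaded "Found" report against
# stored "Lost" reports using the AI-generated tags.

def tags_match(lost: dict, found: dict) -> bool:
    """A strict first-pass match: same species and same breed."""
    return (
        lost["species"] == found["species"]
        and lost["breed"] == found["breed"]
    )

def check_new_report(found_report: dict, lost_reports: list[dict]) -> list[dict]:
    """Return every lost-pet report that matches a new found-pet report."""
    return [
        lost for lost in lost_reports
        if tags_match(lost["tags"], found_report["tags"])
    ]

# e.g. a "Found" Golden Retriever matches a "Lost" Golden Retriever report
lost_reports = [
    {"owner": "alice", "tags": {"species": "dog", "breed": "golden retriever"}},
    {"owner": "bob", "tags": {"species": "cat", "breed": "tabby"}},
]
found = {"tags": {"species": "dog", "breed": "golden retriever"}}
matches = check_new_report(found, lost_reports)
# each match would trigger a notification to the original owner
```

In practice the comparison is fuzzier than strict equality, as the weighted scoring discussed in the challenges section explains.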
How we built it
We built PawScout as a robust full-stack application, focusing on speed and accuracy.
- **Frontend:** Built with Next.js and TypeScript for a responsive, type-safe user interface. We used Tailwind CSS to keep the design accessible and mobile-friendly.
- **Authentication:** Integrated Auth0 for secure, seamless user login.
- **Backend:** A high-performance FastAPI (Python) server handles our business logic.
- **AI Engine:** We leveraged Google Gemini 3.0 Flash for its incredible speed. It analyzes uploaded images to generate structured JSON data describing specific pet traits.
- **Database:** MongoDB via the Beanie ODM stores our flexible document schemas.
- **Storage:** AWS S3 securely stores and serves pet images.
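As a rough illustration of the AI-engine step, the model's structured JSON reply has to be parsed and validated before it can feed the matcher. The field names and fence-stripping below are assumptions for the sketch, not our exact schema:

```python
import json

# Hypothetical sketch: turn the model's raw text reply into validated
# tag data. Field names are illustrative, not PawScout's real schema.
REQUIRED_FIELDS = {"species", "breed", "colors", "distinguishing_marks"}

def parse_pet_traits(raw: str) -> dict:
    """Parse the model's JSON reply; raise if required fields are missing."""
    fence = "`" * 3
    cleaned = raw.strip()
    # Models sometimes wrap JSON in a markdown code fence; strip it first.
    if cleaned.startswith(fence):
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    data = json.loads(cleaned)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model response missing fields: {missing}")
    return data

raw_reply = (
    '{"species": "dog", "breed": "golden retriever",'
    ' "colors": ["gold"], "distinguishing_marks": "white patch on chest"}'
)
traits = parse_pet_traits(raw_reply)
```

Rejecting malformed replies early keeps bad tags out of the database, so the matcher only ever sees reports with a complete set of fields.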
Challenges we ran into
**AI Consistency:** Getting the AI to consistently output structured tag data for matching was tricky. We had to iterate on our prompt engineering to ensure it handled edge cases gracefully.

**The Perfect Match Logic:** Defining what constitutes a "match" algorithmically was difficult. A strict match might miss a pet because of lighting differences, while a loose match creates too much noise. We developed a weighted scoring system based on species, breed, and color.

**Full-Stack State Management:** Coordinating the state between image uploads, asynchronous AI processing, and the final database write required careful handling.
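A minimal version of that weighted scoring idea can be sketched as follows; the weights and threshold here are illustrative, not our tuned values:

```python
# Illustrative weighted scoring: species dominates, breed and color refine.
WEIGHTS = {"species": 0.5, "breed": 0.3, "color": 0.2}
MATCH_THRESHOLD = 0.7  # assumed cutoff between "match" and "noise"

def match_score(lost: dict, found: dict) -> float:
    """Score two tag sets between 0.0 and 1.0."""
    score = 0.0
    if lost["species"] == found["species"]:
        score += WEIGHTS["species"]
    if lost["breed"] == found["breed"]:
        score += WEIGHTS["breed"]
    # Colors shift with lighting, so any overlap earns partial credit
    # instead of demanding an exact color-for-color match.
    overlap = set(lost["colors"]) & set(found["colors"])
    if overlap:
        score += WEIGHTS["color"] * len(overlap) / len(set(lost["colors"]))
    return score

lost = {"species": "dog", "breed": "golden retriever", "colors": ["gold", "white"]}
found = {"species": "dog", "breed": "golden retriever", "colors": ["gold"]}
is_match = match_score(lost, found) >= MATCH_THRESHOLD
```

The partial color credit is what keeps the system between the two failure modes above: a photo taken at dusk can lose a color without losing the match, while unrelated pets still fall below the threshold.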
What we learned
Building PawScout taught us the immense value of multimodal AI in practical applications. We learned that the "hard part" of AI apps isn't just calling the API; it's cleaning inputs, validating outputs, and designing a fallback experience for when the AI is unsure.
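As one example of that "fallback when the AI is unsure" lesson, a tag-cleaning step might normalize messy output and substitute safe defaults rather than failing the upload (field names and defaults here are hypothetical):

```python
# Hypothetical fallback sketch: never reject an upload just because the
# model's tags are incomplete; store safe defaults and match later.
FALLBACK_TAGS = {"species": "unknown", "breed": "unknown", "colors": []}

def clean_tags(raw_tags: dict) -> dict:
    """Lower-case free text and fill missing fields with fallbacks."""
    tags = dict(FALLBACK_TAGS)
    for key in ("species", "breed"):
        value = raw_tags.get(key)
        if isinstance(value, str) and value.strip():
            tags[key] = value.strip().lower()
    colors = raw_tags.get("colors")
    if isinstance(colors, list):
        tags["colors"] = [c.strip().lower() for c in colors if isinstance(c, str)]
    return tags

# An unsure model might omit the breed entirely:
tags = clean_tags({"species": " Dog ", "colors": ["Gold", None]})
# the report is still stored, and "unknown" fields simply score zero later
```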
Built With
- amazon-web-services
- atlas
- auth0
- beanie
- eslint
- fastapi
- framer-motion
- gemini
- httpx
- javascript
- lucide
- mongodb
- mongodb-atlas
- motor
- next.js
- pydantic
- python
- react
- s3
- tailwind
- typescript
- uvicorn