Inspiration
We built PetPulse after realizing that most “pet cameras” only show video — they don’t explain what’s happening. Pet owners often ask:
- Is my pet sleeping normally or acting unusual?
- Did they eat today? Are they stressed?
- What changed while I was away?
We wanted a system that doesn’t just record, but understands. Our goal was to create a friendly, intelligent tracker that can translate motion, sound, and visuals into meaningful, human-readable events — and make them easy to share with vets and a community.
What it does
PetPulse helps users monitor, understand, and document their pet’s behavior.
Core features
Pet Profile
- Name, age, gender, breed info
- QR code to share a pet profile (useful for caretakers or vets)
Audio Analysis
- Upload or record audio (e.g., barking or whining)
- AI summarizes cues and possible context
Media Upload
- Users can upload pet photos/videos for analysis and logging
- Upload a photo → AI returns the likely breed(s), with an optional "breed mix"-style breakdown
Camera / Streaming (prototype-ready / extensible)
- Architecture designed to extend to RTSP streaming for real-time monitoring across multiple cameras
- Users can add/remove camera sources (e.g., RTSP URLs) as the project evolves (see the sketch below)
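A minimal sketch of how camera sources could be registered through the backend. The endpoint paths, model, and in-memory registry are illustrative only, not the actual PetPulse API:

```python
# Illustrative sketch only: endpoint paths and model names are hypothetical,
# not the actual PetPulse API.
from fastapi import APIRouter, HTTPException
from pydantic import BaseModel

router = APIRouter(prefix="/cameras")

class CameraSource(BaseModel):
    name: str
    rtsp_url: str  # e.g. "rtsp://192.168.1.42:554/stream1"

# In-memory registry for the sketch; the real app would persist to PostgreSQL.
_sources: dict[str, CameraSource] = {}

@router.post("/", status_code=201)
def add_camera(source: CameraSource) -> CameraSource:
    _sources[source.name] = source
    return source

@router.delete("/{name}", status_code=204)
def remove_camera(name: str) -> None:
    if name not in _sources:
        raise HTTPException(status_code=404, detail="camera not found")
    del _sources[name]
```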
Health & Routine Insights
- Interprets events into structured fields (see the schema sketch below):
  - Condition: Sleep / Eating / Feelings
  - Status: Age / Breed / Gender
  - Event: Timestamp + summary of what happened
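These fields map naturally onto a small schema. A hedged sketch using Pydantic; the field names and example values below are illustrative, not the exact production model:

```python
# Sketch of the structured insight record; field names and values are illustrative.
from datetime import datetime
from pydantic import BaseModel

class PetInsight(BaseModel):
    condition: str       # e.g. "Sleep", "Eating", or an inferred feeling
    status: str          # e.g. "2-year-old female Corgi"
    event: str           # human-readable summary of what happened
    timestamp: datetime  # when the event was captured

insight = PetInsight(
    condition="Sleep",
    status="2-year-old female Corgi",
    event="Settled into the bed and stayed still for 40 minutes",
    timestamp=datetime.now(),
)
```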
Vet History & Records
- Vet visits: concerns, reason, date, notes
- Upload medical records for centralized history
Community
- Post pet updates/photos and interact with other users
Notifications
- If an event is flagged, PetPulse sends an alert to the user's configured contact (currently via a Slack webhook)
How we built it
High-level architecture (end-to-end)
- The sensor detects movement pattern changes (sleep / rest / walk / run)
- When an event is triggered, the device captures:
  - An image (snapshot)
  - ~5 seconds of audio
- Files are uploaded to cloud storage (DigitalOcean) and their metadata is recorded in a database
- The backend sends the audio and image to Gemini for AI interpretation
- Gemini returns structured text insights (Condition / Status / Event)
- Results are stored and displayed in the web app dashboard
- If an event is flagged, a Slack webhook sends a real-time notification (a condensed sketch of the Gemini and Slack steps follows this list)
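For reference, here is a condensed sketch of the interpretation and alert steps. The Gemini model name, environment variables, prompt, and helper names are assumptions for illustration, not the exact PetPulse code:

```python
# Condensed sketch of the event path described above. Model name, prompt,
# and helper names are assumptions, not the exact PetPulse implementation.
import os
import requests
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def interpret_event(image_bytes: bytes, audio_bytes: bytes) -> str:
    """Ask Gemini for a structured Condition / Status / Event summary."""
    prompt = (
        "You are a pet-behavior assistant. Given this snapshot and ~5s audio "
        "clip, reply with three labeled lines: Condition, Status, Event."
    )
    response = model.generate_content([
        prompt,
        {"mime_type": "image/jpeg", "data": image_bytes},
        {"mime_type": "audio/wav", "data": audio_bytes},
    ])
    return response.text

def notify(summary: str) -> None:
    """Push a flagged event to Slack via an incoming webhook."""
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"PetPulse alert:\n{summary}"},
        timeout=10,
    )
```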
Frontend
Built as a modern React app (Vite) with clean UI/UX:
- Landing, authentication, dashboard
- Breed Finder page (image upload)
- Audio page (record/upload)
- Camera page (video upload / extensible to RTSP)
- Settings and account management
Backend
A Python FastAPI service with auto-generated API documentation, backed by a PostgreSQL database.
It stores (see the schema sketch after this list):
- User and pet profiles
- Media metadata (audio/image/video)
- AI results
- Vet visit history and medical uploads (when enabled)
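A rough sketch of what that schema could look like with SQLAlchemy; the table and column names are assumptions for illustration, not the exact PetPulse schema:

```python
# Illustrative SQLAlchemy models; table and column names are assumptions,
# not the exact PetPulse schema.
from datetime import datetime
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)

class Pet(Base):
    __tablename__ = "pets"
    id = Column(Integer, primary_key=True)
    owner_id = Column(Integer, ForeignKey("users.id"))
    name = Column(String, nullable=False)
    breed = Column(String)

class MediaItem(Base):
    __tablename__ = "media_items"
    id = Column(Integer, primary_key=True)
    pet_id = Column(Integer, ForeignKey("pets.id"))
    kind = Column(String)           # "image", "audio", or "video"
    storage_url = Column(String)    # object URL in DigitalOcean storage
    created_at = Column(DateTime, default=datetime.utcnow)

class AIResult(Base):
    __tablename__ = "ai_results"
    id = Column(Integer, primary_key=True)
    media_id = Column(Integer, ForeignKey("media_items.id"))
    condition = Column(String)      # structured fields returned by Gemini
    status = Column(String)
    event = Column(String)
```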
Detection + Flagging Logic
- A scoring-based flag algorithm determines when alerts should be triggered (a simplified sketch follows)
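The exact weights aren't documented here, so the following is only a simplified sketch of the idea: accumulate points for signals that suggest the owner should be alerted, and fire when a threshold is crossed.

```python
# Simplified sketch of a scoring-based flag; the signals and weights here
# are illustrative, not PetPulse's actual algorithm.
FLAG_THRESHOLD = 3

def flag_score(event: dict) -> int:
    """Accumulate points for signals that suggest the owner should be alerted."""
    score = 0
    if event.get("sudden_motion_change"):    # e.g. rest -> sustained running
        score += 2
    if event.get("vocalization_detected"):   # barking/whining in the audio clip
        score += 1
    if event.get("outside_usual_schedule"):  # activity at an unusual time of day
        score += 1
    return score

def should_alert(event: dict) -> bool:
    return flag_score(event) >= FLAG_THRESHOLD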
Challenges we ran into
- Connecting the Arduino to the host computer over a private network
- Developing an algorithm to detect changes in sensor movement patterns (a simplified sketch follows this list)
- Integration complexity grew with the number of technologies involved
- Streaming video from the camera to the website required an intermediary service (e.g., Twitch), since Logitech webcams do not stream natively
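For context on the movement-detection challenge, here is a simplified sketch of one possible approach: track a rolling window of acceleration magnitude from the 3-axis accelerometer and report when the coarse activity label changes. The window size, units (g), and thresholds are illustrative, not the tuned values we used.

```python
# Simplified sketch of movement-pattern detection from the 3-axis accelerometer;
# window size and thresholds are illustrative.
import math
from collections import deque
from typing import Optional

WINDOW = 50  # recent samples used to characterize current activity

def magnitude(ax: float, ay: float, az: float) -> float:
    """Acceleration magnitude in g (about 1.0 at rest due to gravity)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify(window: deque) -> str:
    """Map average acceleration magnitude to a coarse activity label."""
    avg = sum(window) / len(window)
    if avg < 1.05:
        return "sleep/rest"
    if avg < 1.5:
        return "walk"
    return "run"

recent = deque(maxlen=WINDOW)
last_state = None

def on_sample(ax: float, ay: float, az: float) -> Optional[str]:
    """Return the new activity label when the pattern changes, else None."""
    global last_state
    recent.append(magnitude(ax, ay, az))
    if len(recent) < WINDOW:
        return None
    state = classify(recent)
    if state != last_state:
        last_state = state
        return state  # a change would trigger the snapshot + audio capture
    return None
```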
Accomplishments that we're proud of
- Built a complete AI-driven workflow: upload → AI analysis → structured insight → dashboard display
- Implemented Breed Finder integration with a clean, accessible UI
- Designed a scalable event + notification framework (flag algorithm + alerts)
- Created a foundation for pet health history with vet visits and record uploads
- Defined a clear hardware path using accessible components (Arduino, accelerometer, camera, audio)
What we learned
- AI is most useful when its output is structured, not just descriptive
- Consistent fields (Condition / Status / Event) make the product feel reliable
- Training AI on pet behavior data improves accuracy
- Sensor-driven systems require strong reliability engineering (uploads, retries, storage), not just ML (see the sketch after this list)
- Kafka can help stabilize real-time data streaming
- Vibe coding still requires solid software fundamentals for long-term integration
- For our use case, DigitalOcean was comparable to AWS but more user-friendly
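As a small illustration of the reliability point, a minimal retry-with-backoff helper for uploads; the upload callable is hypothetical and stands in for whatever storage client is used.

```python
# Minimal retry-with-backoff sketch for media uploads; the upload callable
# is hypothetical and stands in for the actual storage client.
import time
from typing import Callable

def upload_with_retry(upload_fn: Callable[[bytes], None],
                      payload: bytes,
                      attempts: int = 4) -> None:
    """Retry a flaky upload with exponential backoff before giving up."""
    for attempt in range(attempts):
        try:
            upload_fn(payload)
            return
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
```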
What's next for PetPulse
Personalized pet baselines
- Learn per-pet behavior patterns over time to reduce false positives
Milestones & routine tracking
- Sleep hours
- Meal times
- Activity levels
- Behavior shifts
Vet collaboration
- Share event summaries and history with vets
- Allow vet-side notes
Model upgrades
- Evaluate training/fine-tuning:
  - YOLOv8 (vision) for posture and activity cues
  - VGGish (audio embeddings) for sound events
More notification channels
- Email, SMS, and push notifications after prototype stabilization
Stronger community features
- Posts, comments, tags
- "Similar pets" recommendations
Premium features
- Extra features for power users 😈😈
Built With
- 3-axis-accelerometer-sensor
- arduino
- cloudflare
- digitalocean
- fastapi
- firebase-auth
- gemini
- huggingface
- javascript
- logi101brio
- postgresql
- python
- react
- rest
- twitch
- webcam/camera-module
