Inspiration:

Every year, hikers and campers encounter unfamiliar wildlife and don't know whether they're safe. A rustling in the bushes, an unknown call at night: these moments can be dangerous without the right knowledge. We wanted to build a tool that gives anyone instant, AI-powered wildlife awareness, no field guide or biology degree required.

What it does:

StrangerDanger identifies any animal from a photo, sound, video, or live camera feed and instantly tells you its threat level (Safe, Caution, or Danger), with survival guidance. It also features:

  • AI Survival Simulator: branching "what would you do?" scenarios for dangerous encounters
  • Sound Training: learn to recognize animal calls with AI-generated audio quizzes
  • AR Field Scanner: point your camera at a habitat and get a GPS-aware wildlife briefing
  • Community Nearby Feed: see what others have spotted near your location in real time
  • Pokédex-style Field Guide: collect and track every species you've identified

How we built it:

  • Frontend: React 18 + TypeScript + Tailwind CSS, fully mobile-responsive
  • Backend: Supabase Edge Functions (Deno) for serverless AI orchestration
  • AI: Google Gemini 1.5 Flash for multimodal species identification (vision, audio, text) with structured tool calling
  • Audio: ElevenLabs API for generating realistic animal sound effects
  • Database: PostgreSQL with Row-Level Security for community sightings
  • Deployment: Lovable Cloud with automatic edge function deployment
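To give a flavor of the multimodal setup, here is a rough TypeScript sketch of how an edge function might assemble a Gemini-style request body from base64-encoded media. The function name, prompt text, and MIME defaults are illustrative assumptions, not our actual code.

```typescript
// Hypothetical sketch: building a multimodal request body for a
// Gemini-style generateContent call. Names and defaults are illustrative.

type MediaKind = "photo" | "audio" | "video";

interface InlinePart {
  inlineData: { mimeType: string; data: string };
}

// Assumed default MIME type per capture mode; a real handler would
// read the actual MIME type from the upload.
const MIME_BY_KIND: Record<MediaKind, string> = {
  photo: "image/jpeg",
  audio: "audio/webm",
  video: "video/mp4",
};

function buildIdentifyRequest(kind: MediaKind, base64Data: string) {
  const media: InlinePart = {
    inlineData: { mimeType: MIME_BY_KIND[kind], data: base64Data },
  };
  return {
    contents: [
      {
        role: "user",
        parts: [
          { text: "Identify this animal and rate its threat level." },
          media,
        ],
      },
    ],
  };
}
```

The key idea is that one request shape covers every input mode; only the MIME type and payload change, which is what lets a single edge function serve photo, audio, and video identification.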

Challenges we ran into

  • Multimodal input handling: photo, audio, video, and live camera each required different preprocessing pipelines (base64 encoding, frame extraction, MIME type handling)
  • Rate limiting without auth: we needed to protect the API from abuse while keeping the app fully anonymous, so we built IP-based rate limiting at the edge function layer
  • Privacy vs. functionality: storing IP addresses for rate limiting while keeping them hidden from the public feed required building a database view that acts as a column-level security boundary
  • Large base64 thumbnails: storing image previews directly in the database caused query timeouts, teaching us about the tradeoffs of inline storage vs. object storage
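The IP-based rate limiting above can be sketched as a sliding window in TypeScript. This minimal in-memory version is illustrative only: the window size and limit are made-up values, and a real edge deployment would need shared state (e.g. a database table), since serverless isolates are ephemeral.

```typescript
// Sketch of a sliding-window rate limiter keyed by client IP.
// WINDOW_MS and MAX_REQUESTS are illustrative, not our production values.

const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10;  // allowed requests per IP per window

// Timestamps of recent requests, per IP. In-memory only: fine for a
// sketch, but lost whenever the edge isolate is recycled.
const hits = new Map<string, number[]>();

function allowRequest(ip: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(ip, recent);
    return false; // over the limit: caller should return HTTP 429
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```

Because the check runs inside the edge function, the Gemini API key never reaches the client, and abusive clients are cut off before any paid API call is made.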

Accomplishments that we're proud of

  • 12 features shipped in a fully working production app, not a prototype
  • Zero training data needed: by using Gemini's multimodal capabilities instead of custom CNNs, we skipped weeks of dataset collection and model training
  • Works on any species, not limited to 10 animals like a traditional classifier
  • Fully anonymous: no sign-up required, anyone can use it immediately
  • Real security: RLS policies, privacy-preserving views, server-side rate limiting

What we learned

  • Multimodal foundation models (Gemini) can replace entire ML pipelines (image classifiers, audio classifiers, and threat databases) with a single API call
  • Edge functions are a powerful pattern for keeping API keys server-side while maintaining a serverless architecture
  • Database views are an underrated tool for column-level security in PostgreSQL
  • Building for mobile-first forces better UX decisions across the board

What's next for StrangerDanger

  • Cloud storage for thumbnails: move images out of the database for faster feed loading
  • Offline mode: cache recent identifications for areas without cell service
  • User accounts: let users save their field guide across devices
  • Native camera integration: deeper OS integration for faster live scanning
  • Regional wildlife alerts: push notifications when dangerous species are reported nearby

Built With

  • ai
  • audio
  • cloud
  • elevenlabs
  • gemini
  • lovable
  • postgresql
  • react
  • supabase
  • tailwind
  • typescript