Inspiration
Injured animals often go unnoticed due to delayed reporting and a lack of immediate veterinary access, especially in rural and roadside environments. After witnessing such situations firsthand, we wanted to create an AI system that could act instantly: detecting distress, assessing severity, and triggering help without waiting for human intervention. This motivation led to the creation of JEEV-RAKSHA AI, focused on saving lives through speed, intelligence, and automation.
What it does
JEEV-RAKSHA AI is a multimodal AI-powered animal rescue platform that detects injured or distressed animals using images or live camera input. It analyzes the animal’s condition, estimates injury severity, provides AI-driven veterinary guidance, and automatically notifies nearby NGOs, shelters, or emergency responders using real-time location data.
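The "notify nearby responders" step above can be sketched as a simple nearest-neighbour lookup over registered NGOs and shelters. This is an illustrative sketch only; the `Responder` type and `nearestResponder` helper are hypothetical names, not taken from the actual JEEV-RAKSHA AI codebase.

```typescript
// Hypothetical sketch of alert routing: given the reported animal
// location, pick the nearest registered NGO/shelter to notify.
interface Responder {
  name: string;
  lat: number;
  lon: number;
}

// Haversine great-circle distance in kilometres.
function distanceKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius, km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Returns the responder closest to the reported location.
function nearestResponder(lat: number, lon: number, responders: Responder[]): Responder {
  return responders.reduce((best, r) =>
    distanceKm(lat, lon, r.lat, r.lon) < distanceKm(lat, lon, best.lat, best.lon) ? r : best
  );
}
```

In a deployed system the coordinates would come from the device's Geolocation API and the responder list from the platform's NGO registry; here they are plain values for clarity.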
How we built it
We built the system using the Gemini 3 Multimodal API as the core reasoning engine. Computer vision models such as YOLOv8 and Vision Transformers (ViT) handle animal and injury detection. The frontend uses React, Vite, and Tailwind CSS, while the backend is powered by Node.js and Express. Rapid prototyping and deployment were done using Google AI Studio and GCP.
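To make the Gemini integration concrete, the backend's triage call can be thought of as composing a text prompt with the reported photo as an inline base64 image part. The helper below only builds the request payload so the shape is visible; the model id, prompt wording, and `buildTriageRequest` name are assumptions for illustration, and the actual network call would go through the Gemini SDK rather than this sketch.

```typescript
// Sketch of assembling a multimodal triage request for Gemini.
// Part shapes mirror the text + inline-image structure the Gemini
// API accepts; the model id below is a placeholder assumption.
interface TextPart {
  text: string;
}
interface InlineImagePart {
  inlineData: { mimeType: string; data: string };
}
type Part = TextPart | InlineImagePart;

function buildTriageRequest(
  imageBase64: string,
  mimeType: string
): { model: string; parts: Part[] } {
  return {
    model: "gemini-multimodal", // placeholder model id, not a real endpoint name
    parts: [
      {
        // Triage prompt: identify the animal and rate injury severity.
        text:
          "You are assisting an animal-rescue triage system. " +
          "Identify the animal, describe visible injuries, and rate severity " +
          "as LOW, MEDIUM, or HIGH with a one-line justification.",
      },
      { inlineData: { mimeType, data: imageBase64 } },
    ],
  };
}
```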
Challenges we ran into
Key challenges included handling varied animal postures, coping with poor image quality, and minimizing false positives. Delivering low-latency, real-time alerts without sacrificing medical accuracy was another significant hurdle.
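Two of the mitigations implied above, confidence thresholding and per-camera alert cooldowns, can be sketched as a small gate in front of the notification pipeline. The names, threshold, and cooldown values here are illustrative assumptions, not the project's tuned settings.

```typescript
// Hypothetical alert gate: (1) drop low-confidence detections to cut
// false positives, (2) suppress repeat alerts from the same camera
// within a cooldown window so one injured animal does not trigger a
// flood of notifications.
interface Detection {
  cameraId: string;
  label: string;
  confidence: number; // 0..1 score from the vision model
  timestampMs: number;
}

const CONFIDENCE_THRESHOLD = 0.6; // assumed value; tuned empirically in practice
const COOLDOWN_MS = 5 * 60 * 1000; // assumed 5-minute per-camera cooldown

const lastAlertAt = new Map<string, number>();

// Returns true if this detection should produce a new alert.
function shouldAlert(d: Detection): boolean {
  if (d.confidence < CONFIDENCE_THRESHOLD) return false;
  const last = lastAlertAt.get(d.cameraId);
  if (last !== undefined && d.timestampMs - last < COOLDOWN_MS) return false;
  lastAlertAt.set(d.cameraId, d.timestampMs);
  return true;
}
```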
Accomplishments that we're proud of
- Built a real-time, end-to-end AI rescue workflow
- Successfully integrated multimodal AI beyond a simple chat interface
- Designed an NGO-ready dashboard for real-world deployment
- Created a socially impactful AI solution focused on animal welfare
What we learned
Through building JEEV-RAKSHA AI, we learned how powerful multimodal AI becomes when vision, reasoning, and real-world context are combined. We gained hands-on experience with integrating the Gemini 3 API for intelligent decision-making, handling noisy real-world data, and optimizing systems for low-latency responses. The project also taught us the importance of responsible AI, especially when providing health-related guidance, and how thoughtful prompt design and explainability improve trust and usability. Most importantly, we learned how technology can be designed not just for innovation, but for meaningful social impact.
What's next for JEEV-RAKSHA AI
Next, we aim to expand JEEV-RAKSHA AI with live CCTV and drone-based monitoring, multilingual voice support for wider accessibility, and deeper collaboration with NGOs and government animal welfare departments. We also plan to enhance medical accuracy through continuous learning, integrate wildlife rescue capabilities, and scale the platform for real-world deployment across cities and rural regions.
Built With
- Programming Languages: TypeScript, JavaScript, Python
- AI / Machine Learning: Gemini 3 API, PyTorch, TensorFlow, OpenCV
- ML Paradigms: Supervised Learning, Semi-Supervised Learning, Unsupervised Learning (Autoencoders, Isolation Forest), Reinforcement Learning (PPO, DQN)
- Computer Vision: YOLOv8, Vision Transformers (ViT), Grad-CAM (conceptual)
- Explainable AI (XAI): Attention-based visual explanations
- Frontend: React, Vite, Tailwind CSS
- Backend & Services: Node.js, Express
- Cloud & AI Platform: Google AI Studio, Google Cloud Platform (GCP)
- APIs & Integrations: Gemini Multimodal API, Geolocation API, Maps API, Notification APIs (WhatsApp, SMS, Email)
- Data & Storage: JSON-based datasets, Cloud Storage
- Deployment & Prototyping: AI Studio Apps, cloud-hosted demo

