Inspiration

We were saddened by the number of war-torn communities around the world, from Palestine and Syria to Ukraine and Sudan. We wanted to build something that could make a real-world difference, whether by reducing the number of lives lost or by helping investigate these atrocities. We settled on a rover bot that helps aid workers search for injured civilians in dangerous areas where human rescuers cannot safely operate.

What it does

The SAGE bot is an autonomous rescue system designed to travel through dangerous conflict zones, scanning for injured civilians and coordinating rescue operations. The bot operates in three phases:

Phase 1 - Detection & Movement: The bot spins and uses AI-powered computer vision (Gemini API) to scan its surroundings for survivors. When a person is detected and centered in the camera frame, the bot automatically stops spinning and moves toward the target.

Phase 2 - Medical Assessment: Once positioned near a survivor, the bot captures detailed video footage and uses TwelveLabs' video understanding AI to analyze the person's medical condition, identify visible injuries, and assess the severity of their situation.

Phase 3 - Rescue Coordination: The bot transmits real-time survivor alerts, medical analysis, and live camera feeds to a centralized rescue dashboard, allowing emergency teams to coordinate immediate response and prioritize critical cases.

The system includes a web-based dashboard where rescue teams can monitor multiple bots simultaneously, receive urgent survivor alerts, view live camera feeds, and access detailed medical assessments to make informed rescue decisions.

How we built it

AI-Powered Vision System:

  • Gemini API for real-time survivor detection and positioning analysis
  • TwelveLabs for advanced medical condition assessment and injury analysis
  • Custom computer vision pipeline processing BMP image frames for consistency across both AI systems
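A rough sketch of the detection step follows. The prompt wording, model name, and the spin/approach/scan command names are our illustrative assumptions, not the exact production code; the key idea is that Gemini's position label drives the motor decision.

```python
# Hypothetical sketch of the Gemini detection step. The bot keeps spinning
# until a person is centered in the frame, then switches to approach.
try:
    import google.generativeai as genai  # pip install google-generativeai
except ImportError:
    genai = None  # command_for() is pure logic and works without the SDK

PROMPT = (
    "Look at this image. If a person is visible, reply with exactly one "
    "word: LEFT, CENTER, or RIGHT for their position in the frame. "
    "If no person is visible, reply NONE."
)

def command_for(position: str) -> str:
    """Map Gemini's position label to a motor command for the rover."""
    label = position.strip().upper()
    if label == "CENTER":
        return "approach"   # survivor centered: stop spinning, drive forward
    if label in ("LEFT", "RIGHT"):
        return "spin"       # keep rotating until the person is centered
    return "scan"           # nothing detected: continue the search sweep

def detect(frame_bmp: bytes) -> str:
    """Send one BMP frame to Gemini and return the resulting motor command."""
    model = genai.GenerativeModel("gemini-1.5-flash")
    resp = model.generate_content(
        [PROMPT, {"mime_type": "image/bmp", "data": frame_bmp}]
    )
    return command_for(resp.text)
```

Forcing a one-word reply keeps the response trivially parseable and makes misreads fail safe (anything unexpected just continues the scan).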

Robot Control System:

  • QNX-based Raspberry Pi for autonomous movement and motor control
  • HTTP-based communication between detection system and robot hardware
  • Automatic transition from scanning mode to target approach mode
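The scan-to-approach handoff over HTTP can be sketched as below. The endpoint URL, JSON shape, and class name are placeholders; the point is that the detection process owns the mode state and the QNX side just executes whatever command arrives.

```python
# Minimal sketch of bot-side mode switching over HTTP (endpoint and payload
# names are assumptions). The detection process POSTs commands; the QNX
# controller reads them and drives the motors.
import json
from urllib import request

BOT_URL = "http://192.168.0.42:8080/command"  # placeholder bot address

class BotController:
    """Tracks scan/approach mode and forwards motor commands to the bot."""

    def __init__(self):
        self.mode = "scanning"

    def on_detection(self, centered: bool) -> str:
        # Automatic transition: stay in scanning mode until a survivor is
        # centered, then latch into approach mode.
        if centered and self.mode == "scanning":
            self.mode = "approaching"
        return self.mode

    def send(self, command: str) -> None:
        body = json.dumps({"cmd": command, "mode": self.mode}).encode()
        req = request.Request(
            BOT_URL, data=body, headers={"Content-Type": "application/json"}
        )
        request.urlopen(req, timeout=2)  # short timeout: motor commands are
        #                                  time-sensitive, so fail fast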

Real-Time Dashboard:

  • Flask/SocketIO web server for rescue team coordination
  • SQLite database for mission logging and survivor data storage
  • WebSocket connections for real-time alerts and live camera streaming
  • Responsive web interface displaying mission status, survivor alerts, and medical analysis
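A minimal version of the SQLite mission log might look like the following; the table and column names are our assumptions about the schema, not a copy of it.

```python
# Sketch of the mission-logging layer (schema names are assumptions).
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create the survivor log table if it does not exist yet."""
    conn = sqlite3.connect(path)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS survivors (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            bot_id TEXT NOT NULL,
            detected_at TEXT DEFAULT CURRENT_TIMESTAMP,
            severity TEXT,            -- e.g. 'critical', 'stable'
            medical_summary TEXT      -- TwelveLabs assessment text
        )
        """
    )
    return conn

def log_survivor(conn, bot_id: str, severity: str, summary: str) -> int:
    """Insert one survivor record and return its row id for later alerts."""
    cur = conn.execute(
        "INSERT INTO survivors (bot_id, severity, medical_summary)"
        " VALUES (?, ?, ?)",
        (bot_id, severity, summary),
    )
    conn.commit()
    return cur.lastrowid
```

The returned row id doubles as the alert identifier the dashboard pushes over WebSocket, so rescue teams can link a live alert back to its stored assessment.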

System Architecture:

  • Modular Python codebase with separate detection, analysis, and coordination components
  • Network-based communication between field bot and base station dashboard
  • Automatic file organization and cleanup for efficient operation
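The automatic frame cleanup mentioned above can be as simple as the sketch below, which assumes zero-padded `frame_*.bmp` filenames so a lexicographic sort equals capture order; the directory layout and retention count are illustrative.

```python
# Sketch of the frame-cleanup step (directory layout is an assumption):
# keep only the newest N captured frames so the Pi's storage never fills up.
from pathlib import Path

def cleanup_frames(frame_dir: str, keep: int = 50) -> int:
    """Delete all but the `keep` newest frame_*.bmp files; return count removed."""
    # Zero-padded names sort lexicographically in capture order.
    frames = sorted(Path(frame_dir).glob("frame_*.bmp"))
    removed = 0
    for old in (frames[:-keep] if keep else frames):
        old.unlink()
        removed += 1
    return removed
```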

Key Technologies:

  • Python with OpenCV for camera handling and image processing
  • Google Generative AI (Gemini) for survivor detection
  • TwelveLabs API for medical video analysis
  • Flask with SocketIO for real-time web dashboard
  • HTTP/WebSocket protocols for bot-to-dashboard communication

Challenges we ran into

  • QNX Video Capture Issues: QNX had compatibility problems with Python video capture libraries → Solved by implementing high-frequency BMP image capture instead of continuous video streams
  • Raspberry Pi Linux Boot Problems: Hardware issues prevented stable Linux operation → Successfully integrated QNX system for robot control while maintaining Python AI capabilities
  • Location Services Limitations: Apple AirTag integration was blocked by proprietary restrictions → Focused on camera-based positioning and coordinate transmission instead
  • Dual-Camera Complexity: Managing an ESP32 camera stream alongside the main camera created network and synchronization issues → Streamlined to a single camera used sequentially by both AI systems
  • Real-Time Data Synchronization: Ensuring survivor alerts and camera feeds reached rescue teams instantly required careful WebSocket implementation and error handling
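The BMP workaround from the first challenge above can be sketched roughly as follows; the frame rate, filename pattern, and directory are assumptions, and the capture loop itself requires OpenCV and a connected camera.

```python
# Sketch of the BMP-capture workaround: instead of a continuous video
# stream (which broke under QNX), grab individual frames at a fixed rate
# and write them as numbered BMP files both AI pipelines can read.
import time
from pathlib import Path

try:
    import cv2  # OpenCV; only the capture loop needs it
except ImportError:
    cv2 = None  # frame_path() is pure logic and works without OpenCV

def frame_path(out_dir: Path, index: int) -> Path:
    """Zero-padded, sortable filename so frames replay in capture order."""
    return out_dir / f"frame_{index:05d}.bmp"

def capture_frames(out_dir: str, fps: float = 5.0, max_frames: int = 100):
    """Grab up to max_frames still images from camera 0 as BMP files."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(0)
    saved = []
    try:
        for i in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            path = frame_path(out, i)
            cv2.imwrite(str(path), frame)  # BMP is uncompressed: no codec deps
            saved.append(path)
            time.sleep(1.0 / fps)
    finally:
        cap.release()
    return saved
```

Writing uncompressed BMPs sidesteps the codec support that continuous video capture needed, which is what made this approach viable on QNX.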

Accomplishments that we're proud of

  • QNX Integration Success: One of the few teams to successfully work around QNX's Python limitations while maintaining advanced AI capabilities
  • Seamless AI Integration: Successfully combined two different AI systems (Gemini + TwelveLabs) working with the same camera input for comprehensive survivor analysis
  • Real-Time Rescue Coordination: Built a fully functional dashboard system enabling multiple rescue teams to monitor bot operations and coordinate emergency response
  • Autonomous Decision Making: Created a system where the bot independently detects survivors, makes movement decisions, and escalates to human operators only when necessary
  • Production-Ready Architecture: Developed a scalable system that could realistically be deployed in actual emergency scenarios with proper hardware integration

What we learned

  • Adaptive Problem Solving: When faced with hardware and framework limitations, we learned to quickly pivot and find creative workarounds rather than getting stuck on initial approaches
  • API Integration Mastery: Gained deep experience combining multiple AI services and managing their different data formats and response patterns
  • Real-Time Systems Design: Learned the complexities of building systems that must respond immediately to critical situations while maintaining reliability
  • Cross-Platform Development: Navigated the challenges of developing for different operating systems (QNX, Linux) and hardware platforms (Raspberry Pi, ESP32)
  • Emergency Systems Thinking: Understood the critical importance of fail-safes, error handling, and redundancy when building systems for life-or-death scenarios

What's next for SAGE - Search And General Emergency Bot

  • GPS Integration: Implement precise location tracking for each survivor found, enabling rescue teams to navigate directly to confirmed positions
  • Enhanced Mobility: Improve autonomous navigation through rough terrain with better obstacle avoidance and all-terrain movement capabilities
  • Advanced Medical AI: Expand injury assessment capabilities to provide specific medical recommendations and triage prioritization
  • Multi-Bot Coordination: Enable multiple SAGE bots to work together, sharing discovered locations and coordinating search patterns
  • Supply Drop Mechanism: Integrate robotic arm functionality to deliver emergency medical supplies, water, and communication devices to survivors
  • Satellite Communication: Add satellite connectivity for operation in areas with destroyed communication infrastructure
  • Weather Resilience: Enhance durability for operation in extreme weather conditions common in conflict zones
  • Multi-Language Support: Implement voice communication capabilities to interact with survivors in multiple languages