The Problem We Set Out To Solve

Every year, thousands of people die in the rubble of collapsed buildings — not because rescuers don't care, but because they can't find them in time.

The numbers are brutal. After an earthquake, the survival rate for trapped victims is 91% in the first 30 minutes; by 72 hours, it drops to 36.7%. That window is everything. And right now, first responders spend those critical hours doing something that should terrify all of us: walking blindly into unstable, collapsed structures with no real-time information about where survivors are, whether the structure is about to collapse again, or whether the air is even safe to breathe.

Current search methods — acoustic detectors, trained dogs, manual searches — share a critical weakness that rarely gets discussed: they struggle to detect unconscious victims. A person who is trapped and unconscious makes no sound. They can't tap a pipe. They can't call for help. And so they wait, invisible, while the clock runs out.

We built PulseBot because we believe the first step to saving lives in a disaster is making sure the people doing the saving don't have to risk theirs to find out where to look.

What We Built

PulseBot is an autonomous unmanned ground vehicle designed to enter collapsed structures before human responders do. It navigates rubble voids — the accessible gaps and openings inside collapsed buildings — using a multi-sensor suite to detect signs of life and report findings in real time to an operator safely outside.

The rover runs two processing loops in parallel on an ESP32-S3 microcontroller. A fast safety loop, running every 50 ms, handles obstacle avoidance using three ultrasonic sensors and monitors structural stability with an MPU6050 IMU — if the robot tips dangerously or detects a sudden vibration spike consistent with an aftershock, it stops immediately and fires an alert. A slower detection loop, running every 2 seconds, reads the environment for signs of life: CO2 accumulation from human breath, thermal heat signatures, sound, gas leaks, temperature, and humidity.

What makes PulseBot genuinely different from a remote-controlled toy is its confidence scoring system. No single sensor can trigger a confirmed-survivor alert. CO2 above threshold scores one point. A thermal signature above 30°C scores another. Sound detection scores a third. Only when multiple sensors agree does PulseBot lock its position and fire a full alarm. This sharply reduces false positives — the kind that send responders into danger unnecessarily — and means that when PulseBot says someone is there, someone is there.
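The scoring logic can be sketched as a simple tally over independent sensors. This is a minimal illustration, not the team's sketch: the 30°C thermal threshold comes from the writeup, while the CO2 and sound thresholds (and all names) are assumptions.

```cpp
// Hypothetical thresholds for illustration; only the 30°C thermal
// threshold is stated in the writeup.
constexpr float CO2_PPM_THRESHOLD   = 1000.0f;  // outdoor ambient is ~400-450 ppm
constexpr float THERMAL_C_THRESHOLD = 30.0f;    // per the writeup
constexpr int   SOUND_THRESHOLD     = 600;      // raw ADC units, assumed

// One point per independent sensor that agrees.
int confidenceScore(float co2Ppm, float maxThermalC, int soundLevel) {
    int score = 0;
    if (co2Ppm > CO2_PPM_THRESHOLD)        score++;
    if (maxThermalC > THERMAL_C_THRESHOLD) score++;
    if (soundLevel > SOUND_THRESHOLD)      score++;
    return score;
}

// An alert requires corroboration from at least two sensors.
bool survivorAlert(int score) { return score >= 2; }
```

The key property is that any single spiking sensor — a hot pipe, a noisy generator — scores at most one point and never fires the alarm on its own.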

The rover is controlled via a custom-built ESP-NOW handheld controller featuring a joystick, OLED display, and mode toggle button. A short press switches the display between live sensor telemetry and the 8x8 thermal grid. A long press toggles the rover between manual joystick control and fully autonomous wall-following exploration mode. In autonomous mode, PulseBot uses omni-directional strafing to navigate smoothly around obstacles without stopping to turn — making it significantly more effective in tight rubble environments than a traditional differential drive rover.
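Distinguishing the short press (display toggle) from the long press (mode toggle) reduces to classifying how long the button was held at release. A minimal sketch, with the 800 ms threshold and all names being assumptions:

```cpp
// Hypothetical press classifier; the threshold is an assumption.
constexpr unsigned long LONG_PRESS_MS = 800;

enum class Press { Short, Long };

// Called on button release with the millis() timestamps of press and release.
Press classifyRelease(unsigned long pressedAtMs, unsigned long releasedAtMs) {
    unsigned long heldMs = releasedAtMs - pressedAtMs;
    return (heldMs >= LONG_PRESS_MS) ? Press::Long : Press::Short;
}
```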

How We Built It

The hardware stack centers on two ESP32-S3-WROOM-1 microcontrollers — one on the rover, one in the handheld controller — communicating over ESP-NOW with sub-5ms latency and no router required. The rover runs FreeRTOS with two independent tasks at different priorities, ensuring the safety loop always takes precedence over the detection loop regardless of sensor processing load.
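The two-task layout on ESP32 Arduino looks roughly like the following fragment. This is a hedged sketch of the architecture described above, not the team's code: function bodies, stack sizes, and priority values are illustrative.

```cpp
// Sketch of the dual-loop layout (ESP32 Arduino, FreeRTOS underneath).
// Bodies, stack sizes, and priorities are illustrative assumptions.
void safetyLoopTask(void *param) {
    for (;;) {
        // Read ultrasonics + IMU; stop motors on tilt or vibration spike.
        vTaskDelay(pdMS_TO_TICKS(50));
    }
}

void detectionLoopTask(void *param) {
    for (;;) {
        // Read CO2, thermal grid, sound, gas, temp/humidity; update score.
        vTaskDelay(pdMS_TO_TICKS(2000));
    }
}

void setup() {
    // Higher priority number wins, so safety pre-empts detection
    // regardless of how long sensor processing takes.
    xTaskCreatePinnedToCore(safetyLoopTask,    "safety", 4096, nullptr, 3, nullptr, 1);
    xTaskCreatePinnedToCore(detectionLoopTask, "detect", 8192, nullptr, 1, nullptr, 1);
}
```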

The sensor suite includes an AMG8833 8x8 thermal camera, MH-Z19B CO2 sensor, MPU6050 6-axis IMU, three HC-SR04 ultrasonic sensors, MQ-2 gas sensor, KY-038 microphone, DHT11 temperature and humidity sensor, and ESP32-CAM for live video streaming. Four DC motors with omni-directional wheels are driven by two L298N motor drivers, powered by an OVONIC 7.4V 5200mAh 80C LiPo battery with the L298N's onboard 5V regulator powering all electronics — no additional buck converters required.

The controller features a 128x64 SSD1306 OLED display that renders live telemetry from the rover including temperature, humidity, gas readings, ultrasonic distances, IMU tilt data, and a visual 8x8 thermal heatmap — all transmitted wirelessly in real time via a custom ESP-NOW telemetry packet.

The chassis was built and wired from scratch by the team during the hackathon, including custom power distribution, sensor mounting, and mast assembly for the camera and thermal sensor array.

The Challenges We Faced

Getting four omni wheels to move correctly was harder than expected. The direction mixing for omni-drive is mathematically straightforward on paper but physically depends entirely on how the wheels are oriented on the chassis — and ours needed custom sign corrections that we could only discover through systematic motor-by-motor testing. We built a structured test harness that isolated each motor individually, identified which were wired in reverse relative to our coordinate system, and corrected the mixing matrix accordingly.
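The mix-then-correct approach above can be shown with the standard four-wheel omni mixing equations plus a per-motor sign vector. The sign values and motor ordering here are illustrative assumptions, not the team's actual corrections:

```cpp
#include <array>
#include <algorithm>

// Per-motor sign corrections for motors ordered FL, FR, RL, RR.
// These values model wheels wired in reverse; they are assumptions,
// discovered in practice by the motor-by-motor testing described above.
constexpr std::array<int, 4> kSign = {+1, -1, +1, -1};

// Standard omni mix. x: strafe, y: forward, r: rotation, each in [-255, 255].
std::array<int, 4> mixOmni(int x, int y, int r) {
    std::array<int, 4> raw = {
        y + x + r,   // front-left
        y - x - r,   // front-right
        y - x + r,   // rear-left
        y + x - r,   // rear-right
    };
    std::array<int, 4> out{};
    for (int i = 0; i < 4; i++) {
        // Apply the wiring correction, then clamp to the PWM range.
        out[i] = std::clamp(kSign[i] * raw[i], -255, 255);
    }
    return out;
}
```

Isolating the correction in one sign vector means a miswired motor is fixed by flipping one constant instead of rewriting the mixing equations.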

ESP-NOW callback signatures changed in ESP32 Arduino core 3.x — the onDataRecv function now takes an esp_now_recv_info struct instead of a raw MAC address pointer. Every example online uses the old signature. We hit this error, tracked it down in the ESP-NOW header files, and updated both sketches accordingly.
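For anyone hitting the same wall, the change looks roughly like this fragment (ESP32 Arduino core 3.x; the sender's MAC moves into the info struct):

```cpp
// Old signature (ESP32 Arduino core 2.x):
// void onDataRecv(const uint8_t *mac, const uint8_t *data, int len);

// New signature (core 3.x):
void onDataRecv(const esp_now_recv_info *info, const uint8_t *data, int len) {
    const uint8_t *mac = info->src_addr;  // sender MAC now lives here
    // ... copy `data` into the telemetry/command struct as before ...
}
```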

Joystick calibration was another real issue. The physical center of a KY-023 joystick module outputs approximately 2702/2823 on a 12-bit ADC — not the expected 2048. We wrote a calibration routine that reads the true center at boot, applies a deadzone, and maps the asymmetric range correctly to -255/+255 on each axis.
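The asymmetric mapping can be sketched as follows. The function itself is illustrative; the 2702 center value matches the X-axis figure above, while the deadzone width is an assumption:

```cpp
// Maps a raw 12-bit ADC reading (0..4095) to [-255, 255] around a
// center measured at boot, with a deadzone. Deadzone width is an
// illustrative assumption.
int mapAxis(int raw, int center, int deadzone = 100) {
    int delta = raw - center;
    if (delta > -deadzone && delta < deadzone) return 0;  // inside deadzone
    if (delta > 0) {
        // Map (center + deadzone .. 4095] onto (0 .. 255].
        return (long)(delta - deadzone) * 255 / (4095 - center - deadzone);
    }
    // Map [0 .. center - deadzone) onto [-255 .. 0).
    return (long)(delta + deadzone) * 255 / (center - deadzone);
}
```

Because the two halves of the range are scaled independently, full stick deflection reaches ±255 on both sides even though the physical center sits well above the midpoint.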

Power architecture was a careful balance. The ESP32-CAM draws up to 310mA during active WiFi streaming, which combined with all other electronics pushed us close to the L298N regulator's 1A limit. We split the load across both L298N boards — one powers the main ESP32 and all sensors, the other powers the ESP32-CAM exclusively — keeping both rails well within safe limits.

What We're Proud Of

We're most proud of the confidence scoring system. It would have been easy to build a rover that beeps when one sensor spikes. Instead we built a system that requires corroborating evidence from multiple independent sensors before firing an alert — the same logical framework a real rescue team uses when assessing a scene. That design decision makes PulseBot not just a demo, but a genuinely defensible approach to a real problem.

We're also proud of the autonomous navigation. Watching PulseBot explore a space on its own — strafing smoothly around obstacles, slowing when CO2 rises, locking position when thermal and sound agree — and doing it all without a human at the controls, felt like the moment the project became real.

What We Learned

We learned that hardware integration is where projects live or die. Every individual component worked perfectly in isolation. Putting them together — shared I2C buses, FreeRTOS task priorities, power rail limits, ESP-NOW packet timing — revealed interactions that no amount of planning fully anticipates. The only way through is systematic testing, clear separation of concerns in the code, and a team that communicates clearly about what's working and what isn't.

We also learned that the best hackathon projects solve a problem so specific and so real that the demo tells the story by itself. We didn't need to explain why PulseBot matters. When the LED went red and the buzzer fired, everyone in the room understood immediately.

What's Next

PulseBot in its current form is a proof of concept. The path to a real deployment-ready system involves replacing the ESP32-CAM with a higher resolution streaming camera, integrating a proper CO2 sensor with UART output for more accurate readings, adding a GPS module for outdoor deployment and coordinate broadcasting, upgrading to tank tracks for genuine rubble traversal, and building a proper web dashboard that aggregates all telemetry into a unified operator interface.

The confidence scoring system, the dual-loop FreeRTOS architecture, and the ESP-NOW control protocol are all production-ready concepts that would survive the transition to a more capable hardware platform. The foundation is solid. The direction is clear. And somewhere out there is a person trapped in rubble who would be found faster because of what we built this weekend.

Built With

  • arduino ide (c++)
  • esp-now
  • freertos
  • esp32-s3-wroom-1
  • esp32-cam
  • adafruit amg8833
  • dht11
  • hc-sr04
  • ky-038
  • l298n
  • mh-z19b
  • mpu6050
  • mq-2
  • ssd1306 oled
  • omni-directional wheels
  • ovonic 7.4v lipo