Inspiration

June 24th, 2021. A condominium building near Miami suddenly collapsed. Some residents were caught in the debris and didn’t make it. Others were trapped, confused about what was going on, and needed help ASAP.

This is when search and rescue operations get underway. First responders need to dig out potential victims. But where do they dig? And will digging in the wrong spot cause collateral damage elsewhere?

This is where our autonomous search and rescue bot, RescueR, comes in.

Unlike a human, RescueR is tiny and can navigate crevices and debris. Using its smart technology, it can locate victims and point first responders to where they’re needed.

What it does

RescueR uses a variety of technologies to assist first responders in triaging victims. Here’s how it works:

A first responder uses an iPad, phone, or laptop to access our mission control software, which grants control of our robot.

This mission control sports the following features:

  • Ability to connect live over webcam to a potential victim to determine their status once found
  • Ability to move the robot manually, like a video game controller
  • Ability to check sensor readings (smoke, CO2, heat) to ensure the safety of first responders during triage (a rough polling sketch follows this list)
  • A mini-map of where victims are located, built using two microphones on the bot (binaural audio) to triangulate where someone is calling for help
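
As a rough illustration of the sensor-readings feature, here is a minimal TypeScript sketch of how mission control might poll the backend for the latest values. The endpoint path and field names are assumptions for illustration, not our exact API.

```typescript
// Minimal sketch: poll the backend for the robot's latest sensor readings.
// The endpoint path and response shape are hypothetical placeholders.
interface SensorReadings {
  temperatureC: number;
  co2Ppm: number;
  smokeDetected: boolean;
  recordedAt: string;
}

async function fetchSensorReadings(baseUrl: string): Promise<SensorReadings> {
  const res = await fetch(`${baseUrl}/robot/sensors/latest`);
  if (!res.ok) {
    throw new Error(`Sensor request failed: ${res.status}`);
  }
  return (await res.json()) as SensorReadings;
}

// Poll every 2 seconds and hand the readings to the dashboard UI.
function startSensorPolling(
  baseUrl: string,
  onUpdate: (readings: SensorReadings) => void,
): () => void {
  const timer = setInterval(async () => {
    try {
      onUpdate(await fetchSensorReadings(baseUrl));
    } catch (err) {
      console.error('Failed to refresh sensor readings', err);
    }
  }, 2000);
  return () => clearInterval(timer); // call to stop polling
}
```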

How I built it

We have four core components in our infrastructure.

  • The frontend (FE) - this is the mission control for the first responder
  • The robot (ROBOT) - this is the physical device that performs the search & rescue operations
  • The backend (BE) - this connects the ROBOT to the FE and to all the assistive telecommunication technologies
  • The database (DB) - this stores data related to the robot

The general principle is that the BE hooks up everything else; it’s the “brains” of the operation, so to speak. It’s built using a backend framework called NestJS (Node/TypeScript).
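
To give a feel for the BE, here is a minimal NestJS sketch of the kind of controller that relays movement commands from the FE to the robot. The route, DTO, and field names are illustrative assumptions, not our exact code.

```typescript
// Minimal NestJS sketch: a controller that accepts movement commands
// from the FE and relays them to the robot. Names are illustrative only.
import { Body, Controller, Post } from '@nestjs/common';

class MoveCommandDto {
  direction: 'forward' | 'back' | 'left' | 'right';
  pressed: boolean; // true on keydown, false on keyup
}

@Controller('robot')
export class RobotController {
  @Post('move')
  async move(@Body() command: MoveCommandDto): Promise<{ accepted: boolean }> {
    // In the real system this would forward the command to the robot
    // (and log it to the DB for later audio triangulation).
    console.log(`Relaying ${command.direction}, pressed=${command.pressed}`);
    return { accepted: true };
  }
}
```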

The ROBOT collects data via its onboard sensors. It uses a variety of temperature, CO2, smoke, and audio sensors to detect and map data, all interfaced through the onboard Arduino board. From there, it pushes data to its onboard Raspberry Pi server, which writes it directly to our remote MongoDB database.
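
Here is a rough sketch of that Pi-side relay, assuming the Arduino prints one JSON reading per line over serial and the Pi runs Node with the `serialport` and `mongodb` packages (the real wiring may differ):

```typescript
// Sketch: read newline-delimited JSON sensor readings from the Arduino
// over serial and insert them into a remote MongoDB collection.
import { SerialPort, ReadlineParser } from 'serialport';
import { MongoClient } from 'mongodb';

async function main(): Promise<void> {
  const mongo = new MongoClient(process.env.MONGO_URI ?? 'mongodb://localhost:27017');
  await mongo.connect();
  const readings = mongo.db('rescuer').collection('sensorReadings');

  const port = new SerialPort({ path: '/dev/ttyACM0', baudRate: 9600 });
  const parser = port.pipe(new ReadlineParser({ delimiter: '\n' }));

  parser.on('data', async (line: string) => {
    try {
      const reading = JSON.parse(line); // e.g. {"temperatureC":31.2,"co2Ppm":900,"smoke":false}
      await readings.insertOne({ ...reading, recordedAt: new Date() });
    } catch (err) {
      console.error('Skipping malformed serial line', err);
    }
  });
}

main().catch(console.error);
```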

The FE connects directly to the robot over WebRTC using Agora.io; this is how we get face-to-face communication. It also sends network requests to the ROBOT through the BE, such as whether the “back”, “forward”, “left”, or “right” keys are pressed. We store this data in the DB at the same time to help triangulate audio sources later.
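
As a sketch of that manual-control path, the FE can map arrow-key presses to the BE’s movement endpoint. The endpoint name and payload shape here are assumptions, matching the controller sketch above.

```typescript
// Sketch: translate arrow-key presses in the browser into movement
// commands sent to the backend, which relays them to the robot.
const KEY_TO_DIRECTION: Record<string, 'forward' | 'back' | 'left' | 'right'> = {
  ArrowUp: 'forward',
  ArrowDown: 'back',
  ArrowLeft: 'left',
  ArrowRight: 'right',
};

function sendMove(direction: string, pressed: boolean): void {
  // Fire-and-forget; the BE also logs the command for later triangulation.
  void fetch('/robot/move', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ direction, pressed }),
  });
}

window.addEventListener('keydown', (e) => {
  const direction = KEY_TO_DIRECTION[e.key];
  if (direction && !e.repeat) sendMove(direction, true);
});

window.addEventListener('keyup', (e) => {
  const direction = KEY_TO_DIRECTION[e.key];
  if (direction) sendMove(direction, false);
});
```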

The BE interfaces with all of our telecommunication APIs. We’re not limited to our frontend for controlling the robot: we can use SMS commands as well, both to control the robot and to announce text-to-speech messages to potential victims.
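
Here is a hedged sketch of what an inbound-SMS handler for this could look like; the payload shape shown is a simplified placeholder, since the real Telnyx/Jambonz webhooks carry more fields.

```typescript
// Sketch: an inbound-SMS webhook that maps message text to a robot command
// or a text-to-speech announcement. The payload shape is a simplified
// placeholder, not the exact Telnyx/Jambonz webhook format.
import { Body, Controller, Post } from '@nestjs/common';

const COMMANDS = new Set(['forward', 'back', 'left', 'right', 'stop']);

@Controller('sms')
export class SmsController {
  @Post('inbound')
  handleInbound(@Body() body: { from: string; text: string }): { action: string } {
    const text = body.text.trim().toLowerCase();

    if (COMMANDS.has(text)) {
      // Relay the movement command to the robot (omitted here).
      return { action: `move:${text}` };
    }

    // Anything else is spoken aloud to the victim via text-to-speech.
    return { action: `speak:${body.text}` };
  }
}
```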

We also integrate 2FA (two-factor authentication) here, to make sure the robot is only controlled by a valid first responder.
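
A minimal sketch of that 2FA gate, using in-memory one-time codes purely for illustration; the production flow delivers the code over SMS via Telnyx rather than returning it.

```typescript
// Sketch: issue a one-time code to a first responder's phone number and
// verify it before granting robot control. In-memory storage is for
// illustration only; the real flow sends the code over SMS.
const pendingCodes = new Map<string, { code: string; expiresAt: number }>();

export function issueCode(phoneNumber: string): string {
  const code = Math.floor(100000 + Math.random() * 900000).toString();
  pendingCodes.set(phoneNumber, { code, expiresAt: Date.now() + 5 * 60_000 });
  // Real implementation: deliver `code` to `phoneNumber` via SMS here.
  return code;
}

export function verifyCode(phoneNumber: string, code: string): boolean {
  const entry = pendingCodes.get(phoneNumber);
  if (!entry || Date.now() > entry.expiresAt || entry.code !== code) {
    return false;
  }
  pendingCodes.delete(phoneNumber); // single-use
  return true;
}
```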

The BE also does some behind-the-scenes calculations using an ML model, which determines where a victim is located relative to the robot using triangulated audio sources. We use an X,Y grid coordinate system based on the robot’s approximate location to map this on the frontend, giving first responders a better idea of where to look for a victim through the debris.
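
To illustrate the idea (not our exact model), a direction of arrival can be estimated from the time delay between the two microphone channels and then projected onto the X,Y grid around the robot. The cross-correlation approach and the assumed range below are simplifications.

```typescript
// Sketch: estimate the bearing of a sound source from two microphones by
// cross-correlating the channels, then place a point on the X,Y grid.
const SPEED_OF_SOUND = 343; // m/s

// Delay (in samples) that maximizes cross-correlation between channels.
function estimateDelaySamples(left: Float32Array, right: Float32Array, maxLag: number): number {
  let bestLag = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let score = 0;
    for (let i = 0; i < left.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < right.length) score += left[i] * right[j];
    }
    if (score > bestScore) {
      bestScore = score;
      bestLag = lag;
    }
  }
  return bestLag;
}

// Convert the delay into a bearing, then into an X,Y offset at an assumed range.
function estimateSourceXY(
  left: Float32Array,
  right: Float32Array,
  sampleRate: number,
  micSpacingM: number,
  assumedRangeM: number,
): { x: number; y: number } {
  const maxLag = Math.ceil((micSpacingM / SPEED_OF_SOUND) * sampleRate);
  const delaySec = estimateDelaySamples(left, right, maxLag) / sampleRate;
  // sin(bearing) = c * delay / mic spacing, clamped to [-1, 1].
  const sinBearing = Math.max(-1, Math.min(1, (SPEED_OF_SOUND * delaySec) / micSpacingM));
  const bearing = Math.asin(sinBearing); // radians, relative to the robot's heading
  return {
    x: assumedRangeM * Math.sin(bearing),
    y: assumedRangeM * Math.cos(bearing),
  };
}
```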

When a potential victim is found, we take the live video feed and run it through a separate service that live-transcribes what the victim is saying. This is useful for analyzing reports after the fact, or for assisting with real-time language translation.
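
The exact Symbl.ai integration is not shown here; as a generic, hedged sketch of the shape of such a pipeline, audio chunks could be streamed over a WebSocket and captions surfaced as they arrive. The URL and message format below are hypothetical placeholders, not Symbl.ai’s actual API.

```typescript
// Generic sketch of a live-transcription pipeline: stream audio chunks over
// a WebSocket and surface captions as they arrive. The URL and message
// format are hypothetical placeholders, not any specific provider's API.
export function startLiveCaptions(
  socketUrl: string, // placeholder for a provider's streaming endpoint
  onCaption: (text: string, isFinal: boolean) => void,
): { sendAudioChunk: (chunk: ArrayBuffer) => void; stop: () => void } {
  const socket = new WebSocket(socketUrl);

  socket.onmessage = (event) => {
    const message = JSON.parse(event.data as string);
    if (message.type === 'caption') {
      onCaption(message.text, message.isFinal === true);
    }
  };

  return {
    sendAudioChunk: (chunk) => {
      if (socket.readyState === WebSocket.OPEN) socket.send(chunk);
    },
    stop: () => socket.close(),
  };
}
```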

We use the following telecommunication APIs:

  • Telnyx & Jambonz - Commanding the robot over text, 2FA
  • Symbl.ai - Real-time transcription
  • Subspace - Low-latency data transfer between the robot & server
  • AWA-Network
  • Agora.io - WebRTC (Video feed from the Robot)

Challenges I ran into

We ran into hardware challenges. When we wired up our ROBOT dog, we hadn’t considered how the wires would impact locomotion on the servos. It could move its legs, but it couldn’t move them properly. So it didn’t walk quite right, but hey, at least the legs moved when commanded.

We had some issues with the backend connecting to the database. Apparently it silently failed when we forgot to specify the right database name 🤦‍♂️.

Determining the overall engineering architecture was challenging. There are so many ways to hook up a system like this; we chose the simplest version of it. We documented all of our API specs in Swagger ahead of time, which avoided a lot of headaches.
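
For context, generating that spec in NestJS is mostly a matter of decorators. Here is the move endpoint from the earlier BE sketch, annotated for Swagger; the tags, summaries, and DTO are illustrative.

```typescript
// Sketch: Swagger annotations on a NestJS endpoint so the spec is generated
// automatically. Tags, summaries, and the DTO are illustrative examples.
import { Body, Controller, Post } from '@nestjs/common';
import { ApiOperation, ApiProperty, ApiTags } from '@nestjs/swagger';

class MoveCommandDto {
  @ApiProperty({ enum: ['forward', 'back', 'left', 'right'] })
  direction: string;

  @ApiProperty({ description: 'true on keydown, false on keyup' })
  pressed: boolean;
}

@ApiTags('robot')
@Controller('robot')
export class RobotController {
  @Post('move')
  @ApiOperation({ summary: 'Relay a movement command to the robot' })
  move(@Body() command: MoveCommandDto): { accepted: boolean } {
    return { accepted: true };
  }
}
```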

We also ran into some issues hooking up some of the telecommunication APIs.

Accomplishments that I'm proud of

We’re proud of building a working prototype in just two days. Most of us have worked together before at hackathons, so we played to each other’s strengths as best as possible. Everyone played a vital role in building this MVP.

What I learned

These are some of the things we learned on the team.

  • Vincent learned how to write backend APIs in NestJS, a framework built on TypeScript/Node. It helped us quickly deploy an MVP such as this and autogenerate Swagger docs on the fly.
  • Davendra learned how to integrate with speech-to-text APIs in Python.
  • Amy learned how to craft well-documented presentations and manage the project.
  • Chris learned how to build a physical robot.
  • Ebtesam learned how to work with multiple telecommunication APIs.
  • Doug learned how to integrate WebRTC and Agora.io through a React frontend.
  • Muntaser learned how to write ML models for locating users through sound.

What's next for RescueR

RescueR’s next stage is to refine the initial prototype, improve hardware integrations, and do stress and field testing.

We also won $4,100 through TADHack for this project. Read about it here: https://blog.tadhack.com/2021/09/26/tadhack-global-2021-summary/
