Currently, in war-torn and disaster-struck areas, first responders risk their lives unnecessarily because they lack the resources to accurately and safely assess a disaster zone. By using robotics, we can reduce the risk to human life.

What it does

We enabled the SPOT robot to act as an independent rescue machine that understands human emotion and natural language using AI, including the ability to detect a speaker's language and adjust its output accordingly.
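A minimal sketch of the perception step described above: take a speech transcript, detect the speaker's language, and tag a coarse emotion so responses can be generated in the same language. The function name, keyword lists, and labels are illustrative assumptions, not the project's actual code; in practice this is backed by AI services such as Hume.

```python
def analyze_utterance(text: str) -> dict:
    """Detect language and a coarse emotion label for a transcript (toy sketch)."""
    lowered = text.lower()
    # Toy language detection keyed on a few Spanish words; the real
    # pipeline would use an AI/NLP language-identification service.
    language = "es" if any(w in lowered for w in ("ayuda", "socorro")) else "en"
    # Toy emotion tagging keyed on distress words; Hume provides
    # genuine emotion analysis in the described system.
    emotion = "distress" if any(w in lowered for w in ("help", "ayuda", "hurt")) else "neutral"
    return {"language": language, "emotion": emotion}
```

The returned dict lets downstream code pick both a response language and a response tone from one call.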

How we built it

We built it using a variety of tools: Hume for AI-powered transcription and emotion analysis, OpenCV for computer-vision models, Flask for the backend, and Next.js for the frontend.
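To show how these pieces could fit together, here is a minimal sketch of a Flask backend that accepts emotion/transcript events from the perception layer and queues a robot action. The `/event` endpoint, payload fields, and action names are assumptions for illustration, not the project's actual API.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
command_queue = []  # stand-in for the real channel to the robot control server

@app.route("/event", methods=["POST"])
def handle_event():
    """Receive a perception event and queue a high-level robot action."""
    payload = request.get_json(force=True)
    # Route high-distress detections to an "approach" action; otherwise observe.
    action = "approach" if payload.get("emotion") == "distress" else "observe"
    command_queue.append(action)
    return jsonify({"queued": action})
```

A frontend (here, the Next.js app) would POST JSON events to this endpoint and render the queued actions.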

Challenges we ran into

Connecting to and controlling SPOT was extremely difficult. We got around this by building a custom control server that connects directly to SPOT and drives its motors. The Hume API was relatively friendly to use, and we fed it a live data stream via Continuity Camera.
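The core of such a control server is translating high-level directives into velocity commands that a driver layer (for example, the Boston Dynamics Spot SDK) would send to the robot's motors. The directive names and speed values below are illustrative assumptions, not SPOT's actual command set.

```python
# Map high-level directives to (v_x, v_y, v_rot) velocity tuples.
# Values are hypothetical: meters/second for translation, radians/second for yaw.
VELOCITIES = {
    "forward": (0.5, 0.0, 0.0),
    "back":    (-0.5, 0.0, 0.0),
    "left":    (0.0, 0.0, 0.5),
    "right":   (0.0, 0.0, -0.5),
    "stop":    (0.0, 0.0, 0.0),
}

def to_velocity(directive: str) -> tuple:
    """Translate a directive into a velocity command; unknown input stops the robot."""
    # Falling back to "stop" keeps unrecognized commands fail-safe.
    return VELOCITIES.get(directive, VELOCITIES["stop"])
```

Keeping this translation pure (no hardware calls) made it easy to test without the robot attached.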

Accomplishments that we're proud of

Fixing SPOT's internal Linux dependencies. This issue blocked every team from using SPOT and took up most of the first day, but by solving it we enabled SPOT to be used by all teams.

What we learned

We learned that combining various tech stacks across a range of products, both hardware and software, is quite complex. We approached these problems by introducing levels of abstraction that allowed parts of the team to work in parallel.
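One way to sketch that kind of abstraction layer: a small robot interface that the frontend and backend teams could code against via a mock while the hardware team built the real SPOT driver. Class and method names here are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class Robot(ABC):
    """Interface both the real SPOT driver and the mock implement."""

    @abstractmethod
    def move(self, directive: str) -> str:
        """Execute a movement directive and report the result."""

class MockRobot(Robot):
    """Hardware-free stand-in that records directives instead of moving motors."""

    def __init__(self):
        self.log = []

    def move(self, directive: str) -> str:
        self.log.append(directive)
        return f"mock: {directive}"
```

Swapping `MockRobot` for the real driver at integration time required no changes to the calling code.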

What's next for Spotter - Revolutionizing Disaster Relief

We hope to make SPOTTER fully autonomous, so that SPOT can traverse and navigate disaster environments completely independently, locating survivors and assessing the overall situation.

Built With

  • bun
  • flask
  • hume
  • next
  • openai
  • spot