About the Project

The AI Mobility Assistant is designed to change how we move people safely and intelligently through critical environments — like hospitals, airports, and disaster zones.

Inspiration

Our idea sparked after watching footage from a recent airport crash. In those chaotic moments, emergency workers were rushing to move injured passengers, pushing stretchers across crowded, confused spaces. It was raw, urgent, and messy, not because anyone did anything wrong, but because there was no real-time support to help them move faster, safer, and smarter. We found ourselves asking: what if mobility systems could think for themselves? What if stretchers and wheelchairs could understand their surroundings, avoid obstacles, and respond to voice commands instantly, even offline? That moment planted the seed for the AI Mobility Assistant.

What it does

The AI-Powered Mobility Assistant uses VSLAM (Visual Simultaneous Localization and Mapping) to map and navigate complex environments, like hospitals, in real time. It understands and responds to natural voice commands through a locally hosted Large Language Model (LLM), ensuring fast, private, and reliable interaction without needing the internet. Whether it's autonomously carrying a stretcher to a room, avoiding obstacles, or assisting emergency workers, it's designed to make movement safer, smarter, and more human-centered.

How we built it

We poured our hearts and minds into building this system, combining several key technologies. We equipped the assistant with depth cameras so it can "see" the environment much as human eyes do, perceiving the world in three dimensions and understanding distances and spatial relationships. Using a SLAM algorithm, the assistant builds and updates a map in real time, even in changing, crowded spaces: in effect, it keeps a mental map of its surroundings and revises it as things change. Through on-device APIs, the system understands voice commands without an internet connection, which ensures both privacy and instant response. That matters in emergencies, where no network may be available. All computation happens locally, on the device itself, making the system reliable even in disaster zones with no network infrastructure. It's designed to work when you need it most.
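To make the mapping step concrete, here is a minimal sketch of how a single depth reading can update a 2D occupancy grid: cells along the ray are marked free and the cell at the measured depth is marked occupied. The grid convention, cell size, and function names are illustrative assumptions, not the project's actual SLAM implementation.

```python
import math

def update_grid(grid, pose, bearing, depth, cell=0.5):
    """Update an occupancy grid from one depth ray.

    Illustrative convention: grid is a dict keyed by (ix, iy) cell index,
    -1 = free, 1 = occupied. pose is the sensor (x, y) in meters, bearing
    is the ray direction in radians, depth is the measured range in meters.
    """
    x0, y0 = pose
    steps = int(depth / cell)
    for i in range(steps):
        d = i * cell
        cx = int((x0 + d * math.cos(bearing)) / cell)
        cy = int((y0 + d * math.sin(bearing)) / cell)
        grid[(cx, cy)] = -1    # space along the ray is observed free
    ex = int((x0 + depth * math.cos(bearing)) / cell)
    ey = int((y0 + depth * math.sin(bearing)) / cell)
    grid[(ex, ey)] = 1         # obstacle at the measured depth
    return grid
```

Running this for every ray in a depth frame, every frame, is the intuition behind a live map that stays current as stretchers, people, and doors move; a real SLAM system adds probabilistic updates and pose estimation on top.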

Challenges we ran into

We faced some tough challenges along the way, but we were determined to overcome them. The biggest was optimizing for edge devices: balancing the need for powerful vision, SLAM, and LLM processing against the limited resources of small, portable hardware. It was like trying to fit a powerful engine into a compact car.
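One common way to fit heavy models onto small hardware is weight quantization. This sketch shows symmetric int8 quantization in plain Python as an illustration of the general technique, not of this team's actual optimization pipeline.

```python
# Illustrative sketch of symmetric int8 weight quantization, a standard
# trick for shrinking models to fit edge devices. Not the project's
# actual pipeline; real deployments use a framework's quantizer.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]
```

Storing each weight in one byte instead of four cuts memory and bandwidth roughly fourfold, at the cost of small rounding error bounded by the scale factor; that trade-off is exactly the engine-in-a-compact-car balancing act described above.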

Accomplishments that we're proud of

Building a system not just for technology’s sake, but to make a real difference when it matters most — during emergencies and critical patient care moments.

What we learned

This project taught us some invaluable lessons. A few seconds saved during patient transport could mean a life saved; this project isn't just about technology, it's about making a difference in people's lives. Critical environments need systems that work anytime, anywhere, even without cloud access: reliability is paramount. And technology must assist, not overwhelm, people, especially during chaotic moments. It's about creating tools that are intuitive and easy to use when you're under pressure.

What's next for The AI-Powered Mobility Assistant

Adding touchscreen controls and gesture recognition alongside voice commands.
