Inspiration
Hospital hallways are rigid, repetitive, and exhausting to navigate, especially for patients with limited mobility or staff managing multiple floors. We asked: what if a wheelchair could just know where to go, without needing an actual wheelchair at all?
What it does
Omni-Assist transforms any standard chair into a self-navigating vehicle through an adjustable ratchet system. Users simply speak natural-language commands like "take me to room 204" or "go to the nurses' station." The chair then takes over, handling obstacle avoidance, real-time mapping, and a smooth arrival while providing spoken confirmation along the route.
Best of all, the entire hardware stack costs under $200: off-the-shelf DC motors, commodity H-bridge drivers, a $10 Arduino Mega, a standard USB LiDAR, and lots of scrap metal, with no proprietary robotics hardware required. That makes it realistic to deploy across a hospital fleet at a fraction of the cost of market-standard powered wheelchairs, which can run upwards of $2,000 USD.
How we built it
- Voice loop: ElevenLabs STT + TTS for natural, low-latency interaction
- LLM command parser: Translates spoken intent into navigation goals grounded in a live landmark database (rough sketch after this list)
- SLAM: 2D LiDAR builds and refines an occupancy grid map in real time
- Spatial memory: 4-layer persistent map of static obstacles, visual landmarks, dynamic traffic priors, and learned traversal cost, saved across sessions (sketched below the list)
- Path planner: A* global planning + Dynamic Window Approach local obstacle avoidance (planner sketch below)
- Motor control: Arduino Mega drives 4 mecanum wheels via VNH5019 H-bridges, with a hardware e-stop and serial watchdog for safety
- Perception: Webcams for visual odometry and landmark recognition; optional phone IMU for heading correction
- Hardware: All of our hardware was built from the ground up at YHacks, out of the aluminium extrusions, mecanum wheels, electronic parts, and scrap metal we brought with us
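To make the LLM command-parser step concrete, here is a rough sketch of the grounding idea; the landmark coordinates, prompt wording, and the call_llm() stand-in are illustrative assumptions rather than our exact implementation.

```python
import json

# Hypothetical landmark database: name -> (x, y) in map metres (example values).
LANDMARKS = {
    "room 204": (12.4, 3.1),
    "nurses' station": (5.0, 8.7),
}

PROMPT_TEMPLATE = (
    "You control a self-navigating chair. Known destinations: {names}. "
    'Reply with JSON like {{"destination": "<name>"}} for this request: {request}'
)

def call_llm(prompt: str) -> str:
    """Stand-in for whichever LLM endpoint is used; returns the model's text reply."""
    raise NotImplementedError

def parse_command(transcript: str):
    """Ground a spoken request in the landmark database and return an (x, y) goal."""
    prompt = PROMPT_TEMPLATE.format(names=", ".join(LANDMARKS), request=transcript)
    reply = json.loads(call_llm(prompt))
    name = reply.get("destination", "")
    return LANDMARKS.get(name)  # None means "ask the rider to rephrase"
```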
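The spatial-memory layers are easiest to picture as four aligned grids persisted together; the field names and the .npz persistence format below are assumptions about one way to store them, not a spec of our actual files.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SpatialMemory:
    shape: tuple = (400, 400)      # grid cells, e.g. 5 cm resolution (example values)
    static: np.ndarray = None      # walls and fixed furniture from SLAM
    landmarks: np.ndarray = None   # where visual landmarks have been recognized
    traffic: np.ndarray = None     # dynamic traffic priors (people, carts)
    cost: np.ndarray = None        # learned traversal cost from past trips

    def __post_init__(self):
        for name in ("static", "landmarks", "traffic", "cost"):
            if getattr(self, name) is None:
                setattr(self, name, np.zeros(self.shape, dtype=np.float32))

    def save(self, path):
        np.savez_compressed(path, static=self.static, landmarks=self.landmarks,
                            traffic=self.traffic, cost=self.cost)

    @classmethod
    def load(cls, path):
        d = np.load(path)
        return cls(shape=d["static"].shape, static=d["static"],
                   landmarks=d["landmarks"], traffic=d["traffic"], cost=d["cost"])
```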
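And a minimal sketch of the global half of the planner, assuming a 4-connected occupancy grid with unit step cost; the real pipeline hands this path to the Dynamic Window Approach for local obstacle avoidance.

```python
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 means occupied; start and goal are (row, col) cells."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance heuristic, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]
    came_from, best_g = {}, {start: 0}

    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:            # walk parents back to the start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt], came_from[nxt] = ng, cur
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None  # no route; DWA never gets a global path to follow
```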
Why hospitals
Hospital environments are uniquely well-suited for this approach. Floors are structured, corridors are consistent, and room numbering is predictable, which means SLAM converges fast and landmark grounding is reliable. The system's spatial memory gets better the more it navigates a building, making it ideal for a setting where routes repeat daily. It reduces burden on staff, gives mobility-impaired patients independence, and requires no infrastructure changes, no beacons, no floor markers, no building modifications.
And because the hardware is so cheap, a hospital could outfit an entire floor for less than the cost of a single powered wheelchair.
Challenges
Fusing LiDAR, camera, and wheel odometry into a coherent pose estimate without ROS2 was the hardest part. We built a custom EKF that weights sensors by confidence: LiDAR dominates in rooms, while cameras fill in featureless hallways. Getting smooth, jerk-limited motion that's actually comfortable for a seated passenger also required careful tuning of the velocity smoother.
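A toy version of that confidence weighting, assuming a linear measurement model; the scaling rule and matrix shapes here are illustrative, not our exact filter.

```python
import numpy as np

def kalman_update(x, P, z, R_base, confidence, H=None):
    """One measurement update; confidence in (0, 1] inflates or shrinks the noise."""
    H = np.eye(len(x)) if H is None else H
    R = R_base / max(confidence, 1e-3)   # low confidence -> large noise -> small influence
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# e.g. a LiDAR scan-match in a cluttered room might get confidence ~0.9,
# while the same source in a long blank corridor might drop to ~0.2.
```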
On the hardware side, working with scrap metal was a challenge in itself. Many of the holes did not line up with the parts we needed to mount, so we constantly had to improvise brackets, spacers, and custom prints to make everything fit securely. A lot of the build came down to adapting imperfect materials into a frame that was still rigid, reliable, and safe enough to transport a person. Power delivery was another big challenge: driving four motors at once meant thinking carefully about current draw, wiring reliability, heat, and safe emergency shutdown. A system like this is not just about making the motors spin; it is about making sure they respond consistently under load, stop safely, and do not introduce risk to the rider. Integrating the Arduino Mega, motor drivers, and the rest of the electronics into one dependable system took a lot of iteration.
Accomplishments
Real-time SLAM with no ROS dependency. A full voice-to-motion pipeline from spoken command to physical navigation. A persistent spatial memory that genuinely improves across trips. And a safety layer that cannot be overridden by software.
What we learned
Mecanum kinematics are deceptively simple until you're debugging which roller pattern your wheels actually use. Also: pre-caching TTS audio for common phrases like "On my way" cuts perceived latency dramatically.
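For reference, the textbook mecanum inverse kinematics we kept re-checking looks roughly like this; the wheel order, sign convention, and dimensions below are assumptions that flip if the roller pattern is mounted the other way.

```python
def mecanum_wheel_speeds(vx, vy, wz, half_l=0.25, half_w=0.22):
    """Body velocities (m/s forward, m/s left, rad/s) -> speeds for FL, FR, RL, RR wheels."""
    k = half_l + half_w            # half wheelbase length + half track width (example values)
    fl = vx - vy - k * wz
    fr = vx + vy + k * wz
    rl = vx + vy - k * wz
    rr = vx - vy + k * wz
    return fl, fr, rl, rr          # divide by wheel radius for true angular velocities
```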
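The pre-caching trick is just a dictionary in front of the TTS call; synthesize() below is a stand-in for the provider call, not an actual ElevenLabs SDK signature.

```python
COMMON_PHRASES = ["On my way", "Arrived at your destination", "Obstacle ahead, rerouting"]
_CACHE: dict[str, bytes] = {}

def synthesize(text: str) -> bytes:
    """Stand-in: returns raw audio bytes from the TTS provider."""
    raise NotImplementedError

def warm_cache():
    for phrase in COMMON_PHRASES:            # done once at startup, off the hot path
        _CACHE[phrase] = synthesize(phrase)

def speak(text: str) -> bytes:
    if text not in _CACHE:                   # only novel sentences pay the network round trip
        _CACHE[text] = synthesize(text)
    return _CACHE[text]
```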
What's next
Multi-floor navigation via elevator integration. Patient-specific profiles (preferred speed, saved destinations). Integration with hospital room management systems so the chair always knows current occupancy.
We're also exploring a robotic arm attachment for assisted reach tasks like grabbing items, pressing elevator buttons, or handing off objects to patients. VIAM makes this dramatically easier: its modular component model lets you drop in arm control, configure kinematics, and expose it through the same resource graph as the drive base, without rebuilding the software stack. VIAM's cloud-managed fleet support also pairs perfectly with a hospital deployment scenario where you'd be managing dozens of chairs remotely.