Inspiration
More than 5 million Americans struggle with paralysis, making it challenging to complete even the most basic daily tasks, such as communicating, controlling their environment, or navigating public spaces. This made us realize how painful it can be to lose not just mobility, but independence. We were inspired to build a system that empowers individuals to reclaim autonomy while preserving dignity, combining Snap Spectacles AR, Fetch.ai agents, and Base44 into a comprehensive virtual ecosystem and personal assistant that bridges technology with human capability.
What it does
We developed an interface on the Snap Spectacles (AR glasses) that displays overlays generated by multi-functional virtual agents and enables context-aware guidance through hand motion and voice control, depending on the severity of the user's paralysis. Alongside this, we built autonomous Fetch.ai agents that handle communication between a patient and their surroundings in three ways:

1. Real-time "send help" or "emergency" alerts (voice and hand control), coupled with fall detection through the Spectacles' cameras, which trigger automatic alerts to a caregiver or 911.
2. Environment control of smart home (IoT-connected) devices, so patients can use their voice to turn off the lights, turn on the air conditioner, and gain greater control over their homes.
3. Last, and most importantly, accessible navigation. We designed a Fetch.ai agent that helps Snap Spectacles users navigate indoor and outdoor travel along accessible routes, and a second Fetch.ai agent that guides users to the nearest wheelchair-friendly bathroom, elevators, and other accessible facilities. Both original agents are now live on Agentverse; a simplified sketch of this kind of agent appears below.

Additionally, the data tracked by the AR glasses is sent in real time to our Caregiver Website, which lets caregivers receive real-time alerts, monitor navigation requests, control connected devices remotely, and review interaction history.
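As a rough illustration of how an accessibility agent like this can be structured with Fetch.ai's uAgents framework, the sketch below shows a request/response flow for finding the nearest accessible facility. The message models, agent name, seed, and the hard-coded lookup are hypothetical placeholders, not our production agents on Agentverse:

```python
# Minimal sketch of a Fetch.ai uAgent that answers "nearest accessible restroom"
# queries. Names, models, and the lookup logic are illustrative placeholders.
from uagents import Agent, Context, Model


class FacilityRequest(Model):
    lat: float
    lon: float
    facility_type: str  # e.g. "restroom", "elevator"


class FacilityResponse(Model):
    name: str
    lat: float
    lon: float
    distance_m: float


facility_agent = Agent(name="accessible_facility_finder", seed="demo-seed-phrase")


@facility_agent.on_message(model=FacilityRequest, replies={FacilityResponse})
async def handle_request(ctx: Context, sender: str, msg: FacilityRequest):
    # A real agent would query an accessibility dataset or mapping API here;
    # this returns a hard-coded example so the message flow is easy to follow.
    await ctx.send(sender, FacilityResponse(
        name="Accessible restroom - Main St. Library",
        lat=msg.lat + 0.001,
        lon=msg.lon + 0.001,
        distance_m=120.0,
    ))


if __name__ == "__main__":
    facility_agent.run()
```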
How we built it
MobiLens was built using Lens Studio for the Snap Spectacles. We developed two uAgents, deployed on Agentverse and accessible through ASI:One, that communicate with one another to find routes to the nearest accessible restrooms given a location. We then used voice recognition models to listen to the user and determine what they are asking for. If they ask for directions to the nearest restroom, we split the route into segments and display arrows directing the user toward it, with live updates to their location and a voice providing further directions. If they ask for assistance because they are in trouble or need help, we send a message over a WebSocket to a server, which relays it to the other devices connected on the same account, alerting caretakers to the emergency and the user's live location through the caretaker website we built with Base44; a rough sketch of this relay is shown below.
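For the emergency path, a minimal relay along these lines could look like the following. This is a sketch using Python's websockets package; the JSON message shape and port are assumptions, and the real deployment also pushes alerts to the Base44 caregiver site:

```python
# Sketch of a relay server: any "help" message from the Spectacles is broadcast
# to every other client (caregiver devices) connected on the same account.
# The JSON schema and port are illustrative assumptions.
import asyncio
import json

import websockets

connected = set()  # all open patient / caregiver sockets


async def relay(websocket):
    connected.add(websocket)
    try:
        async for raw in websocket:
            alert = json.loads(raw)  # e.g. {"type": "help", "lat": ..., "lon": ...}
            # Forward the alert to every other connected device.
            for peer in connected:
                if peer is not websocket:
                    await peer.send(json.dumps(alert))
    finally:
        connected.discard(websocket)


async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever


if __name__ == "__main__":
    asyncio.run(main())
```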
Challenges we ran into
Building MobiLens wasn't without obstacles. First, we had to optimize a voice-first experience on hardware with limited resources while ensuring that commands were reliably captured and processed, which was a challenging task. It also took us time to figure out how to coordinate agents across domains so that communication, IoT control, and navigation came together in one seamless flow. Finally, we ran into issues getting accessibility data for navigation, since building an agent that accurately identifies and routes to the nearest accessible bathrooms required careful data sourcing and mapping.
Accomplishments that we're proud of
Despite a tight 24-hour deadline, we successfully learned new technologies we hadn't experimented with before, such as Snap AR and Fetch.ai. While there was a learning curve, we pushed through, figured out how these technologies worked, and ultimately completed what we set out to build from the start with minimal errors in our solution.
What we learned
Some of the most valuable lessons we learned during this hackathon relate to working with augmented reality. Specifically, we learned how to design voice-first AR interactions that remain intuitive and dignified for users with limited mobility, how to orchestrate multiple AI agents for context-aware decision-making in real time, and how to present all of that through a pair of AR glasses. Finally, we also realized how crucial it is to balance independence for patients with peace of mind for caregivers.
What's next for MobiLens
MobiLens is designed as a multi-agent ecosystem, making it inherently scalable. In the near term, new agents can expand support to rehabilitation guidance, smart healthcare integration, and real-time emergency response. In the long term, the platform can serve broader accessibility needs, from elderly care to low-vision assistance, while partnering with healthcare providers, IoT companies, and mobility services.
Built With
- ar
- base44
- fetch.ai
- javascript
- lensstudio
- llms
- machine-learning
- ngrok
- python
- vercel



