Over 300 million people worldwide are visually impaired.
Many must rely on other people to perform even basic tasks.
By combining robotics, advanced sensing, and generative AI, Morse Ray transforms accessibility into intelligent mobility.
Morse Ray emits invisible rays and calculates their Time-of-Flight to accurately map the user's surroundings.
Inspiration
The idea for Morse Ray originated while we were working on a robotics project that involved distance detection and object awareness. We became interested in how robots sense their surroundings by emitting a signal and timing its return, and we began exploring how a similar concept could be adapted for human assistance. This led us to think about visually impaired individuals, who must constantly rely on external tools or other people to navigate spaces safely. Morse Ray was inspired by the question: what if a wearable device could sense the environment ahead and communicate danger before contact occurs?
What it does
Morse Ray is a conceptual smart assistive system designed as wearable spectacles for visually impaired users. The idea is that the device would emit distance-sensing signals forward, detect how close surrounding objects are, and determine whether they pose a potential danger. If an object comes too close, the system would respond by vibrating to alert the user. In addition, an AI voice agent powered by ElevenLabs would guide the user with simple spoken instructions such as warnings or navigation cues.
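To make the intended behavior concrete, here is a minimal C++ sketch of how readings could be bucketed into proximity levels that select between haptic and spoken feedback. The level names and the 50 cm / 150 cm cutoffs are illustrative assumptions, not decisions the team has made:

```cpp
#include <cstdio>

// Hypothetical proximity levels; the 50 cm / 150 cm cutoffs are
// illustrative placeholders, not tuned values from the project.
enum class Proximity { Safe, Caution, Danger };

Proximity classify(float distance_cm) {
  if (distance_cm < 50.0f)  return Proximity::Danger;   // about to make contact
  if (distance_cm < 150.0f) return Proximity::Caution;  // object approaching
  return Proximity::Safe;
}

int main() {
  // Danger maps to immediate vibration, Caution to a spoken cue,
  // so the two feedback channels stay distinct and predictable.
  const float tests[] = {200.0f, 120.0f, 30.0f};
  for (float d : tests) {
    switch (classify(d)) {
      case Proximity::Danger:  std::printf("%.0f cm -> vibrate\n", d);   break;
      case Proximity::Caution: std::printf("%.0f cm -> voice cue\n", d); break;
      case Proximity::Safe:    std::printf("%.0f cm -> no alert\n", d);  break;
    }
  }
  return 0;
}
```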
How we built it
This project is currently at the concept and system-design stage. The architecture is designed around an Arduino or ESP32 microcontroller connected to distance sensors such as ultrasonic or IR modules. The microcontroller would calculate distance using time-of-flight principles and classify proximity levels. Vibration motors would provide immediate feedback when objects are detected within unsafe ranges. Sensor data would be sent to a cloud backend such as Firebase, which would trigger an AI voice agent to generate guidance using ElevenLabs. No physical prototype has been built yet; the work so far focuses on design, logic flow, and feasibility.
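As a feasibility sketch of that core loop, assuming an HC-SR04-style ultrasonic sensor, arbitrary pin choices, and a placeholder threshold (no hardware exists yet), the time-of-flight measurement on an Arduino could look like this:

```cpp
// Sketch only: HC-SR04-style ultrasonic sensor and vibration motor on
// assumed pins; the 80 cm danger threshold is a placeholder, not a spec.
const int   TRIG_PIN  = 9;
const int   ECHO_PIN  = 10;
const int   MOTOR_PIN = 5;
const float DANGER_CM = 80.0f;

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(MOTOR_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Emit a 10 microsecond ultrasonic ping.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  // Time-of-flight: width of the echo pulse in microseconds, out and back.
  unsigned long us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // 30 ms timeout (~5 m)
  if (us > 0) {
    // Speed of sound ~343 m/s = 0.0343 cm/us; halve for the round trip.
    float cm = us * 0.0343f / 2.0f;
    digitalWrite(MOTOR_PIN, cm < DANGER_CM ? HIGH : LOW);  // vibrate if close
    Serial.println(cm);
  }
  delay(60);  // let stray echoes settle before the next ping
}
```

The division by two reflects the signal traveling to the obstacle and back, which is the same time-of-flight principle described above; classification and haptic feedback stay on-device, so only higher-level events would need to reach the cloud backend.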
Challenges we ran into
One of the main challenges was ensuring the concept remained simple and intuitive for users while still being technically effective. We also had to carefully think through when the system should vibrate versus when voice guidance should activate, so that alerts do not become overwhelming. Another challenge was designing a system that balances on-device responsiveness with cloud-based intelligence, especially considering connectivity limitations.
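One standard way to keep alerts from becoming overwhelming, sketched here under assumed timings rather than as the project's settled design, is to pair hysteresis (separate enter/exit thresholds) with a cooldown that rate-limits voice cues while letting haptics fire immediately:

```cpp
#include <cstdio>

// Illustrative timings and thresholds; real values would need user testing.
const long  VOICE_COOLDOWN_MS = 5000;   // minimum gap between spoken cues
const float ENTER_CM = 80.0f;           // start alerting below this distance
const float EXIT_CM  = 100.0f;          // stop alerting above this distance

bool alerting = false;
long last_voice_ms = -VOICE_COOLDOWN_MS;  // allow the first cue immediately

void update(float distance_cm, long now_ms) {
  // Hysteresis: separate enter/exit thresholds keep the alert from
  // flickering when a reading hovers near a single cutoff.
  if (!alerting && distance_cm < ENTER_CM)    alerting = true;
  else if (alerting && distance_cm > EXIT_CM) alerting = false;

  if (alerting) {
    std::printf("%5ld ms: vibrate\n", now_ms);          // haptics every cycle
    if (now_ms - last_voice_ms >= VOICE_COOLDOWN_MS) {  // voice is rate-limited
      std::printf("%5ld ms: speak \"obstacle ahead\"\n", now_ms);
      last_voice_ms = now_ms;
    }
  }
}

int main() {
  // Simulated readings, one per second: an object drifts in, lingers, recedes.
  const float readings[] = {120, 90, 70, 70, 70, 85, 95, 110};
  long t = 0;
  for (float d : readings) { update(d, t); t += 1000; }
  return 0;
}
```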
Accomplishments that we're proud of
We are proud of developing a clear, feasible system architecture grounded in real robotics principles rather than abstract ideas. The project demonstrates how a concept from robotic sensing can be adapted into a human-centered assistive technology. We also took care to design the system ethically, prioritizing safety, accessibility, and honest representation of what has and has not been built.
What we learned
Through this project, we learned how robotic perception concepts such as distance sensing and signal reflection can be translated into assistive technology ideas. We also gained insight into the importance of transparency in engineering projects and the need to design systems that prioritize user comfort, clarity, and safety from the very beginning.
What's next for Morse Ray
The next step for Morse Ray would be building a basic hardware prototype to test distance sensing and vibration feedback. After validating the core interaction, we would integrate cloud connectivity and AI voice guidance. Future iterations could explore multi-directional sensing, object classification, and further miniaturization to make the device practical for everyday use.
