The inspiration behind cAIne
If you have someone next to you, look at them. Are they wearing glasses, or are you? Chances are that's the case. Approximately sixty-five percent of the world has some form of vision correction; half of those individuals have very low vision, and half of that population has no vision at all. XR offers a unique opportunity for an evolved form of vision correction, mixing context-aware AI and haptic feedback to create a new cane - a cAIne. Using the Quest's passthrough and camera access (and, alternatively, the Meta RayBans), we scan the environment in real time, calculating distances and giving the end user instant responses. The interface offers an intuitive gesture-based UI for hand tracking, or vibrational feedback for controller use, letting each user decide which feels more comfortable.
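As a rough illustration of how distance could drive the vibrational feedback described above, here is a minimal sketch. This is not the project's actual code; the function name and the distance range are assumptions chosen for the example.

```python
# Illustrative sketch (assumed names and ranges, not cAIne's real code):
# map a scanned obstacle distance to a haptic vibration amplitude in [0, 1].

def distance_to_amplitude(distance_m, min_dist=0.3, max_dist=4.0):
    """Closer obstacles vibrate harder; beyond max_dist the controller is silent."""
    if distance_m >= max_dist:
        return 0.0
    if distance_m <= min_dist:
        return 1.0
    # Linear falloff between min_dist (full strength) and max_dist (silence).
    return (max_dist - distance_m) / (max_dist - min_dist)
```

A real implementation would feed this amplitude into the headset's controller haptics API each frame, and might use a nonlinear curve so that near obstacles feel more urgent.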
Challenges we ran into
Figuring out the right AI models for our project and getting them to mesh the way we wanted was difficult. This was made all the harder because not all of the models we planned to use are available yet, so we had to substitute alternatives. Time constraints forced us to limit our scope and cut features we would have loved to include. Another challenge was finding audio cues distinct enough to be recognizable yet not too sharp for the cAIne's feedback.
Accomplishments that we're proud of
Wrangling together various image and spatial recognition AIs was an exhilarating challenge. We've managed to run a local YOLO model directly on the Quest 3, which gives very fast responses. With a small team, getting the cAIne to register these features and objects was a rewarding endeavor.
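To show what the detection side of this pipeline might look like downstream of the YOLO model, here is a hedged sketch: it assumes detections arrive as simple records with a class label, a confidence score, and an estimated distance, and picks the nearest confident obstacle to feed into the feedback system. The field names and threshold are illustrative assumptions.

```python
# Illustrative sketch, not cAIne's real code: YOLO-style detections are
# assumed to be dicts with "label", "conf" (confidence), and "dist_m"
# (estimated distance in meters). We drop low-confidence hits and report
# the nearest remaining object.

def nearest_obstacle(detections, min_conf=0.5):
    """Return the closest detection at or above min_conf, or None."""
    confident = [d for d in detections if d["conf"] >= min_conf]
    if not confident:
        return None
    return min(confident, key=lambda d: d["dist_m"])
```

On-device, the equivalent logic would run each frame over the model's raw output before any haptic or audio cue is generated.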
What's next for cAIne
As this software's goal is assisting low-vision and blind individuals, our next milestone is implementing image-to-speech for real-time navigation. We will also fine-tune the way the cAIne represents distance and improve our audio design. Another step will be adding more pre-set objects our AI can identify, growing from 80 immediately recognizable object classes to possibly hundreds. A major future goal is more wearable technology, such as glove controllers with a richer array of feedback, improving both mobility and ease of access and eliminating the current trade-off between controller and hand-tracking modes.