Inspiration
Our teammate Victor lives in Ciudad Juarez, Mexico, and has always wondered why eye exams that are free in Juarez cost money here in the States. His idea was to create an app that would have you perform the same tasks as the free exam and then give you a rough estimate of your eye strength. As a team, we thought this would be a good chance to experiment with augmented reality in order to simulate the viewing distance required in the eye exams.
What it does
Currently, VisionARies functions as two separate apps, VisionARy Display and VisionARy Console. VisionARy Display is the visual aspect of VisionARies - it projects all of the target letters using augmented reality at a set distance away from the user. VisionARy Console is then used to record the user's answers. This is done through speech recognition, checking whether the user has said specific phrases while taking into account the various possibilities that may be correct.
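As a rough sketch of how the Display side can simulate that viewing distance, an ARCore anchor can be created a fixed distance straight ahead of the camera and the letter rendered at that anchor. The constant, function name, and the assumption that a configured ARCore Session and current Frame already exist are ours for illustration, not our exact implementation:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Pose
import com.google.ar.core.Session

// Standard eye charts are read from about 6 m (20 ft); this is an assumed
// value for the simulated viewing distance.
const val LETTER_DISTANCE_METERS = 6.0f

// Hypothetical helper: anchor a target letter a fixed distance straight
// ahead of the device camera.
fun placeLetterAnchor(session: Session, frame: Frame): Anchor {
    val cameraPose = frame.camera.pose
    // Camera space looks down -Z, so translate the pose forward by the
    // simulated viewing distance before creating the anchor.
    val letterPose = cameraPose.compose(
        Pose.makeTranslation(0f, 0f, -LETTER_DISTANCE_METERS)
    )
    return session.createAnchor(letterPose)
}
```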
How we built it
VisionARy Display was built in Android Studio using Google's augmented reality SDK, ARCore. VisionARy Console was also built in Android Studio, using Google Cloud's Speech-to-Text API.
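To give an idea of how the Console side can check answers, the sketch below transcribes a short audio clip with the Speech-to-Text client and compares the top transcript against the expected letter, allowing common alternate spellings. The variant map, function names, and audio settings are assumptions for illustration, not our exact code:

```kotlin
import com.google.cloud.speech.v1.RecognitionAudio
import com.google.cloud.speech.v1.RecognitionConfig
import com.google.cloud.speech.v1.SpeechClient
import com.google.protobuf.ByteString

// Spellings the API often returns when a user says a single letter
// (hypothetical examples, not an exhaustive list).
val acceptedVariants = mapOf(
    "E" to setOf("e", "ee", "letter e"),
    "C" to setOf("c", "see", "sea", "letter c"),
    "B" to setOf("b", "be", "bee", "letter b")
)

// Check whether a transcript counts as a correct reading of the expected letter.
fun matchesExpectedLetter(transcript: String, expected: String): Boolean {
    val spoken = transcript.trim().lowercase()
    return spoken == expected.lowercase() ||
        acceptedVariants[expected].orEmpty().contains(spoken)
}

// Send raw 16 kHz LINEAR16 audio to Speech-to-Text and return the top transcript.
fun transcribe(audioBytes: ByteArray): String {
    SpeechClient.create().use { client ->
        val config = RecognitionConfig.newBuilder()
            .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
            .setSampleRateHertz(16000)
            .setLanguageCode("en-US")
            .build()
        val audio = RecognitionAudio.newBuilder()
            .setContent(ByteString.copyFrom(audioBytes))
            .build()
        return client.recognize(config, audio).resultsList
            .firstOrNull()?.alternativesList?.firstOrNull()?.transcript ?: ""
    }
}
```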
Challenges we ran into
About halfway through the hacking time, we had to scrap most of our work after realizing that Unity was not going to work out, which is ultimately what led us to Android Studio. Since we were running out of time, we also had to switch to the two-app implementation of VisionARies because we couldn't get the two services to cooperate within a single app.
Accomplishments that we're proud of
We learned two different APIs over the course of 36 hours, and managed to put together the two apps despite only starting to work on them in the second half of the hackathon.
What we learned
Aside from the skills we gained learning ARCore and Google Cloud, we also realized that if something isn't working, we shouldn't wait too long before trying a different implementation.
What's next for VisionARies
Our original goal was a single app, so that's what we aim to achieve next: integrating the Speech-to-Text API directly into VisionARy Display and having it show the results in AR as well.