Inspiration
Our inspiration stems from the difficulty and lack of precision that many online vision tests suffer from. Requirements such as a laptop and measuring the viewing distance by hand make for a cumbersome process. Augmented reality and voice recognition enable a streamlined exam that can be taken anywhere with an iOS app.
What it does
The app screens for signs of colorblindness, nearsightedness, and farsightedness using Ishihara color tests and Snellen chart exams. The Snellen chart is simulated in augmented reality by placing a row of letters six meters from the camera. Users interact with the exam by speaking their answers via voice recognition rather than manually entering each letter in the row.
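A minimal sketch of how a Snellen row could be sized and placed in Unity, under the standard convention that a 20/20 optotype subtends 5 arcminutes at the test distance. The class and field names here (`SnellenRow`, `letterPrefab`) are illustrative assumptions, not the project's actual code.

```csharp
using UnityEngine;

// Hypothetical sketch: sizing and placing one Snellen row in AR.
public class SnellenRow : MonoBehaviour
{
    const float TestDistance = 6f;      // meters from the camera
    const float ArcMinutesAt2020 = 5f;  // a 20/20 letter subtends 5 arcmin

    public GameObject letterPrefab;     // e.g. a TextMesh-based letter

    // Height in meters of a letter on a given Snellen line (40 => 20/40).
    static float LetterHeight(float snellenDenominator)
    {
        float arcMin = ArcMinutesAt2020 * (snellenDenominator / 20f);
        float radians = arcMin / 60f * Mathf.Deg2Rad;
        return TestDistance * Mathf.Tan(radians); // ~8.7 mm for 20/20
    }

    public void Place(Camera arCamera, string letters, float snellenDenominator)
    {
        float height = LetterHeight(snellenDenominator);
        Vector3 center = arCamera.transform.position
                       + arCamera.transform.forward * TestDistance;
        for (int i = 0; i < letters.Length; i++)
        {
            // Spread letters horizontally, 1.5 letter-heights apart.
            Vector3 offset = arCamera.transform.right
                           * height * 1.5f * (i - (letters.Length - 1) / 2f);
            var go = Instantiate(letterPrefab, center + offset,
                                 Quaternion.LookRotation(arCamera.transform.forward));
            go.transform.localScale = Vector3.one * height;
        }
    }
}
```

Scaling each line from the visual-angle formula, rather than hard-coding sizes, is what lets an AR chart stay faithful to a physical one regardless of rendering resolution.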
How we built it
We built the augmented reality and voice recognition features by importing the ARKit and KK Voice Recognition SDKs into Unity 3D. These SDKs exposed APIs for integrating their features into the exam logic. We used Unity's UI API to create the interface and linked the scenes into a project built for iOS. This build was then exported to Xcode, where we configured the project and deployed it to an iPhone.
Challenges we ran into
Errors resulting from complex SDK integrations made the beginning of the project difficult to debug. After that, a lot of time went into controlling the scale and orientation of augmented reality objects in the scene to create a lifelike environment. The voice recognition software presented its own difficulties: its API was driven by a web of callback functions, which made the control flow hard to follow. The main difficulty in the later phases was the inability to test features in the Unity editor. The AR and voice recognition APIs depended on iOS, which meant every code change had to be verified through a lengthy build-and-install cycle on a device.
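One common way to tame callback-heavy APIs in Unity is to enqueue results as they arrive and drain the queue from `Update()` on the main thread. The sketch below assumes a hypothetical `OnResult` event; the KK SDK's actual entry points differ, but the shape of the problem is the same.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Illustrative only: recognition results arrive in callbacks on their own
// schedule, while exam logic runs in Update(). A locked queue decouples them.
public class VoiceAnswerRouter : MonoBehaviour
{
    readonly Queue<string> pending = new Queue<string>();
    readonly object gate = new object();

    // Hypothetical registration; the real SDK's API differs.
    void OnEnable()  { /* sdk.OnResult += HandleResult; */ }
    void OnDisable() { /* sdk.OnResult -= HandleResult; */ }

    // May fire off the main thread, so only enqueue here.
    void HandleResult(string transcript)
    {
        lock (gate) pending.Enqueue(transcript.Trim().ToUpperInvariant());
    }

    void Update()
    {
        string answer = null;
        lock (gate)
            if (pending.Count > 0) answer = pending.Dequeue();
        if (answer != null)
            Debug.Log($"Submitting answer: {answer}"); // hand off to exam logic
    }
}
```

Funneling every callback through one queue also makes the logic testable in isolation, since the exam code only ever sees strings arriving on the main thread.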
Accomplishments that we're proud of
With only one team member having prior Unity experience, we are proud of constructing such a complex UI system with the Unity APIs. This was also the team's first exposure to voice recognition software. Above all, we are proud to have turned what we learned into a cohesive product with real-world applications.
What we learned
We learned how to construct UI elements and link multiple scenes together in Unity. We also learned a lot about C# through manipulating voice recognition data and working with 3D assets, all of which was new to the team.
What's next for AR Visual Acuity Exam
Given more time, the app would be built out to send vision exam results to doctors for approval. We could also improve upon the scaling and representation of the Snellen chart.