[Screenshot of the app]
Knowing that visually impaired individuals face many challenges in their day-to-day lives, we wanted to use current technologies to improve their lives and increase their independence in navigating the world.
What it does
Our app takes a photo as input, either captured with the camera or imported from the camera roll. The trained CoreML model then classifies the image, and the app speaks the displayed prediction aloud.
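The photo-input step can be sketched with UIKit's image picker, which covers both the camera and the camera roll. This is a minimal illustration, not our exact code; everything beyond the UIKit APIs is an assumed name.

```swift
import UIKit

// Sketch of the photo-input step. `ViewController` and the hand-off
// comment are illustrative; only the UIImagePickerController API is real.
class ViewController: UIViewController,
                      UIImagePickerControllerDelegate,
                      UINavigationControllerDelegate {

    // Present the picker for either .camera or .photoLibrary.
    func pickImage(from source: UIImagePickerController.SourceType) {
        guard UIImagePickerController.isSourceTypeAvailable(source) else { return }
        let picker = UIImagePickerController()
        picker.sourceType = source
        picker.delegate = self
        present(picker, animated: true)
    }

    // Receive the chosen photo and hand it to the prediction step.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        // classify(image) — hand off to the CoreML model
    }
}
```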
How we built it
We used Xcode to build the camera functionality and UI of the application, then integrated the Inception v3 CoreML model into the project. We then used the AVFoundation framework to convert the text prediction into speech.
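One common way to run a bundled CoreML model like Inception v3 is through the Vision framework. The sketch below assumes the `Inceptionv3` class that Xcode generates from the .mlmodel file; the helper name and completion handler are illustrative, not our exact code.

```swift
import Vision
import CoreML

// Classify a CGImage with the bundled Inception v3 model and return
// the top prediction label, which the app then speaks aloud.
func classify(_ cgImage: CGImage, completion: @escaping (String) -> Void) {
    guard let model = try? VNCoreMLModel(
        for: Inceptionv3(configuration: MLModelConfiguration()).model) else { return }
    let request = VNCoreMLRequest(model: model) { request, _ in
        if let top = (request.results as? [VNClassificationObservation])?.first {
            completion(top.identifier)   // e.g. the model's top class label
        }
    }
    try? VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```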
Challenges we ran into
Integrating the CoreML model repeatedly produced errors in Xcode, and we had to debug the project to find the cause. Sizing the logo was also a challenge: we had to resize it to fit the required dimensions.
Accomplishments that we're proud of
Deciding on a topic that was both interesting and feasible, and completing the project within a short timeframe!
What we learned
We learned which technologies are needed to build an image-recognition mobile app, how to integrate CoreML models, and how easily text can be converted to speech.
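The text-to-speech step mentioned above takes only a few lines with AVFoundation's `AVSpeechSynthesizer`. This is a hedged sketch; the default rate and language here are assumptions, not necessarily the settings we shipped.

```swift
import AVFoundation

// Speak a prediction string aloud. The synthesizer is kept as a
// long-lived object so speech is not cut off when it goes out of scope.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String, language: String = "en-US") {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: language)
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}
```

Because the voice is chosen by a BCP 47 language code, this same call is a natural hook for the multi-language support we would like to add.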
What's next for iSee
We would like to enhance iSee by incorporating video input and providing real-time feedback so that users have a better experience. We would also like to add the ability to speak in different languages. We could even expand this application to serve non-visually impaired individuals by turning it into a translation app.
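A possible starting point for the real-time video input described above is to stream camera frames with `AVCaptureSession` and feed each frame to the existing classifier. All names beyond the AVFoundation APIs are illustrative assumptions.

```swift
import AVFoundation

// Stream camera frames; each frame's pixel buffer can be passed to the
// CoreML model. Speech output would need throttling so the app does not
// speak on every frame.
final class FrameStreamer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Pass pixelBuffer to the classifier here.
        _ = pixelBuffer
    }
}
```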