Inspiration

Our team believes that the development of technology should benefit all communities. When we came across an interesting technology, IBM Watson, we wanted to create a solution that might benefit the community as a whole.

What it does

A-Eye is meant to assist people with visual impairments by lessening the struggles that members of the visually impaired community encounter on a daily basis. A-Eye achieves this goal by providing a voice user interface that allows visually impaired individuals to easily locate objects in their environment.

How we built it

We built our user interface and front end using Android Studio. The backend, built with Node.js and the Express framework, stores the business logic and handles server requests; the final version of our server is hosted on AWS Cloud9 so it is publicly accessible. We used IBM Watson's Visual Recognition API to implement object recognition.
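To give a feel for how the pieces fit together, here is a minimal sketch of such a backend (written in TypeScript for illustration, though ours is plain Node.js). It assumes the ibm-watson Node SDK's VisualRecognitionV3 client; the /classify route, the image field name, and the WATSON_APIKEY variable are illustrative assumptions, not our exact code.

```typescript
// Hypothetical sketch of the A-Eye backend: an Express endpoint that accepts
// an image upload and forwards it to IBM Watson Visual Recognition.
// Route path, field names, and credential handling are assumptions.
import express from "express";
import fs from "fs";
import multer from "multer";
import VisualRecognitionV3 from "ibm-watson/visual-recognition/v3";
import { IamAuthenticator } from "ibm-watson/auth";

const app = express();
const upload = multer({ dest: "uploads/" }); // temporary storage for uploads

const visualRecognition = new VisualRecognitionV3({
  version: "2018-03-19",
  authenticator: new IamAuthenticator({
    apikey: process.env.WATSON_APIKEY ?? "",
  }),
});

// POST /classify: the Android client sends a camera frame; we return the
// labels Watson detects so the app can read them aloud to the user.
app.post("/classify", upload.single("image"), async (req, res) => {
  try {
    const { result } = await visualRecognition.classify({
      imagesFile: fs.createReadStream(req.file!.path),
    });
    // Flatten Watson's nested response into a simple label/score list.
    const classes = result.images?.[0]?.classifiers?.[0]?.classes ?? [];
    res.json(classes.map((c) => ({ label: c.class, score: c.score })));
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000, () => console.log("A-Eye backend listening on :3000"));
```

The Android app only needs to POST an image and speak the returned labels, which keeps all Watson credentials and logic on the server.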

Challenges we ran into

During development, we wanted to use Dialogflow to analyze voice input and generate more robust responses. However, we had difficulties connecting to Dialogflow and could not use it.
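For reference, the integration we were attempting would have looked roughly like the sketch below, assuming "dialog flow" refers to Google's Dialogflow and its Node.js client library; the project ID, session ID, and function name are placeholders, and we never got this working.

```typescript
// Sketch of the intended Dialogflow integration (never completed).
// Project and session identifiers are placeholders.
import { SessionsClient } from "@google-cloud/dialogflow";

const sessionClient = new SessionsClient();

// Send the transcribed voice command to a Dialogflow agent and return its
// fulfillment text so the app can speak a more natural response.
async function analyzeCommand(
  projectId: string,
  sessionId: string,
  text: string
): Promise<string> {
  const sessionPath = sessionClient.projectAgentSessionPath(projectId, sessionId);
  const [response] = await sessionClient.detectIntent({
    session: sessionPath,
    queryInput: { text: { text, languageCode: "en-US" } },
  });
  return response.queryResult?.fulfillmentText ?? "";
}
```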

Accomplishments that we're proud of

- Learned Android mobile application frameworks and features and created an easy-to-use mobile application
- Managed multiple subsystems
- Used a structured software stack to build working front-end and back-end endpoints
- Analyzed customer data using an unsupervised learning algorithm and applied the insights in a real application (see the sketch after this list)
- Designed a simple UI that is intuitive and easy to interact with
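The write-up does not name the unsupervised algorithm we used, so purely as an illustration, here is a minimal k-means clustering sketch of the kind of analysis described; the data shape and cluster count are assumptions, not our actual pipeline.

```typescript
// Illustrative only: a minimal k-means clustering sketch, standing in for
// whatever unsupervised analysis was actually applied to the customer data.
type Point = number[];

function kMeans(points: Point[], k: number, iters = 50): Point[] {
  // Naive seeding: take the first k points as initial centroids.
  let centroids = points.slice(0, k).map((p) => [...p]);
  for (let iter = 0; iter < iters; iter++) {
    const clusters: Point[][] = Array.from({ length: k }, () => []);
    // Assignment step: each point joins its nearest centroid.
    for (const p of points) {
      let best = 0;
      let bestDist = Infinity;
      centroids.forEach((c, i) => {
        const d = c.reduce((s, v, j) => s + (v - p[j]) ** 2, 0);
        if (d < bestDist) { bestDist = d; best = i; }
      });
      clusters[best].push(p);
    }
    // Update step: move each centroid to the mean of its cluster.
    centroids = clusters.map((cl, i) =>
      cl.length === 0
        ? centroids[i]
        : cl[0].map((_, j) => cl.reduce((s, p) => s + p[j], 0) / cl.length)
    );
  }
  return centroids;
}
```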

What we learned

The features provided by the various APIs we used and how to implement them in a real application

What's next for A-Eye

- Learn more in depth about different machine learning algorithms, evaluation techniques, and analysis/visualization methods
- Research more accessible APIs to incorporate into future events
- Build a more scalable mobile application available on multiple platforms (Android, iOS, etc.)
