There are currently nearly 300 million people in the world with visual impairments, and almost 40 million of them are blind. We wanted to help these people experience the world in a way they never could before. With the added difficulty of quarantine separating us from our loved ones, we wanted to make sure that blind people have access to the right resources. With our project, EyeSee, we hope to create an application that allows people with vision impairments to do just that.

What it does

EyeSee is a website made for blind people. It has two major features: interacting with a speaking chatbot, and identifying and interacting with objects in the world around the user. We made sure that every element of our website is fully accessible to people with visual disabilities.
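As a rough sketch of how a speaking chatbot can be wired up in the browser, the reply could be voiced with the Web Speech API's speech synthesis. The `replyTo` intent matcher and its example phrases below are purely illustrative placeholders, not EyeSee's actual chatbot logic:

```javascript
// Map a few example user phrases to spoken replies.
// These intents are placeholders, not EyeSee's real chatbot logic.
function replyTo(utterance) {
  const text = utterance.toLowerCase();
  if (text.includes('hello')) return 'Hello! How can I help you today?';
  if (text.includes('time')) return 'The time is ' + new Date().toLocaleTimeString() + '.';
  return "Sorry, I didn't understand that.";
}

// In a browser, speak the reply aloud with the Web Speech API.
// (Guarded so the sketch also loads outside a browser.)
function speak(reply) {
  if (typeof window !== 'undefined' && 'speechSynthesis' in window) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
  }
}

speak(replyTo('hello there'));
```

A real chatbot would pair this with speech recognition for input, so the whole exchange stays hands- and eyes-free.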

How we built it

We built this website using HTML, JavaScript, and CSS, along with libraries such as jQuery and TensorFlow.js. For object identification, we used a pretrained model called Coco-SSD so the website can identify multiple objects shown in the user's camera. For the chatbot, we added voice recognition so users can interact with it by speaking. We also created keyboard shortcuts for people with visual impairments, giving them access to features they wouldn't be able to use on other platforms.
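A minimal sketch of the Coco-SSD flow: load the pretrained model, run it on the camera's video element, and turn the predictions into a sentence that can be read aloud. The `describeDetections` helper and its 0.6 confidence threshold are our illustration, not the project's actual code:

```javascript
// Turn Coco-SSD predictions into a short spoken sentence.
// The 0.6 confidence threshold is an illustrative choice.
function describeDetections(predictions, threshold = 0.6) {
  const names = predictions
    .filter(p => p.score >= threshold)
    .map(p => p.class);
  if (names.length === 0) return 'No objects detected.';
  return 'I can see: ' + names.join(', ') + '.';
}

// Browser-only wiring: load the pretrained model and scan the camera feed.
// cocoSsd is the global exported by the @tensorflow-models/coco-ssd script tag.
async function scanCamera(videoElement) {
  const model = await cocoSsd.load();
  // detect() resolves to an array like [{ class, score, bbox }, ...]
  const predictions = await model.detect(videoElement);
  return describeDetections(predictions);
}
```

The sentence returned by `scanCamera` can then be handed to a text-to-speech call so the user hears what is in front of the camera.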

Challenges we ran into

We ran into many challenges while building the object identification system and the chatbot. We had to fix several issues with the UI not working properly and with integrating the pretrained model into the object identification system. We also debugged problems with the chatbot, including voice recognition that did not work reliably and the accessibility shortcuts for people with visual disabilities.

Accomplishments that we're proud of

One accomplishment we are very proud of is the object identification system, because of the number of challenges we had to overcome to build it.

What we learned

We learned how to communicate better with each other, and we also picked up many technical skills: we learned more about machine learning, jQuery, and using external media devices such as the camera and microphone in our program.

What's next for EyeSee

We plan to continue working on EyeSee, improving functionality for the visually impaired and fixing any remaining bugs to improve our users' experience with the website.
