Inspiration

The world would be so much darker a place without sight! For the blind, travelling independently is a major challenge: the environment is full of dangers, and the walking stick alone cannot fully perceive the surroundings. Our group wanted to share our privilege of worry-free mobility with them, so we developed a novel system: Eyeronic.

What it does

Eyeronic is essentially a GPS navigation system for the blind with perceptual capabilities. We combined freely available software and APIs, such as built-in laptop cameras and Google's navigation services, into a system that offers real-time tracking of world coordinates, environmental sensing through image-processing and classification algorithms, and vocal feedback, all in service of obstacle avoidance. Eyeronic 1.0 is built primarily in Python and currently features a GPS navigation system that detects the user's current position and end goal, then plots waypoints that get the user to where they want to be. All of this is driven by voice commands through speech-recognition APIs in Python.

For the scope of this hackathon (with only 3 days of grinding), our team decided to narrow our focus to road crossing. Crossing the road has always been a dangerous task for a blind person because of the heightened risk of stepping out during a red light. We used the front cameras of our laptops to detect the pedestrian no-go sign (the red hand) at a traffic junction. If the red hand is lit, a warning message tells the user to stop in their tracks until the all-clear signal is given to cross the road.
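The waypoint-following idea above can be sketched in a few lines of plain Python. This is a simplified illustration, not code from Eyeronic itself: `haversine_m` and `reached_waypoint` are hypothetical helper names, and the 10-metre arrival radius is an assumed value.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reached_waypoint(pos, waypoint, radius_m=10.0):
    """Treat a waypoint as reached once the user is within radius_m metres of it."""
    return haversine_m(*pos, *waypoint) <= radius_m
```

A navigation loop would poll the GPS position at intervals, announce the next instruction, and advance to the following waypoint once `reached_waypoint` returns True.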

How I built it

We divided our milestones for this hackathon into two categories: database training and navigation. To detect the red-hand stop sign, we needed a system that recognizes not only colors but also features that resemble a palm. We approached this with Haar cascade classifiers trained on a set of positive and negative images: positive images contain the red hand, and negative images do not. Since we were working with a substantial amount of data (approx. 1,000–10,000 training images), we needed a way to remove unnecessary clutter from each frame and home in only on the regions where the red hand is present. We developed colorDetection.py, a Python-based library that detects primary colors and returns a bounding box with the red hand as its centerpiece, which greatly simplified collecting training images.

We spent a total of 6 hours collecting traffic data from the internet and training our machine-learning classifier. However, we realized the data we obtained was insufficient for a reliable hand detector. Given the time constraint and the limited set of images available online, we decided to use pre-trained data sets to aid our classification. The end result turned out well: we managed to detect sample hand signs that we showed to the camera. We then coupled the detector with a filter that masks out all but a spectrum of red colors, adding further accuracy and robustness.

We also developed navigate.py, a navigation system that serves the user through Google's Maps API. Besides making use of the waypoints it provides, we put in place a system that guides users towards those waypoints through text-to-speech conversion. Our script tracks the user's coordinates at regular intervals and generates directional headings to ensure that the user is travelling in the right direction.
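The core idea behind colorDetection.py — flag pixels in a red range, then return the smallest box enclosing them — can be sketched as follows. This is a pure-Python simplification for illustration; the actual library presumably works on camera frames, and the function names and RGB thresholds here are assumptions, not values from the project.

```python
def red_mask(pixels, r_min=150, g_max=90, b_max=90):
    """Boolean mask marking pixels that fall in a crude 'red' range.

    pixels is a 2-D list of (r, g, b) tuples. The thresholds are
    illustrative, not the ones used in colorDetection.py.
    """
    return [[(r >= r_min and g <= g_max and b <= b_max) for (r, g, b) in row]
            for row in pixels]

def bounding_box(mask):
    """Smallest (x_min, y_min, x_max, y_max) box enclosing all masked pixels,
    or None if nothing was flagged as red."""
    coords = [(x, y) for y, row in enumerate(mask)
                     for x, hit in enumerate(row) if hit]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return (min(xs), min(ys), max(xs), max(ys))
```

Cropping each training frame to this box is what strips away the background clutter before the cascade classifier ever sees the image.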
We realized that the problem with Google Maps is that its instructions are not designed for easy comprehension by the blind. We solved this by introducing our own version of the spoken directions, phrased to be easy to understand.
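The heading checks described above amount to computing the compass bearing to the next waypoint and comparing it with the user's current heading. A minimal sketch, assuming hypothetical helper names and an arbitrary 20° tolerance (neither taken from navigate.py):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees (0 = north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def turn_instruction(heading_deg, target_bearing_deg, tolerance=20.0):
    """Plain-language correction comparing the user's heading to the waypoint bearing."""
    diff = (target_bearing_deg - heading_deg + 180) % 360 - 180
    if abs(diff) <= tolerance:
        return "keep walking straight"
    return "turn right" if diff > 0 else "turn left"
```

The short phrase returned by `turn_instruction` is the kind of simplified speech that replaces Google's verbose step text before being fed to the text-to-speech engine.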

Challenges I ran into

Random ridiculous suggestions from teammates who were exhausted and overloaded with sugar
Reading documentation
Spotty internet
Missed the Soylent delivery person :(

Accomplishments that I'm proud of

This is the first hackathon that we actually finished :/

What I learned

Never be late for Soylent
We learnt that we needed to learn more about learning
The joy of teamwork
The food here is great

What's next for Eyeronic

Move our platform to a mobile app
Expand our database for generic sign recognition
