For a blind person, many everyday experiences are difficult precisely because of the condition. But what if there were a way to fully experience the world as if blindness never existed? With GuideEYE, now you can.

What it does

The system helps blind users by describing their environment from real-time images captured with a mini camera. Our image-captioning system uses machine learning to produce captions that accurately describe the images, and it reads any text it finds automatically. The answers are delivered by a voice assistant on the user's mobile phone, which talks over the internet to a portable device built on the Raspberry Pi platform.
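The two description paths (caption the scene vs. read the text in front of the user) can be sketched with a simple routing rule. The function name and the spoken phrasing below are illustrative assumptions, not taken from the project:

```python
# Sketch of how the system might choose between its two outputs:
# recognised text (OCR) when any was found, otherwise the ML caption.
# The routing rule and phrasing are assumptions for illustration.

def choose_spoken_output(ocr_text, caption):
    """Prefer recognised text when the OCR engine found any; otherwise
    fall back to the machine-learning caption of the scene."""
    if ocr_text and ocr_text.strip():
        return "The text says: " + ocr_text.strip()
    return "I can see: " + caption
```

In practice the string returned here would be handed to the phone's voice assistant to be spoken aloud.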

How we built it

We use Python as the base software on a Raspberry Pi 3 to capture real-time images through a mini camera, and we run this software on hardware we built ourselves. The Raspberry Pi sends the data to our DigitalOcean server. We use Dialogflow to implement the interaction between the phone and the server; this was quite hard to achieve because Dialogflow requires an HTTPS connection to the server. Finally, the server sends the image to the Clarifai API for captioning, or to a local OCR engine to read text, and the answer is delivered to the phone.

(Architecture diagram labels: real-time capture Pi, integrated temperature sensor, temperature monitor, personal computer, Google Cloud, mobile phone.)
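The Pi-side step (capture a frame, ship it to the server) can be sketched as below. The endpoint URL, field names, and device ID are hypothetical; the base64-in-JSON packaging is one common way to send an image over the HTTPS connection the setup requires:

```python
# Sketch of the Pi-side upload packaging. The "device" field name and
# default ID are hypothetical -- the real server contract is not in the
# write-up. The image is base64-encoded so the JPEG bytes can travel
# inside a JSON body over HTTPS.
import base64
import json

def build_upload_payload(jpeg_bytes, device_id="guideeye-pi-01"):
    """Package a captured JPEG frame as the JSON body POSTed to the server."""
    return json.dumps({
        "device": device_id,
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
    })

# On the actual device the frame would come from the mini camera
# (e.g. the picamera library's capture() into an in-memory stream),
# and the payload would be POSTed to the DigitalOcean server with an
# HTTP client such as requests.
```

This keeps the Pi's job minimal: all captioning and OCR happen server-side.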

Challenges we ran into

Winner: Best Use of Machine Learning (Accenture)
Winner: Best IoT Hack (Nordic Semiconductor)

Accomplishments that we're proud of

Implemented a working remote camera with the Raspberry Pi 3. Deployed an HTTPS-certified server on DigitalOcean. Built the voice interaction with Dialogflow.
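The Dialogflow side of the interaction can be sketched as a webhook response builder. The response shape follows Dialogflow's v2 fulfillment format (a JSON body with a `fulfillmentText` field); the surrounding route shown in comments is a hypothetical Flask sketch, not the project's actual server code:

```python
# Minimal sketch of the fulfillment response the server returns to
# Dialogflow so the phone's voice assistant speaks the answer aloud.
# Dialogflow's v2 webhook format expects a JSON body with
# "fulfillmentText"; everything else here is illustrative.
import json

def make_fulfillment(caption):
    """Build the JSON body returned to Dialogflow for a given caption."""
    return json.dumps({"fulfillmentText": caption})

# With Flask the route would look roughly like:
#   @app.route("/webhook", methods=["POST"])
#   def webhook():
#       caption = describe_latest_frame()  # Clarifai caption or OCR text
#       return make_fulfillment(caption), 200, {"Content-Type": "application/json"}
# Dialogflow only accepts HTTPS webhooks, hence the certified server.
```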

What's next for GuideEYE

An automatic-reflex mode in which the device can act on behalf of the user: for example, when it senses a disturbance in the environment, the device notifies the user, who can then direct it to avoid the problem. We also plan to add more sensing elements, capturing richer information to provide a more accurate guide for a wider range of disabilities.
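One way the planned automatic reflex could work is to watch for sudden jumps in a sensor stream and flag them for the user. The function, the threshold, and the notion of "disturbance" below are all illustrative assumptions about a feature that is still on the roadmap:

```python
# Hypothetical sketch of the planned "automatic reflex": compare
# successive sensor readings and flag any jump larger than a threshold
# as a disturbance the device should warn the user about. The threshold
# and reading format are assumptions, not from the project.

def detect_disturbance(readings, threshold=5.0):
    """Return the indices where a reading changes by more than
    `threshold` relative to the previous reading."""
    alerts = []
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) > threshold:
            alerts.append(i)
    return alerts
```

On the device, each flagged index would trigger a spoken notification, after which the user decides how to steer around the problem.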
