Aggie Eyes is an assistive technology for visual impairments, unlike anything made before. It integrates existing tools into an all-encompassing device that aims to give ‘eyes’ to the visually impaired. People with visual impairments often develop an enhanced sense of hearing that helps them navigate everyday life. For this reason, our aim was to develop a device that provides auditory input without interfering with hearing. Aggie Eyes utilizes a bone-conduction headset equipped with the following: micro cameras facing in four directions (forward, backward, and to each side), an integrated object-detection platform, mapping software, and vibration patches. Here’s how it works: the headset sits over the user’s ears and the headband wraps around the back of the head; an adjustable elastic band allows a customizable fit. The cameras continuously capture images of the surroundings, and these are processed simultaneously. The vision software carries out object detection and analysis and instantly relays the results to a text-to-speech engine, which communicates directions to the user. The built-in mapping platform can be used to search for directions, and the user can also give spoken instructions to the device to get where they need to be. If an object or person suddenly appears in the user’s path, the system signals the vibration patches placed on the sides and back of the headset, which vibrate to warn the user of imminent danger. The device has a built-in battery that allows mobile use.
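The per-frame decision logic described above can be sketched in code. This is a minimal illustrative simulation, not the actual device firmware: the `Detection` class, the `route_detections` function, and the one-metre danger threshold are all assumptions introduced here to show how detections might be split between the vibration patches and the text-to-speech engine.

```python
from dataclasses import dataclass

# Assumed threshold (in metres) below which an obstacle counts as
# imminent danger and triggers a vibration patch instead of speech.
DANGER_DISTANCE_M = 1.0

@dataclass
class Detection:
    """One detected object from a camera frame (illustrative)."""
    label: str          # e.g. "person", "door"
    direction: str      # "front", "back", "left", or "right"
    distance_m: float   # estimated distance in metres

def route_detections(detections):
    """Split a frame's detections into vibration alerts and spoken lines.

    Nearby obstacles fire the patch on the matching side of the headset;
    everything else is phrased as a sentence for the text-to-speech engine.
    """
    vibrations, announcements = [], []
    for d in detections:
        if d.distance_m < DANGER_DISTANCE_M:
            vibrations.append(d.direction)
        else:
            announcements.append(
                f"{d.label} {d.distance_m:.0f} metres to the {d.direction}"
            )
    return vibrations, announcements

if __name__ == "__main__":
    frame = [
        Detection("person", "front", 0.6),
        Detection("door", "left", 3.0),
    ]
    vib, speech = route_detections(frame)
    print(vib)     # directions whose patches should vibrate
    print(speech)  # lines handed to the text-to-speech engine
```

In the real device this routing would run once per processed frame, with detections arriving from the object-detection platform rather than being constructed by hand.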
We have built a 3D-printed prototype of the device in PLA. The final product will be made using a variety of materials and processes.
The main challenge was that limited time and resources prevented us from building a working model of the device.
The biggest achievement of this project is that we provide the user with an input complementary to the senses they already have, without taking any existing one away.
Since this is a highly cross-disciplinary project, we had to gather and combine information from varied disciplines such as computer science, imaging, and auditory science.
We hope to continue working on this device and with time develop an improved design. The eventual aim is to build a working model of the device and test it in real-life situations.