Inspiration
This project was inspired in part by other navigation aids. Devices such as the UltraCane and Optacon help people with sight impairments do more, but most devices are based on technologies such as OCR, ultrasound, radar and proximity sensing. No device yet lets a blind user 'see' their surroundings directly, and that is what I hoped to create.
What it does
The NaviGlove is a portable, wearable device built from cheap hardware and existing technology. The basic idea is to use computer vision techniques to analyse the feed from a video camera, extract useful information such as edges and textures from the visual field, and feed that information back to the user in a tactile format. Through the mechanism of 'brain plasticity', the mind may over time adapt to accept information through this channel naturally, as though it were a sixth sense.
How I built it
The camera is a PS3 Eye running at 90 fps. Frames are processed using the OpenCV library to extract important features such as edges and textures. The output signals are computed and then offloaded to the hardware controller (the early prototype used a PIC; later prototypes used an audio port and an op-amp circuit driving analogue actuators). The whole device is mounted on the back of a thin glove, onto which I sewed the actuators.
Challenges I ran into
As the application is realtime, the control loop must complete within 1/90 of a second (about 11 ms). I had trouble getting the ultra-portable prototype to finish the loop in that time, but I think it would be possible with more effort. The prototype also broke spectacularly during the testing phase, four days before the deadline: I gave test subjects antibacterial wipes to clean their hands before putting on the glove, and eventually the solvent dissolved some internal glue in an actuator. Luckily I got a replacement delivered at the last minute and managed to complete some tests.
Accomplishments that I'm proud of
Computer vision is not a new field. However, through a mathematical model, I came up with a completely new way of expressing images through haptics - as far as I know, it has never been done this way before. I'm extremely proud of that.
What I learned
Getting your control loop to complete quickly EVERY time is very difficult. Also, GPU programming really can be a can of worms. Finally, don't lose sight of your goals: it's easy for them to drift as a project progresses.
What's next for NaviGlove
I'm not really sure. I talked to some people about whether they'd like to try it, but the prototype is still very rough and will need a lot of development work. Still, I think there may be a credible business case. Perhaps I should launch a Kickstarter...
Built With
- c
- c++
- embedded
- opencv
- pic
- raspberry-pi
- serial
- sony-eye
- voice-coil
