Inspiration

Globally, uncorrected refractive errors are the main cause of moderate and severe visual impairment; cataracts remain the leading cause of blindness in middle- and low-income countries. About 90% of the world's visually impaired live in low-income settings. Most of the time, existing assistive devices fail to help the user perceive the environment because they provide little sensory information about the surroundings. A conventional cane only gives information about the surface the person is tapping.

What it does

The cane performs vision-based analysis of the environment using deep neural networks and relays the result to the user as audio feedback. In addition, a range sensor at the base of the cane warns the user about major irregularities on the ground.
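A minimal sketch of this loop is shown below. It assumes a USB camera at index 0, a placeholder `caption_image()` standing in for the neural captioning model, and the `pyttsx3` text-to-speech library for the audio feedback; none of these specifics come from the original write-up.

```python
# Sketch only: camera index, caption_image() placeholder, and pyttsx3 TTS
# are assumptions, not the project's actual code.
import cv2
import pyttsx3

def caption_image(frame):
    # Placeholder for the deep-network caption model (e.g. NeuralTalk2).
    return "a chair in front of you"

def main():
    cam = cv2.VideoCapture(0)   # 720p camera mounted near the handle
    tts = pyttsx3.init()        # local text-to-speech engine
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                continue
            caption = caption_image(frame)
            tts.say(caption)    # speak the scene description to the user
            tts.runAndWait()
    finally:
        cam.release()

if __name__ == "__main__":
    main()
```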

How we built it

The project had two major parts: hardware and software.

Hardware: The cane uses a retractable design modified from a monopod stand for DSLR cameras. The handle was 3D printed and attached to the monopod with a 1/4" screw. An HC-SR05 ultrasonic sensor module was attached to the bottom end of the cane, and a 720p HD camera was mounted near the handle to give a good field of view for analysis.

Software: The camera feed is processed on an NVIDIA Jetson TX2, where deep-network models (Inception v3 and NeuralTalk2 in TensorFlow) describe the scene; the description, together with the ultrasonic readings from the cane tip, is relayed to the user as audio feedback.
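The sketch below shows one way the HC-SR05 at the cane tip could be read on the Jetson. It assumes the Jetson.GPIO library, board pins 16/18 for trigger/echo, and a 10 cm threshold for what counts as a ground irregularity; these choices are illustrative, not taken from the actual build.

```python
# Assumed wiring: TRIG on board pin 16, ECHO on board pin 18 (illustrative).
import time
import Jetson.GPIO as GPIO

TRIG, ECHO = 16, 18

def distance_cm():
    GPIO.output(TRIG, GPIO.HIGH)        # 10 us trigger pulse
    time.sleep(10e-6)
    GPIO.output(TRIG, GPIO.LOW)

    start = time.time()
    while GPIO.input(ECHO) == GPIO.LOW:   # wait for echo to start
        start = time.time()
    stop = time.time()
    while GPIO.input(ECHO) == GPIO.HIGH:  # wait for echo to end
        stop = time.time()

    return (stop - start) * 34300 / 2     # speed of sound ~343 m/s

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIG, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(ECHO, GPIO.IN)

try:
    baseline = distance_cm()              # distance to flat ground at rest
    while True:
        d = distance_cm()
        if abs(d - baseline) > 10:        # large change => step, pothole, curb
            print("ground irregularity ahead")
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```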

Challenges we ran into

1) We tried pairing a Bluetooth speaker with the Jetson TX2 for audio feedback.

Accomplishments that we're proud of

We are proud that our project has the potential to help people with visual disabilities and could improve their quality of life.

What we learned

What's next for Smart Cane for the Visually Impaired

Future work on this project is to generate accurate image captions with a model trained on the Jetson TX2.
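As a starting point, the encoder half of such a captioning pipeline could look like the sketch below: Inception v3 from `tf.keras.applications` turns a captured frame into a feature vector that a captioning decoder such as NeuralTalk2 would consume. The file name and the Keras-based workflow are assumptions, not the team's code.

```python
# Hedged sketch: standard Keras Inception v3 feature extraction, not the
# project's training pipeline.
import numpy as np
import tensorflow as tf

encoder = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")

def image_features(path):
    img = tf.keras.preprocessing.image.load_img(path, target_size=(299, 299))
    x = tf.keras.preprocessing.image.img_to_array(img)
    x = tf.keras.applications.inception_v3.preprocess_input(x)
    return encoder.predict(np.expand_dims(x, 0))[0]   # 2048-d feature vector

feats = image_features("frame.jpg")   # hypothetical captured frame
print(feats.shape)                    # (2048,)
```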

Built With

  • jetsontx2
  • inceptionv3
  • neuraltalk2
  • tensorflow
  • solidworks