Meet the team

Angela Miles configured the Arduino hardware for the demo and created the presentation. Brandon Wheeler sourced a reliable camera detection API and submitted the project. Zach Stehura wrote the Arduino software for the demo and pitched the presentation.

Inspiration

We wanted to create a system that solves a real problem in the world. We believe that our technology, used in conjunction with other sound-based cues, can help blind people navigate today's ever-changing and increasingly congested environments.

What it does

It uses a combination of ultrasonic technology and existing camera detection APIs to warn the user of nearby objects in their path in real time.

The hardware

The Arduino: We used an Arduino Mega, programmed with the Arduino software. The only additions for this demo were the ping sensor and a buzzer for audible alerts.

The Camera: We used an HTML demo hosted by ModelDepot, built on Tensorflow.js. It's a free-to-use, open-source demo that we're using for now; later down the road, we can use the Tensorflow.js API to build our own detection from scratch.

How we built it

We built it using the Tensorflow.js API for camera-based object detection, plus an Arduino with a ping sensor for ultrasonic detection and a vibration motor for alerting the user. We also plan to add sensors such as an accelerometer and gyroscope to help with the detection process.

Challenges we ran into

Ultrasonic object detection has real drawbacks, yet we wanted a system that is both economically viable and reliable. We overcame this tension by introducing a few low-cost cameras.

Challenges for the future

With all of these sensors and cameras running onboard detection, and the need for the system to be as fast and low-latency as possible, processor speed is a huge factor, all while keeping costs reasonable. We believe we've solved this by lowering the camera capture quality to speed up the Tensorflow.js detection API, and by using Bluetooth transmitters so the two wirelessly linked devices can communicate and reduce the need for separate high-level processing. With this, we can get detection using the ultrasonic sensors down to 1/10th of a second and detection using the camera down to 1/3rd of a second.

Accomplishments that we're proud of

Ultrasonic object detection is a hard thing to grasp; you can't perceive the pulses yourself to know it's working as intended. We're proud of building a demo that uses the Arduino's support for multiple peripherals to play an audible alert whenever an object is detected.

What we learned

Our team is made up of computer science majors, two-thirds of us concentrating in Python and Visual Basic. Creating a wearable fabric demo, writing a C++ ultrasonic detection system on the Arduino, and sourcing and understanding the JavaScript-based Tensorflow.js API were all challenges none of us had faced before, and we faced them together.

What's in the name, Blindlets?

It's a combination of Blind and Anklets. Some people on our team don't like this name. Those people aren't seeing as clearly as they would with Blindlets.

Built With

arduino, c++, javascript, tensorflow.js
