We want to make our society more accessible to people with disabilities and make sure that areas where injuries could occur are quickly addressed.
What it does
The application detects objects through a device worn by the user and uploads an image to an online database when a time-of-flight (TOF) sensor detects that the user is about to collide with an object (within 70 cm). The image can be viewed on our website along with the time and date of the collision.
How we built it
Hardware: We used time-of-flight (TOF) sensors to measure the distance between the user and objects in their path. We read the values through an Arduino and sent them to Python over serial.
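The serial hand-off can be sketched as below. This is an assumed setup, not our exact code: it supposes the Arduino prints one millimeter reading per line, that pyserial is installed, and that the port name and baud rate are placeholders to adjust.

```python
def parse_distance(line: str):
    """Parse one serial line such as '412' into a distance in mm, or None."""
    line = line.strip()
    return int(line) if line.isdigit() else None

def read_distances(port="/dev/ttyUSB0", baud=9600):
    """Yield distance readings the Arduino prints one per line over USB serial."""
    import serial  # pyserial; imported here so parse_distance works without hardware
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            d = parse_distance(ser.readline().decode("ascii", errors="ignore"))
            if d is not None:
                yield d
```

Parsing is kept in its own function so it can be tested without a board plugged in.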
Software: Python determines when the user is within 70 cm of an object and captures a snapshot of what the user was seeing at that moment. Object recognition from online libraries then identifies what is in the snapshot and annotates the image.
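A minimal sketch of the trigger logic: the 70 cm threshold comes from the description above, while the one-shot re-arm behavior is an assumption added to avoid flooding the database with repeated snapshots of the same obstacle.

```python
from datetime import datetime

THRESHOLD_MM = 700  # 70 cm collision threshold from the write-up

class CollisionTrigger:
    """Fire once when the distance drops below the threshold,
    then re-arm only after the user moves clear again."""

    def __init__(self, threshold_mm=THRESHOLD_MM):
        self.threshold_mm = threshold_mm
        self.armed = True

    def update(self, distance_mm):
        """Return a timestamp (to log with the snapshot) on a new collision,
        otherwise None."""
        if self.armed and distance_mm < self.threshold_mm:
            self.armed = False
            return datetime.now()
        if distance_mm >= self.threshold_mm:
            self.armed = True
        return None
```

The returned timestamp is what would be stored alongside the image for the website's date/time display.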
Challenges we ran into
Hardware: Several components produced unreliable readings, and problems connecting to the Arduino were prevalent throughout the project. For example, ultrasonic sensors would not provide an accurate distance value past 50 cm.
Software: The recognition libraries sometimes misidentified objects, so we had to retrain on our own data. Another issue we faced was getting Python to recognize the Arduino's serial connection through a USB-C hub.
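One way to debug that kind of hub problem is to enumerate the serial ports pyserial can see and pick the one that looks like an Arduino. The matching strings below are assumptions; actual port descriptions vary by board and operating system.

```python
def find_arduino_port(ports):
    """Return the device name of the first port whose description looks
    like an Arduino. `ports` is an iterable of (device, description)
    pairs, e.g. built from serial.tools.list_ports.comports()."""
    for device, description in ports:
        text = description.lower()
        if "arduino" in text or "usb serial" in text:
            return device
    return None

# Usage with pyserial installed:
#   from serial.tools import list_ports
#   port = find_arduino_port(
#       (p.device, p.description) for p in list_ports.comports())
```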
Accomplishments that we're proud of
Hardware: Getting the TOF sensors to output data and reading that data into Python.
Software: Object recognition, and running multiple processes at the same time through multithreading (async). These processes include collecting data from the TOF sensor and running live object recognition.
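The two concurrent processes can be sketched with Python's threading module and a queue as the hand-off between them. The fixed reading list stands in for the live TOF stream, and the function names are assumptions for illustration.

```python
import queue
import threading

def sensor_loop(readings, out_q):
    # Stand-in for the TOF loop: in the project this reads from serial.
    for d in readings:
        out_q.put(d)
    out_q.put(None)  # sentinel: no more readings

def recognition_loop(in_q, hits):
    # Stand-in for the live recognition loop: consume readings and
    # record the ones inside the 70 cm (700 mm) collision threshold.
    while (d := in_q.get()) is not None:
        if d < 700:
            hits.append(d)

readings = [900, 650, 800, 300]
q, hits = queue.Queue(), []
t1 = threading.Thread(target=sensor_loop, args=(readings, q))
t2 = threading.Thread(target=recognition_loop, args=(q, hits))
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue keeps the sensor thread from blocking on the (slower) recognition work, which is the point of running the two loops concurrently.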
What we learned
- Learned how to interface an Arduino with Python.
- Learned how to use object-recognition libraries.
What's next for Spatial
Improvements to the design include detecting more objects, making the application more accessible by adding location data, and building a full response system for EMS.