Inspiration
While brainstorming possible ideas, we all agreed that we wanted to build something useful that would have a big impact.
What it does
We designed a working prototype that helps people with visual impairments navigate unfamiliar environments.
How we built it
A Raspberry Pi, sunglasses, and very little sleep. We each coded different parts of the product and integrated them at the end. Ultrasonic sensors measure the distance to obstacles and alert the user.
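The ranging step works by timing an ultrasonic echo: the sensor emits a ping, and the round-trip time of the reflection gives the distance. A minimal sketch of that calculation, assuming an HC-SR04-style sensor (the alert threshold and function names here are our illustrative choices, not exact values from the build):

```python
# Sketch of the ultrasonic ranging logic used to warn the wearer of
# nearby obstacles. Constants and threshold are assumptions.

SPEED_OF_SOUND_CM_S = 34300  # approximate speed of sound in air at 20 °C

def echo_to_distance_cm(pulse_seconds):
    """Convert a round-trip echo time into a one-way distance in cm."""
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2

def should_alert(distance_cm, threshold_cm=100):
    """Alert the user when an obstacle is closer than the threshold."""
    return distance_cm < threshold_cm

# On the Raspberry Pi, pulse_seconds would come from timing the sensor's
# echo pin with RPi.GPIO; here we only demonstrate the math.
if __name__ == "__main__":
    pulse = 0.004  # 4 ms round trip
    dist = echo_to_distance_cm(pulse)
    print(f"{dist:.1f} cm, alert={should_alert(dist)}")  # 68.6 cm, alert=True
```

Halving the round-trip time is the key step: the ping travels to the obstacle and back, so the one-way distance is half the total path.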
Challenges we ran into
At first, we had a lot of trouble coming up with ideas. We wanted to do something with VR, but it was beyond the scope of our abilities. At around 11pm on Friday we finally settled on an idea and started working. Many of us had little prior experience to bring into this hackathon, so there was a lot of learning on the spot and adapting to new situations. Many of the solutions we considered turned out to be infeasible due to a lack of hardware or data.
Accomplishments that we're proud of
We completed a final working prototype. We cross-compiled OpenCV with little prior experience; none of us had used OpenCV before the hackathon, and only one of us had played around with Python.
What we learned
We implemented text-to-speech with Python and learned to work with real-time data and processes. Some of us spent about six hours figuring out how to install Python, OpenCV, and NumPy (with no prior programming experience) so that we could use them to recognize faces (we initially wanted object recognition as well, but we did not have two cameras).
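The text-to-speech step boils down to composing an alert sentence and handing it to a speech engine. A hedged sketch of that idea (the function name, wording, and parameters are our illustrative assumptions; the commented gTTS call shows one way to voice the phrase via the Google Text-to-Speech API listed under Built With):

```python
# Sketch: turn detection results into a spoken alert phrase.
# The wording and function names are illustrative assumptions.

def alert_phrase(num_faces, distance_cm=None):
    """Build the sentence the wearer hears."""
    parts = []
    if num_faces == 1:
        parts.append("One face ahead")
    elif num_faces > 1:
        parts.append(f"{num_faces} faces ahead")
    if distance_cm is not None and distance_cm < 100:
        parts.append(f"obstacle {int(distance_cm)} centimeters away")
    return ", ".join(parts) if parts else "Path clear"

# Voicing the phrase with gTTS could look like:
#   from gtts import gTTS
#   gTTS(alert_phrase(1, 68)).save("alert.mp3")
# followed by playing alert.mp3 on the Pi's audio output.

if __name__ == "__main__":
    print(alert_phrase(2, 68.6))  # 2 faces ahead, obstacle 68 centimeters away
```

Keeping the phrase builder separate from the speech engine makes it easy to test the wording without audio hardware or a network connection.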
What's next for I See Pi
Given more time, we would implement object recognition and more sophisticated localized GPS. We would also miniaturize the components.
Built With
- google-text-to-speech-api
- opencv
- popsicle-sticks
- python
- raspberry-pi