Inspiration
Helping blind people identify colors has always struck us as a unique and meaningful idea, so we wanted to develop it further to help people in need. With today's fast-growing AI technology, we can apply these advances to the project and make it far more capable and efficient.
What it does
The Little Eye application helps blind and low-vision users identify surrounding objects and their colors simply by capturing an image with their phone. Every action can be performed through voice control.
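The color step can be illustrated with a short sketch: the Google Cloud Vision API can return an image's dominant colors as RGB triples, and the app then needs a human-readable name for the text-to-speech engine to speak. The palette and function below are illustrative assumptions, not the app's actual code.

```python
import math

# A small palette of spoken color names (an illustrative subset,
# not the app's actual palette).
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "orange": (255, 165, 0),
    "purple": (128, 0, 128),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "gray": (128, 128, 128),
}

def nearest_color_name(r, g, b):
    """Return the palette name closest to (r, g, b) in RGB space."""
    return min(
        PALETTE,
        key=lambda name: math.dist(PALETTE[name], (r, g, b)),
    )

# e.g. a dominant color the Vision API might report for a ripe tomato
print(nearest_color_name(210, 40, 30))  # → red
```

A simple nearest-neighbor lookup in RGB space is enough for a handful of names; a perceptual color space (such as CIELAB) would give better matches for a larger palette.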
How we built it
First, we brainstormed our idea on a whiteboard. We came up with a feature design, then mapped out the program's main sequence, looking for ways to simplify each step and connect them cleanly. We then decided which technologies to use, listed out every scenario, and researched the best solution for each.
Challenges we ran into
We ran into a lot of challenges, since this was our first time working with so many new technologies (AI, text-to-speech, image recognition, etc.). While integrating the APIs we hit many syntax problems, and finishing the project within two days was not easy.
Accomplishments that we're proud of
Each of us learned many things we hadn't learned in school, and we had a chance to network and make many new friends.
What we learned
We learned how to fetch data from open APIs and how to work on a project as a team.
What's next for Little Eye
We will keep building this program to make it faster, better optimized, and able to handle larger scale.
Built With
- firebase
- google-cloud-speech-to-text
- google-cloud-text-to-speech
- google-cloud-vision-api
- javascript
- python