Inspiration
We wanted to create a solution that makes a real impact for the visually impaired community. We decided to build something that replaces a cane, sparing the user the burden of carrying a stick, by integrating that functionality into something the user already carries every day.
What it does
An app that uses machine learning and image-recognition technology to detect obstacles such as pedestrians, trees, stationary objects, and even objects in motion. The app sends out vibrations that instruct the user how to avoid the object.
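As a rough sketch of the vibration-feedback idea, the pattern below maps an obstacle's estimated distance to pulse timings. The class name, thresholds, and pattern values are our own illustrative assumptions, not the app's actual values; on Android the resulting array would be handed to the platform's `Vibrator` API.

```java
// Hypothetical sketch: map a detected obstacle's distance to a
// vibration pattern. Thresholds and durations are illustrative only.
public class VibrationCue {

    /**
     * Returns an off/on vibration pattern in milliseconds
     * (Android convention: pause, pulse, pause, pulse, ...).
     * The closer the obstacle, the longer and more frequent the pulses.
     */
    public static long[] patternFor(double distanceMeters) {
        if (distanceMeters < 1.0) {
            return new long[] {0, 400, 100, 400}; // urgent: long pulses, short gap
        } else if (distanceMeters < 3.0) {
            return new long[] {0, 200, 300, 200}; // warning: medium pulses
        } else {
            return new long[] {0, 100};           // notice: one short pulse
        }
    }
}
```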
How I built it
Using data regression, we derived a formula to estimate the distance to a detected object. We used machine learning and image recognition to differentiate between obstacles, and a pre-trained classifier from TensorFlow to provide information about each detected object. We then used Android Studio to create a basic app that showcases our concept, and implemented our algorithm for detecting the proximity of each object. One feature we plan to add is an algorithm that helps determine a safe path for the user.
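The regression step can be sketched like this (the calibration numbers and class names below are illustrative assumptions, not our actual data): for a fixed camera, the distance to an object of roughly known size is approximately inversely proportional to its height in pixels, d ≈ k / h, so fitting k by least squares over measured samples yields a formula that turns a detection's bounding-box height into a distance estimate.

```java
// Sketch of distance calibration via regression (illustrative data).
// Model: distance ≈ k / pixelHeight; k is fit by least squares.
public class DistanceModel {

    /** Fit k in d = k / h, minimizing squared error over calibration pairs. */
    public static double fitK(double[] pixelHeights, double[] distancesM) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < pixelHeights.length; i++) {
            double inv = 1.0 / pixelHeights[i];
            num += distancesM[i] * inv; // accumulates Σ d_i / h_i
            den += inv * inv;           // accumulates Σ 1 / h_i²
        }
        return num / den;
    }

    /** Estimate distance (meters) from a bounding box's pixel height. */
    public static double estimateDistance(double k, double pixelHeight) {
        return k / pixelHeight;
    }
}
```

With calibration pairs like (400 px, 1 m), (200 px, 2 m), (100 px, 4 m), the fit recovers k = 400, and a 50 px detection would be estimated at 8 m.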
Challenges I ran into
We initially wanted to use OpenCV to detect objects with high accuracy and efficiency, but we quickly realized that it was not feasible to train a classifier and build an app that runs quickly and efficiently on mobile devices, so we switched to Java and used Android Studio to build the app. Figuring out the right aspect-ratio algorithm was a huge obstacle that we moved past only after testing a great amount of raw data. Laying out a UI that is user-friendly for the visually impaired while still being aesthetically pleasing was also a real difficulty.
Accomplishments that I'm proud of
- Made image recognition possible.
- Understood machine learning and successfully implemented it.
- Developed a good marketing strategy.
- Made something that might be a step towards something big.
What I learned
As a team, we learned OpenCV and Python, and spent a lot of time tinkering with Android Studio. We also learned to use several video-editing programs while working on the marketing side of the app. During development we learned how to work with a powerful device, the DragonBoard, and how to crunch raw data into something that made sense.
What's next for Cicerone
Using edge-detection technology to detect lines that determine the exact path the user is currently on, making sure that he or she stays on track and eventually guiding the user to the specified destination.
P.S. We are already on it.
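A minimal sketch of the edge idea, under our own simplifying assumptions (this is not the planned implementation): scan a row of grayscale pixels and report the column with the strongest intensity change, a crude stand-in for locating a painted path line. A real version would likely use an OpenCV edge detector such as Canny followed by a Hough line transform.

```java
// Toy edge locator (illustrative only): finds the column in one
// grayscale image row where adjacent pixel intensities differ most.
public class LineFinder {

    /** Index of the largest absolute difference between adjacent pixels. */
    public static int strongestEdge(int[] grayRow) {
        int best = 0, bestDiff = -1;
        for (int i = 1; i < grayRow.length; i++) {
            int diff = Math.abs(grayRow[i] - grayRow[i - 1]);
            if (diff > bestDiff) {
                bestDiff = diff;
                best = i;
            }
        }
        return best;
    }
}
```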
Built With
- android
- android-studio
- java
- opencv
- tensorflow