Inspiration
One of our teammates has a relative who recently developed a severe visual impairment. In conversation, the relative described how difficult it was to find a suitable aid for their condition. In response, our team surveyed existing solutions, weighed their respective benefits and drawbacks, and set out to build a new one that addresses the challenges our teammate's family member faces.
What it does
EchoVision is an app that uses a smartphone camera to capture live video. A machine-learning depth-estimation model then identifies the distance of objects at the center of the frame, and the app converts that distance into sound: higher-pitched tones indicate close objects, and lower-pitched tones indicate distant ones. Users can tell whether anything is nearby simply by pointing their phone camera in a given direction.
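The depth-to-pitch idea above can be sketched as a simple mapping function. This is a minimal illustration, not EchoVision's actual code: the frequency range, maximum depth, and linear interpolation are assumptions chosen only to show the closer-means-higher-pitch behavior.

```python
# Sketch of a depth-to-pitch mapping: near objects -> high pitch,
# far objects -> low pitch. All constants are illustrative assumptions.

MIN_FREQ_HZ = 220.0   # tone for the farthest depth considered
MAX_FREQ_HZ = 1760.0  # tone for an object right in front of the camera
MAX_DEPTH_M = 10.0    # depths beyond this clamp to the minimum pitch


def depth_to_frequency(depth_m: float) -> float:
    """Map an estimated depth in meters to an audio frequency in Hz."""
    # Clamp the estimate into the supported range [0, MAX_DEPTH_M].
    depth_m = max(0.0, min(depth_m, MAX_DEPTH_M))
    # Linear interpolation: depth 0 -> MAX_FREQ_HZ, MAX_DEPTH_M -> MIN_FREQ_HZ.
    t = depth_m / MAX_DEPTH_M
    return MAX_FREQ_HZ + t * (MIN_FREQ_HZ - MAX_FREQ_HZ)
```

A logarithmic mapping could also work here, since human pitch perception is roughly logarithmic, but a linear map keeps the sketch simple.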
How we built it
We built the application with several frameworks, technologies, and programming languages. The frontend uses the React Native framework with JavaScript and TypeScript to initialize the app, grab frames from the camera, and play a tone based on what the camera sees. To connect the Python backend to the React Native frontend, we built a Flask REST API: the frontend sends requests to the API and stays loosely coupled to backend details. The ML model was trained, set up, and run in Python with extensive use of PyTorch. The model estimates depth from the RGB image, and the appropriate output frequency is passed back through the API for React Native to play.
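The request flow above can be sketched as a small Flask endpoint. The route name, JSON field names, and the `estimate_center_depth` helper are hypothetical, not EchoVision's actual API; the real depth model call is stubbed out since the trained PyTorch model is not shown here.

```python
# Sketch of a Flask REST API bridging a React Native frontend and a
# Python depth model. Names and the frequency formula are assumptions.
import base64

from flask import Flask, jsonify, request

app = Flask(__name__)


def estimate_center_depth(image_bytes: bytes) -> float:
    """Stand-in for the PyTorch depth model: returns the estimated
    depth (meters) at the center of the frame. A real implementation
    would decode the image and run model inference here."""
    return 2.5  # dummy value for the sketch


@app.route("/depth", methods=["POST"])
def depth():
    # Assume the frontend POSTs the camera frame as base64-encoded JPEG.
    frame_b64 = request.get_json()["frame"]
    image_bytes = base64.b64decode(frame_b64)
    depth_m = estimate_center_depth(image_bytes)
    # Closer objects map to higher pitch; the linear map is illustrative.
    freq_hz = 1760.0 - 154.0 * min(depth_m, 10.0)
    return jsonify({"depth_m": depth_m, "frequency_hz": freq_hz})
```

On the frontend side, the React Native app would `fetch` this endpoint with each sampled frame and feed the returned frequency to its audio output; keeping the model behind an HTTP boundary is what lets the frontend stay agnostic of the PyTorch details.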
Challenges we ran into
Since this was the first time many of our team members had integrated and used these frameworks, we ran into several issues, particularly when setting up environments and dependencies. We also initially struggled with communication between the API and React Native because of weak and intermittent network connections.
Accomplishments that we're proud of
We are really proud of getting the REST API communicating with the React Native app. We were unfamiliar with most of the technologies we used, so we experienced a great deal of failure and growth throughout this hackathon.
What we learned
Be realistic. You need to adapt when the situation demands it; otherwise you'll never overcome the obstacles you face. In building this application, we learned to balance realism with perseverance when we got stuck.
What's next for EchoVision
Our next steps are minor debugging and reducing latency, which is straightforward while our code is hosted locally. Once the app is stable, we plan to deploy it through a cloud provider such as AWS. We also plan to reach out to potential users of our application and improve the app to better fit their needs.