We want to help people who struggle with visual impairments. We believe a project like this can improve their quality of life.
How we built it/How it works
Challenges we ran into
Hardware often has issues when working with other pieces of hardware, and each device we integrated into the project presented new challenges to overcome.
Azure was completely new to the group and learning how everything functioned was quite the challenge.
Accomplishments that we're proud of
We are proud of the number of APIs and technologies implemented in this project. Each API can be a struggle to figure out, but in the end the final product exceeded our expectations.
What we learned
This hack gave us the opportunity to use several new technologies and APIs. We used the Google Speech-to-Text API, the Clarifai API, and the Microsoft Cognitive Services Computer Vision API. We were also able to explore the many services offered by Azure: while we ultimately used its storage, web hosting, and cognitive image recognition services, we also experimented with its SQL database and speech-to-text offerings.
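For concreteness, here is a minimal sketch of how a response from the Computer Vision describe-image endpoint might be parsed into a single caption for the device to present. The response shape follows the publicly documented API format, and the `extract_caption` helper name and confidence threshold are our assumptions for illustration, not the project's actual code.

```python
def extract_caption(response_json, min_confidence=0.0):
    """Return the highest-confidence caption text from a Computer Vision
    describe-image style response, or None if nothing meets the threshold."""
    captions = response_json.get("description", {}).get("captions", [])
    best = None
    for cap in captions:
        if cap.get("confidence", 0.0) >= min_confidence and (
            best is None or cap["confidence"] > best["confidence"]
        ):
            best = cap
    return best["text"] if best else None

# Example response in the shape the API documents:
sample = {
    "description": {
        "tags": ["outdoor", "person", "street"],
        "captions": [
            {"text": "a person walking down a street", "confidence": 0.87}
        ],
    }
}

print(extract_caption(sample))  # -> a person walking down a street
```

In practice the JSON above would come from an authenticated HTTP request to the service, with the image bytes in the request body and the subscription key in a header.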
What's next for Theia
One of the great things about building on the Microsoft Cognitive Services Computer Vision API and the Clarifai API is that as those services grow and become more effective at identifying the contents of photos, our application can provide increasingly accurate descriptions of the photos users take. On our end, we would like to develop a more detailed display website that lets users track the photos they've taken, as well as potentially add a user login system to centralize the storage of user data. Finally, we would like to shrink the device and eliminate its obtrusive cables by connecting it to the internet over 3G or through the user's cellular device.
For this hack we registered the sightb.org and sightaloud.tech domains and pointed them at our website.