Inspiration
What it does
Uses Amazon Echo and a combination of APIs to help visually impaired people with the following tasks:
- Read news articles collated from various sources and categorized, delivered through Amazon Echo
- Stay aware of their surroundings: take a photograph with an Android app, then use Echo and the Microsoft Cognitive APIs to describe what the user captured (see the sketch below)
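The scene description relies on the Computer Vision "describe" endpoint from Microsoft Cognitive Services. The snippet below is a minimal sketch of that call; the region in the URL, the subscription key, and the image path are placeholders, not the project's actual values.

```python
import requests

# Assumed region/endpoint for the Computer Vision "describe" call; key and
# image path below are illustrative placeholders.
VISION_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"
SUBSCRIPTION_KEY = "<your-cognitive-services-key>"

def describe_image(image_path):
    """Send a photo to the Computer Vision API and return its best caption."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()

    response = requests.post(
        VISION_URL,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()

    # The response contains one or more candidate captions with confidences;
    # keep the one the service is most confident about.
    captions = response.json()["description"]["captions"]
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"]

if __name__ == "__main__":
    print(describe_image("photo_from_android_app.jpg"))
```

The returned caption can then be spoken back to the user through the Echo.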
How I built it
Used Amazon Alexa, the Microsoft Cognitive APIs, Flask, Apache Cordova (for a minimal Android app), JavaScript, RapidAPI, and the Clarifai APIs to build a system that achieves these tasks. A minimal sketch of the Flask glue server follows.
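The sketch below shows how a small Flask server could tie the pieces together: the Cordova app posts the captured photo, the server captions it, and the Alexa skill backend reads the most recent caption. The route names, the in-memory store, and the `describe_image` helper (imported here under a hypothetical module name) are assumptions for illustration, not the project's actual layout.

```python
from flask import Flask, request, jsonify

# Hypothetical module holding a describe_image() helper like the sketch above.
from vision import describe_image

app = Flask(__name__)

# Simple in-memory store for the most recent caption (illustrative only).
latest_caption = {"text": "No photo has been described yet."}

@app.route("/photo", methods=["POST"])
def receive_photo():
    """The Cordova app posts the captured photo here; we caption and remember it."""
    image_file = request.files["image"]
    image_file.save("latest.jpg")
    latest_caption["text"] = describe_image("latest.jpg")
    return jsonify(caption=latest_caption["text"])

@app.route("/caption", methods=["GET"])
def get_caption():
    """The Alexa skill backend reads the most recent caption from here."""
    return jsonify(caption=latest_caption["text"])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```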
Challenges I ran into
The biggest problem we faced was that RapidAPI did not play well with EC2. The Microsoft Cognitive APIs were the best fit for our purpose, but they behaved inconsistently, and it took us a long time to figure out why.
Accomplishments that I'm proud of
We worked with many moving components and APIs, and we achieved everything we set out to achieve at the start of the project.
What I learned
How to use many different APIs and combine them into a single system. It also took a lot of thinking at the start to understand the problem from the perspective of a visually impaired person.
What's next for Lightbringer
Come up with more use cases and flesh out the idea further.