Inspiration
Time and privacy are precious in anyone's life, and especially so for a visually challenged person. Until now, their main way of filling out a form has been to ask another person for help, which means divulging personal and sensitive information, possibly in the presence of others. Automating this process would make them self-sufficient and go a long way in their day-to-day lives. Doing it with no other interface, so that it still feels like talking to a human being, keeps the perks of a conversation without the privacy shortcomings, which is even better since it accomplishes essentially the same thing.
What it does
As a system to assist the visually challenged, we take several approaches to solve different problems:
- Since most forms are available online, or at least have a digital copy, we crawl the web for a PDF copy of the form and parse it so it can be filled out interactively. Interfaced with Alexa, an assistive bot, the whole process is speech-based, which makes it feel natural.
- The platform's inputs are not limited to PDFs, however! It can also fill online HTML forms, and for PDF forms without an online link, we provide the option to take a picture and then fill it out the same way through Alexa.
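The interactive fill-up described above can be sketched as a small state machine: the parsed form yields an ordered list of field names, and each conversational turn asks for the next unanswered field. A minimal sketch, assuming a hypothetical `FormSession` class and example field names (not Candle's actual code):

```python
class FormSession:
    """Tracks progress through a parsed form, one field per conversational turn."""

    def __init__(self, fields):
        self.fields = list(fields)   # ordered field names parsed from the form
        self.answers = {}            # field name -> spoken answer

    def next_prompt(self):
        """Return the question for the next unfilled field, or None when done."""
        for field in self.fields:
            if field not in self.answers:
                return f"What is your {field}?"
        return None

    def record_answer(self, text):
        """Store the user's reply against the field currently being asked."""
        for field in self.fields:
            if field not in self.answers:
                self.answers[field] = text
                return


session = FormSession(["full name", "date of birth", "address"])
print(session.next_prompt())      # asks for the first unfilled field
session.record_answer("Jane Doe")
print(session.next_prompt())      # moves on to the next field
```

Once `next_prompt()` returns `None`, the collected `answers` dict can be written back into the PDF or HTML form.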
How we built it
- Amazon Alexa's developer APIs
- Python Flask
- Redis
- Amazon AWS
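Wiring these pieces together, the Flask service receives Alexa requests and replies with JSON in Alexa's custom-skill response format. A minimal sketch of building that payload (the helper name is ours, not part of the Alexa SDK):

```python
def build_alexa_response(speech_text, end_session=False):
    """Build a JSON-serializable body following Alexa's custom-skill
    response format: a version string plus a response object containing
    plain-text output speech and a session flag."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }


# In the Flask view, this dict would be returned via jsonify(...).
body = build_alexa_response("What is your full name?")
print(body["response"]["outputSpeech"]["text"])
```

Keeping `shouldEndSession` false is what lets the skill hold a multi-turn conversation while the form is being filled.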
Challenges we ran into
- There were many moving parts, and interfacing them with one another was quite difficult. Alexa in particular has many guidelines that must be followed for it to function as intended, and making the conversation feel natural took a big chunk of our time.
- Not all PDF forms are 'fillable' online, and creating an all-encompassing Python interface that can both read from and write back to a PDF was quite challenging.
Accomplishments that we're proud of
- Much too often, CS students just fall in line behind projects at big corporate companies... but this hackathon was really refreshing for all of us. The thought of helping visually impaired people in any way, and thus ultimately giving back to the community, is very heartening to us.
- Making this much progress in less than a day is a point of pride for everyone on our team.
What we learned
- Alexa has so much potential!
What's next for Candle
- Integrate OCR so that images (of PDFs) can be more accurately parsed.
- The platform is really just a starting point: the whole pipeline was built to accommodate further extensions. The input can be anything (a passage of text, for instance), and the Candle skill can converse with people about it, helping them along the way.