Currently, Swiggy has no option for people who lack the technical know-how to use an app or a web-based solution. This underserved group represents a large market in both Tier 1 and Tier 2 cities, and we aim to bring them on board. There is also no support for visually impaired users.
What it does
It creates a voice abstraction layer over Swiggy functions and enables voice-based use of the application.
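A minimal sketch of such a layer: spoken audio is transcribed, the transcript is mapped to a structured action, and that action can then be forwarded to a backend API. All function names, action names, and the intent rules here are illustrative assumptions, not actual Swiggy APIs.

```python
# Minimal sketch of the voice abstraction layer.
# transcribe() and the action names are hypothetical placeholders.

def transcribe(audio_bytes):
    """Placeholder for a speech-to-text engine (e.g. a cloud STT service)."""
    # A real system would call an STT API here; for demonstration we
    # assume the audio has already been transcribed.
    return audio_bytes.decode("utf-8")

def parse_intent(transcript):
    """Map a spoken request to a Swiggy-style action and its arguments."""
    text = transcript.lower()
    if "track" in text:
        return {"action": "track_order"}
    if "order" in text:
        return {"action": "search_restaurants",
                "query": text.split("order", 1)[1].strip()}
    return {"action": "unknown"}

# Example: a spoken request becomes a structured API call.
intent = parse_intent(transcribe(b"Order a margherita pizza"))
print(intent)  # {'action': 'search_restaurants', 'query': 'a margherita pizza'}
```

In the real pipeline the rule-based `parse_intent` would be replaced by proper keyword extraction and NLP, but the overall shape (audio → text → structured action) stays the same.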
How we built it
Challenges we ran into
- Achieving good speech-to-text and text-to-speech conversion
- Keyword extraction
- Noise removal
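To illustrate the keyword-extraction challenge, here is a deliberately simple stop-word-filtering sketch (a real pipeline would need far more than this, which is why it was a challenge; the stop-word list below is an assumption for the example).

```python
# Toy keyword extraction via stop-word filtering.
# STOP_WORDS is a tiny illustrative list, not a production resource.

STOP_WORDS = {"i", "want", "to", "a", "an", "the", "please", "me", "get", "order"}

def extract_keywords(transcript):
    """Keep only the content words from a transcribed request."""
    tokens = transcript.lower().replace(",", " ").split()
    return [t for t in tokens if t not in STOP_WORDS]

print(extract_keywords("I want to order a paneer tikka, please"))
# ['paneer', 'tikka']
```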
Accomplishments that we're proud of
- An API-based solution into which existing Swiggy APIs can be plugged
- Support for voice-based activation from any client
- Making the app accessible to visually impaired users
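The pluggable, client-agnostic design can be sketched as a small gateway: existing API calls are registered under voice-action names, and any client that supplies recognized text can dispatch them. The class and handler below are hypothetical illustrations, not the actual implementation.

```python
# Sketch of a pluggable voice gateway (names are illustrative).

class VoiceGateway:
    def __init__(self):
        self.handlers = {}

    def register(self, action, handler):
        """Plug an existing API call in under a voice-action name."""
        self.handlers[action] = handler

    def dispatch(self, action, **kwargs):
        """Invoke the registered handler, regardless of which client asked."""
        if action not in self.handlers:
            return {"error": "unsupported action"}
        return self.handlers[action](**kwargs)

gateway = VoiceGateway()
# A stand-in for a real restaurant-search API call:
gateway.register("search", lambda query: {"results": f"restaurants matching {query!r}"})
print(gateway.dispatch("search", query="dosa"))
```

Because the gateway only deals in action names and keyword arguments, a phone client, a web client, or a screen-reader front end can all drive it the same way.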
What we learned
A lot about NLP, including named-entity recognition (NER) and part-of-speech (POS) tagging, as well as machine learning and voice recognition.
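As a flavor of the NER idea mentioned above, here is a toy gazetteer-based tagger. A production system would use a trained model (e.g. spaCy's), and the dish/cuisine lists below are assumptions for the example.

```python
# Toy gazetteer-based named-entity recognition.
# DISHES and CUISINES are tiny illustrative lists.

DISHES = {"biryani", "dosa", "pizza"}
CUISINES = {"italian", "chinese", "south indian"}

def tag_entities(transcript):
    """Label known dish and cuisine mentions in a transcript."""
    text = transcript.lower()
    entities = []
    for dish in DISHES:
        if dish in text:
            entities.append((dish, "DISH"))
    for cuisine in CUISINES:
        if cuisine in text:
            entities.append((cuisine, "CUISINE"))
    return sorted(entities)

print(tag_entities("Find me a South Indian dosa place"))
# [('dosa', 'DISH'), ('south indian', 'CUISINE')]
```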
What's next for swiggy-hackathrone-demo
- Enabling phone- and call-based activation
- Integration with the Swiggy app and site for increased accessibility