The surge of misinformation reached new heights during the COVID-19 pandemic, so much so that UNESCO recognized it as a major threat and termed it a "disinfodemic" accompanying the pandemic. Under such conditions people often believe fake information, which perpetuates further and causes mental strain and anxiety.
False information and myths such as "drinking potent alcoholic drinks as a cure" can not only overshadow correct precautionary measures but also have adverse long-term effects.
However, if we could collect information from credible sources like the WHO and UNESCO in real time to answer users' queries and supply credible information, we could reduce the spread of the "disinfodemic". In the long run this would:
1) Reduce experimentation with unproven cures and remedies
2) Reduce heightened levels of fear
3) Prevent amplification of false information
4) Reduce anxiety levels and preserve mental health
5) Prevent contamination of true facts
What it does
The InfoBot provides the user with credible and up-to-date information on case counts, recoveries, precautionary measures, and more. The InfoBot receives the user's input via voice commands. Depending on what the user requires, the InfoBot fetches the latest information from the internet via web scraping and delivers it to the user as audio responses.
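To give a feel for the scraping step, here is a minimal sketch of pulling statistics out of page source. The markup, the `stat` class name, and the numbers are invented for illustration; in the actual bot the page source would come from Selenium (e.g. `driver.page_source`) rather than a canned string.

```python
from html.parser import HTMLParser

class StatsParser(HTMLParser):
    """Collects the text of elements whose class attribute is 'stat'."""
    def __init__(self):
        super().__init__()
        self._in_stat = False
        self.stats = []

    def handle_starttag(self, tag, attrs):
        if ("class", "stat") in attrs:
            self._in_stat = True

    def handle_data(self, data):
        if self._in_stat:
            self.stats.append(data.strip())
            self._in_stat = False

# Canned page source standing in for what Selenium would fetch.
sample_page = """
<div class="stat">1,234,567 confirmed cases</div>
<div class="stat">987,654 recoveries</div>
"""

parser = StatsParser()
parser.feed(sample_page)
print(parser.stats)
```

Once the raw text is extracted like this, the bot only needs to hand it to the text-to-speech layer to produce an audio answer.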
It has the following features:
1) Audio responses and speech recognition [user commands]
2) Choice of male or female voice assistant
3) Advanced error-handling conditions
4) Mental health support
5) Basic support for North American slang
6) Voice commands for a hands-free experience
7) Accessible to blind and deaf users [provides output as both audio and text]
8) Exception handling for personal questions
9) Ability to search for off-topic questions
How I built it
The InfoBot is primarily a web scraping project integrated with a preliminary response-handling algorithm, a speech recognition API, and a text-to-speech library [pyttsx3]. The web scraping is done with the selenium-python bindings.
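The three components fit together in a simple listen → interpret → fetch → speak loop. The sketch below shows that control flow with the speech-recognition, scraping, and text-to-speech steps stubbed out as plain functions; the names `listen`, `fetch_info`, and `speak`, and the canned data, are illustrative, not the project's actual API.

```python
def listen():
    # Stub for the speech-recognition step (Google speech-to-text
    # in the real bot); here it returns a canned query.
    return "How many recoveries are there?"

def fetch_info(tags):
    # Stub for the Selenium web-scraping step; returns canned data.
    data = {"recoveries": "987,654 recoveries so far."}
    return " ".join(data[t] for t in tags if t in data)

def speak(text):
    # Stub for the pyttsx3 text-to-speech step; just prints.
    print(text)
    return text

def handle_query():
    query = listen().lower()
    # Crude tag detection standing in for the KeyMatter algorithm.
    tags = [t for t in ("cases", "recoveries", "precautions") if t in query]
    return speak(fetch_info(tags))

handle_query()
```

Each stub can be swapped for the real library call without changing the surrounding control flow, which is what makes the three libraries "cross-functioning" in one program.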
The InfoBot uses an algorithm that I built to deliver appropriate responses. I call it the "KeyMatter" algorithm, as it identifies the key ideas behind the user's query and uses them to deliver appropriate responses.
The algorithm breaks the user's input into separate words and converts them to lower case [i.e. makes matching case-insensitive]. It then matches these words against a keyword-tag database that I created. Based on the tags identified, the algorithm obtains the latest information through web scraping and delivers customized output through the device audio.
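The tokenize-lowercase-match steps above can be sketched in a few lines. The keyword-tag entries here are invented for illustration; the bot's actual database and tag names are not shown in this write-up.

```python
# Hypothetical keyword-to-tag database; entries are illustrative only.
KEYWORD_TAGS = {
    "cases": "case_count",
    "infected": "case_count",
    "recovered": "recovery_count",
    "recoveries": "recovery_count",
    "mask": "precautions",
    "sanitizer": "precautions",
}

def match_tags(query):
    """Split the query into words, lower-case them, and map each
    word to a known tag, keeping first-seen order without duplicates."""
    tags = []
    for word in query.lower().split():
        tag = KEYWORD_TAGS.get(word.strip("?,.!"))
        if tag and tag not in tags:
            tags.append(tag)
    return tags

print(match_tags("How many people have recovered?"))  # ['recovery_count']
```

The returned tag list then drives which page the scraper visits and which response template is spoken back to the user.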
Challenges I ran into
I had never used selenium-python before and ran into many problems during web scraping; selecting the desired element from the website's source code was especially confusing. Furthermore, as I had to create my own database and algorithm, I ran into many logic errors that took great effort to fix. In the end, though, it all paid off!
Accomplishments that I'm proud of
I am very proud of successfully designing and implementing my own response-generation algorithm and building a fully functional exception-handling backend.
What I learned
I learned to use selenium-python for web scraping and to build a single program with three major cross-functioning libraries and APIs, namely pyttsx3, the Google speech-to-text API, and selenium-python.
What's next for COVID-19 InfoBot
The next stage is to merge the COVID-19