Inspiration
Due to the ongoing COVID-19 pandemic, governments have enacted six-foot social distancing guidelines to slow the spread of the virus. One consequence of this policy is that a single waiting room can now hold only a few family units at a time. Many businesses and institutions have therefore experimented with outdoor seating/waiting areas so that more people can spread out. This works, but it is not practical for businesses with a moderate amount of customer traffic. We decided to engineer a solution to this scheduling problem: a chat bot that users can text to schedule appointments with businesses and institutions.
What it does
Our specific example targets a doctor's office, a high-impact area where reducing waiting times matters most, but since we already have a framework, creating another bot for a different business, like a salon, is a very manageable task. The doctor first logs in with their Google account to allow our app to access their Google Calendar. Users can then make requests to the bot (in plain English!) by texting our Telegram bot. The bot performs tasks like scheduling and removing appointments, and replies with intelligent responses by interpreting what the user said.
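As a sketch of what "scheduling an appointment" looks like on the backend, here is how an event body for the Google Calendar API can be assembled once the bot has extracted a time from the user's message. The helper name and time zone are illustrative, not from our actual code:

```python
from datetime import datetime, timedelta

def build_event(summary, start, duration_minutes=30, timezone="America/New_York"):
    """Build a Google Calendar event body dict (hypothetical helper)."""
    end = start + timedelta(minutes=duration_minutes)
    return {
        "summary": summary,
        "start": {"dateTime": start.isoformat(), "timeZone": timezone},
        "end": {"dateTime": end.isoformat(), "timeZone": timezone},
    }

event = build_event("Checkup - Jane Doe", datetime(2020, 10, 16, 14, 0))
# With an authorized client, the official google-api-python-client library
# would send this body with something like:
#   service.events().insert(calendarId="primary", body=event).execute()
```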
How we built it
Each of us developed on a Google Cloud virtual machine (over SSH).
Using a messaging platform like Telegram for our chatbot means users don't have to search online and navigate a website; they just use a simple search bar to start talking to our bot. With machine learning and natural language processing, we can interpret what the user is saying as reliably as if they had pressed a button. Our scalable Kubernetes backend handles the requests and creates an appointment on the doctor's Google Calendar (using Google's Calendar API), simplifying both the patient and doctor experience. More technical details about our exact technology choices are in the next paragraph.
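To give a flavor of how the natural language processing is taught what users mean, Rasa's YAML training-data format pairs example utterances with intents; entities like dates can be annotated inline. This is an illustrative sketch (intent names and examples are hypothetical, and older Rasa versions use a Markdown format instead):

```yaml
version: "2.0"
nlu:
- intent: schedule_appointment
  examples: |
    - I'd like to book an appointment
    - can you schedule me for next Tuesday
    - I need to see the doctor tomorrow morning
- intent: cancel_appointment
  examples: |
    - cancel my appointment
    - I can't make my visit on Friday
```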
Rasa, the open-source technology we use for natural language processing, works through several servers that handle different operations: entity extraction, training, and choosing actions. This meant our backend required several Docker containers running at once and communicating with each other through a specific port setup. At first, I used a docker-compose.yml as the specification for all of these containers. Since I was already familiar with how Docker networks work, there were few issues in creating this file and getting it to run properly. What makes docker-compose worse than Kubernetes for our use case is that it is not self-healing: sometimes one of our moving components would unexpectedly fail, and with docker-compose this would bring the whole system down. In contrast, Kubernetes detects when a service fails and restarts it. Furthermore, Kubernetes lets us support more users when we expand in the future, since it can also allocate more replicas (copies of these groups of Docker containers) when the bot is handling high traffic.
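The self-healing and scaling behavior comes from running each service as a Kubernetes Deployment. A minimal sketch for the Rasa server, assuming hypothetical names and a standard liveness probe (Kubernetes restarts the container if the probe fails, and raising `replicas` handles more traffic):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rasa-server          # hypothetical name
spec:
  replicas: 1                # raise under high traffic
  selector:
    matchLabels:
      app: rasa
  template:
    metadata:
      labels:
        app: rasa
    spec:
      containers:
      - name: rasa
        image: rasa/rasa:1.10.0
        ports:
        - containerPort: 5005
        livenessProbe:       # lets Kubernetes detect and restart a hung server
          httpGet:
            path: /
            port: 5005
          initialDelaySeconds: 30
```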
Challenges we ran into
Before this hackathon, none of us had ever used Kubernetes. I (Vlad) had read articles explaining how it works at a high level, but had never built anything with it. In hindsight, this was an excellent opportunity to learn, even with all the bugs and issues I had to work through. The most difficult part was mounting multiple volumes into one service while trying to give nginx access to both my default.conf configuration file and my SSL certificate, all because the Telegram API only accepts requests over HTTPS connections. After searching Google extensively, I found a Stack Overflow answer linking to documentation showing that my method would not work and that I needed to change the way I mounted my files. The whole night was filled with these sorts of obscure bugs, because the code I was writing was so specific to my application.
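For anyone hitting the same wall: one common way to mount a single config file plus a certificate into an nginx container is a ConfigMap mounted with `subPath` (so it doesn't shadow the whole directory) alongside a Secret. A sketch with hypothetical resource names:

```yaml
containers:
- name: nginx
  image: nginx:1.19
  volumeMounts:
  - name: nginx-conf
    mountPath: /etc/nginx/conf.d/default.conf
    subPath: default.conf    # mount just the file, not over the directory
  - name: tls-cert
    mountPath: /etc/nginx/certs
    readOnly: true
volumes:
- name: nginx-conf
  configMap:
    name: nginx-conf         # hypothetical ConfigMap holding default.conf
- name: tls-cert
  secret:
    secretName: telegram-tls # hypothetical Secret with the cert and key
```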
Accomplishments that we are proud of
- Not Dying
- Learning about the steps in natural language processing through an abstraction (rasa)
- Using Kubernetes to create a somewhat complex backend setup (4 services in total)
- Working prototype that can be generalized
What we learned
- Google Calendar API
- Telegram API
- Solid development pipeline for connecting ML code to a backend which can serve it
- and a lot more...
What's next for TenBot
We hope to adapt it for a few more examples and offer it to local businesses for free to gain valuable feedback. Then we want to automate the process of creating these bots by parsing business website HTML, using predefined input fields, and finding other ways to simplify and abstract the process of making a TenBot.
Built With
- docker
- duckling
- kubernetes
- nginx
- python
- rasa
- yaml