Inspiration

We wanted to build a life hack: whenever we call a service or amenities department such as telephone, electricity, insurance, or banking, we run into an IVR (Interactive Voice Response) system, which is time consuming. If we take the other route and write an email instead, the response time is typically around 48 hours. We wanted to cut this delay and get in touch with the concerned team or department much faster.

What it does

The project aims to ease the load on Customer Care operators by letting them take up requests from various sources in different formats (text, image, email) and then segregating the requests and queries by their nature. Customers can approach Customer Support in two ways: through a telephonic conversation or through email. If the customer calls, the support team can manually submit the customer's concerns; otherwise the application reads the emails in the official customer-care email id _(crmcustomercare83@gmail.com)_. After processing an incoming request or query from email or a phone call, the application categorizes it under a department head using the Machine Learning model "Mphasis DeepInsights Keyphrase Extractor" and the AI service Amazon Textract. The extracted keyphrases are compared with the specializations of the sales representatives, and based on the matching specialization a representative is allocated to the customer's query. An email containing the customer's name, email, and request or query is then sent to the allocated sales representative.
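As a rough illustration of the allocation step, here is a minimal sketch of how extracted keyphrases could be compared with representatives' specializations; the function, field names, and scoring are hypothetical, not our actual code.

```python
# Hypothetical sketch of the allocation step: score each sales representative
# by how many extracted keyphrases mention one of their specializations.
def assign_representative(keyphrases, representatives):
    """representatives: list of dicts like
    {"name": "A. Kumar", "email": "a.kumar@example.com",
     "specializations": ["billing", "insurance claim"]}"""
    phrases = [p.lower() for p in keyphrases]
    best_rep, best_score = None, 0
    for rep in representatives:
        specs = [s.lower() for s in rep["specializations"]]
        # "success score": number of keyphrases that contain a specialization keyword
        score = sum(1 for p in phrases if any(s in p for s in specs))
        if score > best_score:
            best_rep, best_score = rep, score
    return best_rep  # None means no specialization matched the query
```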

How we built it

To build this application we used the Eclipse IDE, where we wrote our Java code and created our JSF web pages. We used the JavaMail API to read the unread emails in the given email id. After reading an email or fetching the input text, we first store it locally and then upload it to an S3 bucket. We created three S3 buckets:

  1. modelinputs (input .txt files for the model)
  2. imginputes (image inputs)
  3. modelresponse (the model's responses)

The text-input bucket triggers the Lambda function _(lambdawiths3service)_, written in Python 3.6 using boto3, which invokes the endpoint _(keyPhraseExtractor-Endpoint)_ of the Mphasis DeepInsights Keyphrase Extractor model. The model's output is stored in the _(modelresponse)_ bucket, from where it is downloaded locally, and the extracted keywords are compared with the specializations of the sales representatives in our database. Based on the success score an agent is assigned, who then receives an email containing the customer's name, email, and query. If the customer's request contains an attachment, the attachment is also sent to the sales representative.

The image-attachment bucket _(imginputes)_ triggers the Lambda function _(textractFunction)_, also written in Python 3.6 using boto3, which invokes the AI service Amazon Textract. Textract processes the image and produces a text file containing the paragraphs found in the image. This text file is placed back in the model's input bucket, which triggers the model again, and the same process repeats.

Challenges we ran into

We were new to the whole AWS ecosystem and also new to machine learning concepts.

Accomplishments that we're proud of

We worked on various components of AWS that we were previously unaware of. We also got a chance to work with new languages and used various APIs.

What we learned

  1. AWS Marketplace models
  2. Amazon SageMaker
  3. Amazon Textract
  4. Python libraries (boto3)
  5. JavaMail API for reading and sending email
  6. Amazon Simple Storage Service (S3)

What's next for Customer Service Automation

  1. Currently this process is driven manually by the operator, so we can add schedulers for reading email automatically.
  2. We can add a learning mechanism so that the application keeps improving itself over time.

