Inspiration

Our project consists of three parts: a Customer Service Chatbot, an Informational Website, and Speech-to-Text Recognition. The end goal is to integrate these components into a complete customer service application. We were inspired by our dealings with old-school robocalls and chatbots seemingly constructed out of hundreds of if/else statements. The future of customer service is undoubtedly in AI, and our idea is to use stored data about user habits and demographics to personalize the chat experience.

As an example, consider a travel agency interested in providing an enjoyable experience to its customers. It has tabular data that stores users and identifies them by ID number. The chatbot queries this ID and then personalizes its responses based on that information.

Our speech-to-text component is currently set up to authenticate users on a call: it asks them to state their name and then checks it against the database.


What it does

  • The chatbot uses a system prompt that tells it to act as a customer service representative and a task prompt dictating how it should respond and where it should direct users based on the conversation. It also pulls user information into a custom user prompt and concatenates these with the chat history, giving it some memory of the conversation along with a directive on how to act.

  • The speech recognition model is trained with a text file containing all registered database entries so that it specifically detects the words within that file. It was constructed using the Azure Speech SDK in Python.

  • The website serves as an informative hub, providing project details without requiring any coding knowledge.
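The prompt assembly described in the first bullet can be sketched roughly as follows. The prompt wording, function name, and user-record fields here are illustrative assumptions, not our actual code:

```python
# Rough sketch of the prompt assembly: system prompt + task prompt +
# user info, concatenated with the chat history. All names and wording
# are illustrative, not the project's actual code.

SYSTEM_PROMPT = "You are a customer service representative for a travel agency."
TASK_PROMPT = (
    "Answer questions about bookings, suggest destinations, and direct "
    "billing issues to the billing department."
)

def build_messages(user, history, latest_message):
    """Build a message list for a chat-completion API call."""
    user_info = (
        f"The customer is {user['name']}, who enjoys {user['hobby']} "
        f"and previously travelled to {user['last_trip']}."
    )
    messages = [
        {"role": "system", "content": f"{SYSTEM_PROMPT} {TASK_PROMPT} {user_info}"}
    ]
    messages.extend(history)  # prior turns give the bot memory of the chat
    # The latest customer message goes last, so the model responds to it
    # rather than trying to continue it.
    messages.append({"role": "user", "content": latest_message})
    return messages
```

Keeping the personalization in the system message means every turn of the loop sees the customer's hobbies and trip history without repeating them in each user prompt.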


How we built it

  • The chatbot uses a series of Python scripts that form a chat loop with some OpenAI API calls. It first asks for an ID, then greets the customer before looping with a user prompt and a GPT response. The loop also accesses the user data to make responses more personal, like mentioning previous trips with the company or using hobbies to give destination suggestions.

  • The Speech Recognition model was coded in Python with calls to the Azure API. It first asks for the user's ID number. If the number cannot be found, it asks the user again for a valid ID number. If an ID number is found, it asks the user for their full name to confirm that it matches the system's entry. If the user passes both levels of verification, they can access the Bridgie system.

Example:

In this example, the user with ID number 1 is entered as "Bilbo Baggins" in the database:

REPRESENTATIVE: Please state your User ID number.

CUSTOMER: 1

REPRESENTATIVE: Please state your name to confirm your identity.

CUSTOMER: Bungo Baggins

REPRESENTATIVE: Name does not match. Please try again.

CUSTOMER: Bilbo Baggins

REPRESENTATIVE: Hello, Bilbo. Welcome to Totally Legit Travel Agency!
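The two-step check behind this dialogue can be sketched like so. The database is a plain dict here for illustration; the real system loads registered entries from a file and checks them against speech-to-text results:

```python
# Sketch of the two-step verification flow shown above (ID first, then name).
# A plain dict stands in for the database; in the project, entries come from
# a text file and names arrive via Azure speech-to-text.

USERS = {1: "Bilbo Baggins"}  # ID number -> registered full name

def verify_id(user_id):
    """Step 1: the stated ID must exist in the database."""
    return user_id in USERS

def verify_name(user_id, stated_name):
    """Step 2: the stated name must match the entry for that ID.
    Case and surrounding whitespace are ignored to be forgiving of
    transcription differences."""
    return USERS.get(user_id, "").lower() == stated_name.strip().lower()
```

With this setup, ID 1 with "Bungo Baggins" fails step 2 and triggers the "Please try again" reprompt, while "Bilbo Baggins" passes both checks.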

  • We developed the website using Notion and Canva, resulting in an accessible learning resource. During construction we encountered challenges with content placement and the time-consuming task of creating the graphic design elements.
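The chat loop from the first bullet can be approximated as below. The `respond` callable stands in for the OpenAI API call, and all names are illustrative rather than our actual code:

```python
# Sketch of the chat loop: greet the customer, then alternate between
# customer input and model responses while accumulating history.
# `respond`, `get_input`, and `say` are injected so the loop itself
# contains no API or I/O specifics; names are illustrative.

def chat_loop(respond, user, get_input, say, max_turns=10):
    """Run a bounded chat session and return the accumulated history."""
    history = []
    say(f"Hello {user['name']}! How can I help you today?")
    for _ in range(max_turns):
        message = get_input()
        if message.lower() in {"quit", "bye"}:
            break
        history.append({"role": "user", "content": message})
        reply = respond(user, history)  # e.g. an OpenAI chat-completion call
        history.append({"role": "assistant", "content": reply})
        say(reply)
    return history
```

Passing `respond` in as a parameter also makes the loop testable without network access, since a canned function can stand in for the API.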

Challenges we ran into

  • The hardest part of the chatbot was engineering the prompts in a way that led to predictable results. A problem occurred when we created the system prompt by concatenating the aforementioned prompts: the chatbot would try to finish the customer's statement instead of responding to it, so we had to add a break and a final instruction to respond to the last customer message.
  • A challenge we had with the Azure Speech SDK was that one of our laptops lacked a built-in microphone, so we had to use a webcam or ask another team member to run tests with the code.

Accomplishments that we're proud of

Although we didn't end up combining the various components into one full-stack application, each component works well on its own, and we consider them a success. Our chatbot does in fact personalize its statements, and our speech recognition reliably picks up names and checks them against our list.


What we learned

We came away from this project with the understanding that the planning stage is the most important. Near the end of development we discovered some tools that would have been useful earlier on, but due to time constraints we couldn't work with them. Each of us got to explore and learn about some powerful tools and came away as better developers.


What's next for CCBridge

Looking ahead, our goal is to leverage a full-stack framework for the project's future development. We plan to dedicate the next few months to learning essential technologies, including JavaScript, Tailwind CSS, React, Node.js, and MongoDB. These skills will enable us to enhance functionality and ensure effective programming. We can also work to add robocall functionality with the speech-to-text component, and add text-to-speech as well.
