Inspiration

What it does

A chatbot that analyzes conversations to detect harmful or toxic behaviour using pattern-based keyword matching. It rates the severity of each message and identifies the participant who is being the most harmful or toxic.
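The keyword-matching approach described above could be sketched roughly as follows. This is an illustrative outline only: the keyword lists, category weights, and function names are assumptions, not the project's actual code.

```javascript
// Hypothetical keyword categories and severity weights (illustrative only).
const TOXIC_KEYWORDS = {
  insult: { words: ["idiot", "stupid", "loser"], weight: 1 },
  threat: { words: ["hurt you", "watch out"], weight: 3 },
};

// Score a single message by counting keyword hits, weighted by category.
function scoreMessage(text) {
  const lower = text.toLowerCase();
  let score = 0;
  for (const { words, weight } of Object.values(TOXIC_KEYWORDS)) {
    for (const w of words) {
      if (lower.includes(w)) score += weight;
    }
  }
  return score;
}

// Aggregate per-sender scores and flag the most toxic participant.
function analyseConversation(messages) {
  const totals = {};
  for (const { sender, text } of messages) {
    totals[sender] = (totals[sender] || 0) + scoreMessage(text);
  }
  const ranked = Object.entries(totals).sort((a, b) => b[1] - a[1]);
  return { totals, mostToxic: ranked.length ? ranked[0][0] : null };
}
```

A simple substring match like this misses sarcasm and context, which is why a future AI-based detector is mentioned below.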

How we built it

We built a front-end-only prototype with HTML, CSS, and JavaScript.

Challenges we ran into

Given the limited time, we could not go into much depth when designing the user interface.

Accomplishments that we're proud of

As first-time hackers, we spent a considerable amount of time understanding and integrating the front-end and back-end features. We also implemented our very own database from scratch with scalars.

What we learned

We learned a variety of new things, such as front-end languages and how to combine the components of the program into a single application.

What's next for Conversation Toxicity Analyser

We hope to make the user interface friendlier and more visually pleasing. Additionally, we hope to integrate AI to detect harmful content in the language and handle cases such as sarcasm and vague texts, while also working with a larger database.
