Inspiration

In countries where cyberbullying, cyber harassment, and online threats are rampant, these activities often go unnoticed. If the trajectory continues as it is, victims may struggle mentally or, in the worst cases, lose their lives, often to suicide. This inspired me to build a chat app where strict action is taken against this kind of abusive behavior.

What it does

All chat records are stored in a database (DynamoDB) via API Gateway (WebSocket) and Lambda. From there, SageMaker analyzes the message patterns with a machine learning model, scoring each message's threat level to identify abusive users.
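The per-message scoring idea can be sketched as follows. This is a toy keyword-based scorer standing in for the SageMaker model, purely illustrative: the word list, weights, and threshold are invented for the example, and the real system learns its scores from a trained model rather than a lookup table.

```javascript
// Toy stand-in for the SageMaker scorer: weighted keywords (assumed
// values, not the real model's output).
const THREAT_WEIGHTS = {
  kill: 5,
  hurt: 3,
  stupid: 2,
  hate: 2,
};

function scoreMessage(text) {
  // Sum the weight of every flagged word found in the message.
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  return words.reduce((score, w) => score + (THREAT_WEIGHTS[w] || 0), 0);
}

function isAbusive(text, threshold = 4) {
  // A message is flagged once its cumulative score crosses the threshold.
  return scoreMessage(text) >= threshold;
}
```

In the real pipeline, the same role is played by the deployed SageMaker endpoint, which returns a learned score per message instead of a keyword sum.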

How I built it

First, the application was developed with JavaScript, HTML, and CSS and deployed to S3. Then an API Gateway WebSocket API and a Lambda function were created to establish a real-time channel between the two end users. DynamoDB was then used to store the chat records (user ID, message, sender, receiver, timestamp, etc.). Next, SageMaker was set up to train and deploy a machine learning model that learns abusive behavior. Finally, to invoke SageMaker in real time, another Lambda function with a function URL was created, triggered by the S3 bucket.
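Inside the message-handling Lambda, each incoming WebSocket message has to be shaped into a DynamoDB item before being written. The sketch below shows one plausible item layout; the attribute names (`chatId`, `senderId`, and so on) and the key design are assumptions for illustration, not the project's actual schema, and in the real handler this object would be passed to the AWS SDK's `PutItem` call.

```javascript
// Sketch: map one chat message onto a DynamoDB item using the
// low-level attribute-value format ({ S: ... } for strings,
// { N: ... } for numbers). Attribute names are hypothetical.
function buildChatItem({ chatId, senderId, receiverId, text }) {
  return {
    chatId:     { S: chatId },               // assumed partition key
    timestamp:  { N: String(Date.now()) },   // assumed sort key
    senderId:   { S: senderId },
    receiverId: { S: receiverId },
    message:    { S: text },
  };
}
```

Keying items by chat and timestamp would let SageMaker-side analysis pull a user's message history in order, which is what pattern analysis over time needs.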

Challenges I ran into

Since it was my first time working with machine learning and AWS backend development, I had a hard time getting up to speed. Much of my code crashed repeatedly at runtime, and it took me days to fix.

Accomplishments that I am proud of

I got to learn much more about AWS resources and what each of them does. I am also happy to have built my own machine learning model, as making one was a lifelong dream of mine.

What I learned

It was a great opportunity to expose myself to the hybrid environment of machine learning and cloud resources. It enhanced my cloud skills and my knowledge of various AWS services, as well as of machine learning.

What's next for Abusive user detecting system

I am planning to create an advanced version in which the system can detect brutal messages and also predict, in real time, how a message will continue as it is being typed.

Built With

JavaScript, HTML, CSS, Amazon S3, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon SageMaker
