mxn220038, tao230002, rohan (UTD), ayman (UTD)

Inspiration

We were inspired by the increasing need to create safer online spaces and combat the rise of cyberbullying across digital platforms.

What it does

The system detects and classifies harmful messages, such as harassment, threats, or offensive language, in real time.

How we built it

We used s(CASP) for rule-based classification and integrated a large language model (LLM) to analyze text inputs for nuanced patterns. The LLM preprocesses and flags potentially harmful content, while the Prolog-based s(CASP) rules handle logical categorization against predefined criteria.
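The two-stage pipeline can be sketched in plain Python. Everything here is illustrative: the keyword sets, category names, and the `llm_flag` stand-in (which substitutes a keyword check for a real LLM call) are assumptions, not the project's actual rules.

```python
# Minimal sketch of the hybrid flag-then-categorize pipeline.
# All rule sets and category names below are illustrative assumptions.

HARM_KEYWORDS = {
    "threat": {"hurt", "kill", "destroy"},
    "harassment": {"loser", "idiot", "worthless"},
}

def llm_flag(message: str) -> bool:
    """Stand-in for the LLM preprocessing step: flag messages containing
    any known harmful keyword (a real LLM call would go here)."""
    words = set(message.lower().split())
    return any(words & kws for kws in HARM_KEYWORDS.values())

def classify(message: str) -> str:
    """Rule-based categorization, playing the role of the s(CASP) rules."""
    if not llm_flag(message):
        return "benign"
    words = set(message.lower().split())
    for category, kws in HARM_KEYWORDS.items():
        if words & kws:
            return category
    return "offensive"  # flagged, but no specific category matched
```

For example, `classify("i will hurt you")` returns `"threat"`, while `classify("have a nice day")` returns `"benign"`. In the actual system the second stage runs as logic rules in s(CASP) rather than Python conditionals, which makes each classification explainable via the rules that fired.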

Challenges we ran into

Designing accurate classification rules and getting Prolog and Python to communicate with each other effectively.
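One way to bridge the two languages, sketched below under assumptions: the rules plus a goal are written to a temporary `.pl` file and handed to an `scasp` executable over a subprocess pipe. The binary name and its presence on `PATH` are installation-dependent assumptions, not the project's confirmed setup.

```python
import subprocess
import tempfile

def build_program(rules: str, query: str) -> str:
    """Combine a rule base with a goal into one s(CASP) program text."""
    return rules + "\n?- " + query + ".\n"

def run_scasp(rules: str, query: str) -> str:
    """Write the program to a temp file and invoke the solver.
    Assumes an `scasp` executable is available on PATH."""
    with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
        f.write(build_program(rules, query))
        path = f.name
    out = subprocess.run(["scasp", path], capture_output=True, text=True)
    return out.stdout
```

For instance, `build_program("harmful(X) :- threat(X).", "harmful(msg1)")` yields a program ending in the goal `?- harmful(msg1).`, and the solver's stdout can then be parsed back on the Python side.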

Accomplishments that we're proud of

We built a system that classifies harmful content efficiently and is designed to scale toward real-world applications.

What we learned

We learned how to design effective rule-based systems, refine logical reasoning, and address challenges in text classification.

What's next for CyberbullyingDetector

Adding more reasoning criteria to improve detection accuracy.
