How our program works: Our program uses a Scanner object to read the user's comment from the console. Once the user has typed their response, we compare it against the list of hateful, offensive, and explicit language stored in our document, using a for loop and a while loop to iterate through it. If the user's comment includes any of the words in our document, we return a message saying it contains inappropriate/explicit language. If it doesn't include anything harmful, our program returns a message saying the user's comment is okay.
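The check described above can be sketched roughly as follows. This is a minimal sketch, not the actual project code: the class and method names (`CommentFilter`, `containsBannedWord`) and the sample word list are hypothetical, and the real program reads its word list from a txt file rather than hard-coding it.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class CommentFilter {
    // Returns true if any word of the comment appears in the banned-word set.
    // The original program loads the banned words from a document on disk;
    // here the set is passed in directly to keep the sketch self-contained.
    static boolean containsBannedWord(String comment, Set<String> banned) {
        // Lowercase and split on non-word characters so punctuation and
        // capitalization don't hide a match.
        for (String word : comment.toLowerCase(Locale.ROOT).split("\\W+")) {
            if (banned.contains(word)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Placeholder entries standing in for the real word list.
        Set<String> banned = new HashSet<>(Arrays.asList("badword", "slur"));
        System.out.println(containsBannedWord("this has a badword in it", banned)); // true
        System.out.println(containsBannedWord("a perfectly fine comment", banned)); // false
    }
}
```

In the real program, the comment would come from `new Scanner(System.in)` and the set would be filled by reading the word-list file line by line.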

How we built our program: We built it using Java, HTML, and CSS.

Challenges: It was difficult to create an algorithm that successfully detected hateful speech, especially since we built it from scratch (no APIs or other helpers). We also had many issues finding the right platform to connect the HTML front end to the Java backend (we tried a variety of approaches, including VSC, Eclipse, and a servlet).

What we learned: I/O processing in Java (reading and writing to txt files), and more HTML and CSS (advanced styling to make our website cohesive and user-friendly).
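The txt-file I/O mentioned above can be done with `java.nio.file.Files`. This is an illustrative sketch rather than the project's code; the file name `words.txt` is a placeholder.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FileIoDemo {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("words.txt"); // placeholder file name

        // Writing: each list element becomes one line of the txt file.
        Files.write(file, List.of("alpha", "beta"));

        // Reading: get the lines back as a list of strings.
        List<String> lines = Files.readAllLines(file);
        System.out.println(lines.size()); // 2

        Files.deleteIfExists(file); // clean up the demo file
    }
}
```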
