Inspiration

Our group is very aware of the racism and sexism that occurs around the world every day, and that awareness inspired this project. We created a bot that can help detect racism and sexism in the subtle nuances of people's texts. The model can also help keep us in check, flagging anything racist or sexist before we say it to others. We hope it can help make the world safer and more inclusive for people of all colors and genders.

What it does

Our model reads in raw text as input and gives an overall analysis of whether it is racist, sexist, or neither. It then analyzes each individual sentence in the input and assigns it the same three-way classification. Finally, it visualizes the sentences on a timeline graph, showing the model's confidence in the racism and sexism labels over the course of the text.
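The flow above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code: the tiny placeholder corpus stands in for the real labeled racism/sexism dataset, and the `analyze` helper is a hypothetical name.

```python
# Sketch of the per-sentence classification pipeline: overall label first,
# then one confidence score per sentence for the timeline graph.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus -- stands in for the real labeled dataset.
train_texts = [
    "placeholder example one", "placeholder example two",
    "placeholder example three", "placeholder example four",
    "placeholder example five", "placeholder example six",
]
train_labels = ["neither", "racist", "sexist", "neither", "racist", "sexist"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

def analyze(raw_text):
    """Classify the full text, then each sentence; the per-sentence
    probabilities are what the timeline graph plots over time."""
    overall = model.predict([raw_text])[0]
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", raw_text.strip()) if s]
    timeline = [dict(zip(model.classes_, model.predict_proba([s])[0]))
                for s in sentences]
    return overall, sentences, timeline

overall, sentences, timeline = analyze(
    "Placeholder example one. Placeholder example two.")
```

Each entry in `timeline` maps the three labels to probabilities, so plotting the "racist" and "sexist" values per sentence index gives the confidence-over-time graph.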

How we built it

We built the application with Flask, Python, HTML with Bootstrap, pandas, scikit-learn, matplotlib, and Heroku. Flask served as the micro web framework connecting the front end to the back end, with the website pages written in HTML. Python powered the back end: pandas for data manipulation, scikit-learn for machine learning, and matplotlib for data visualization. Heroku hosted the application in the cloud.
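The wiring between these pieces might look like the minimal sketch below. The route name, form field, and `classify_text` helper are all illustrative assumptions, not the project's actual code; the stub stands in for the scikit-learn model call.

```python
# Minimal Flask wiring: an endpoint that accepts raw text from the
# HTML form and returns the model's overall classification.
from flask import Flask, request

app = Flask(__name__)

def classify_text(text):
    # Stub standing in for the scikit-learn model's prediction.
    return "neither"

@app.route("/analyze", methods=["POST"])
def analyze():
    text = request.form.get("text", "")
    # Flask serializes the returned dict to a JSON response.
    return {"overall": classify_text(text)}

# Exercise the route without starting a server.
client = app.test_client()
resp = client.post("/analyze", data={"text": "example input"})
```

On Heroku, an app like this is typically run under a WSGI server such as gunicorn rather than Flask's built-in development server.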

Challenges we ran into

The challenges we ran into included cleaning the data, finding good datasets, and relearning Python packages.

Accomplishments that we're proud of

We are proud that our model can successfully detect racism and sexism with good accuracy and precision.

What we learned

We learned that finding datasets for niche projects can be very challenging. We also gained more experience working in a group.

What's next for Prejudice Detection with Natural Language Processing

In the future, we plan to gather more data, labels, and classifications for our model. We could also extend it to detect other forms of prejudice, such as homophobia.
