We heard the story of a woman who survived an abusive husband and couldn't see the warning signs of emotional abuse early on. We wondered whether we could build a tool to catch those warning signs up front.

What it does

The web app would accept text, or transcribe an image or audio clip into text, then classify the text as abusive or not abusive. The result would be shown to the user along with a graph of the probability of each category (abusive/not abusive).
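
The intended flow can be sketched as a small dispatch pipeline. Everything below is an illustrative stand-in: the transcription helpers and the keyword-based classifier are placeholders for the real components (the AWS transcription code and the fastText model), not the actual implementation.

```python
# Sketch of the intended request flow: normalize any input to text,
# then classify it and return per-category probabilities.
# All functions here are placeholder stand-ins for the real components.

def transcribe_image(image_bytes):
    # Placeholder: the real project used image-to-text transcription on EC2.
    raise NotImplementedError

def transcribe_audio(audio_bytes):
    # Placeholder: the real project used audio-to-text transcription on EC2.
    raise NotImplementedError

def classify(text):
    # Placeholder scoring by keyword; the real project used a fastText
    # model with logistic-regression classification.
    abusive_cues = {"worthless", "stupid", "hate"}
    hits = sum(word in abusive_cues for word in text.lower().split())
    p = min(0.99, 0.2 + 0.3 * hits)
    return {"abusive": p, "not_abusive": round(1.0 - p, 2)}

def analyze(payload, kind="text"):
    """Normalize input to text, then return category probabilities."""
    if kind == "image":
        text = transcribe_image(payload)
    elif kind == "audio":
        text = transcribe_audio(payload)
    else:
        text = payload
    return classify(text)

print(analyze("you are worthless"))
```

The returned dictionary maps directly onto the two-bar probability graph shown to the user.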

How we built it

I used Python and Jupyter notebooks, on my own machine and on a more powerful AWS EC2 compute instance, to clean the data for a fastText unsupervised model with logistic-regression classification. Linda wrote Python code on the EC2 instance to transcribe images and audio into text, and also set up the EC2 resource. Wayne combined JavaScript, HTML, CSS, and Python into a web app built on the Flask microframework, testing his work with CodePen and PyCharm.
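
The classification step can be sketched as follows. Our real pipeline used fastText word vectors before the logistic-regression step; to keep this sketch dependency-free, it substitutes bag-of-words counts and a hand-rolled logistic regression, and the toy training texts are made up for illustration.

```python
# Simplified, self-contained stand-in for our text classifier:
# bag-of-words features plus logistic regression trained by
# stochastic gradient descent (the real project used fastText vectors).
import math

def featurize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, vocab, lr=0.5, epochs=200):
    """Fit logistic-regression weights with per-sample gradient steps."""
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for text, y in zip(samples, labels):
            x = featurize(text, vocab)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_proba(text, w, b, vocab):
    """Return a probability for each category, as shown in the app's graph."""
    p = sigmoid(sum(wi * xi for wi, xi in
                    zip(w, featurize(text, vocab))) + b)
    return {"not_abusive": 1.0 - p, "abusive": p}

# Toy data for illustration only; the real training corpus was cleaned
# in Jupyter notebooks on EC2.
texts = ["you are worthless and stupid", "have a great day friend",
         "nobody will ever love you", "thanks for the lovely dinner"]
labels = [1, 0, 1, 0]  # 1 = abusive
vocab = sorted({word for t in texts for word in t.lower().split()})
w, b = train(texts, labels, vocab)
print(predict_proba("you are stupid", w, b, vocab))
```

The two-entry probability dictionary is exactly the shape of data the front end would chart for the user.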

Challenges we ran into

In the end, we couldn't bring our separate work together into a final web app. We had all the pieces we needed; all that remained was to put them together, but our Apache web server with mod_wsgi, which we had hoped would cooperate with Flask, did the opposite. We also ran into assorted issues, such as AWS SDK problems, PyCharm not working, and long training times, that all delayed our progress.
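
For reference, a typical Apache + mod_wsgi + Flask pairing looks roughly like the following. All paths and names here are illustrative, not our actual configuration.

```python
# wsgi.py — illustrative WSGI entry point; mod_wsgi imports the module
# and looks for a callable named "application".
from app import app as application  # "app" is the Flask instance in app.py
```

```apache
# Illustrative Apache virtual host wiring mod_wsgi to the Flask app.
<VirtualHost *:80>
    ServerName example.com
    WSGIDaemonProcess analyzer python-home=/srv/analyzer/venv
    WSGIScriptAlias / /srv/analyzer/wsgi.py
    <Directory /srv/analyzer>
        WSGIProcessGroup analyzer
        WSGIApplicationGroup %{GLOBAL}
        Require all granted
    </Directory>
</VirtualHost>
```

Running the Flask app in a mod_wsgi daemon process group like this avoids reloading the interpreter on every request, which is one common stumbling block in this kind of setup.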

Accomplishments that we're proud of

Although we didn't deliver a cohesive web app, we are extremely proud of the components we built and of how well we divided the work among ourselves.

What we learned

Flask, Python, JavaScript, teamwork, brainstorming, and dividing tasks to leverage strengths.

What's next for Abusive Language Analyzer

Make it a functioning web app. Alongside a person's results, add links to resources and articles on recognizing emotional abuse. Train a more nuanced model that can be personalized. Incorporate a chatbot. Extract important keywords and topics.
