We heard the story of a survivor of an abusive relationship and how she couldn't see the warning signs of emotional abuse early on. We wondered if we could build a tool to catch those warning signs up front.
What it does
The web app would take in text directly, or transcribe an image or audio clip into text, and classify it as abusive or not abusive. The result would be shown to the user along with a graph of the probabilities for each category (abusive/not abusive).
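As a rough illustration of the classification step, a TF-IDF plus logistic-regression pipeline can turn text into per-category probabilities like the ones graphed in the app. This is only a sketch: the pipeline choice, the `classify` helper, and the toy examples below are assumptions, not the model the team actually trained.

```python
# Sketch of a binary abusive/not-abusive text classifier.
# The model choice (TF-IDF + logistic regression) and the toy
# training data are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder examples, not a real training dataset.
texts = [
    "you are worthless and nobody would believe you",
    "you can't do anything right without me",
    "thanks for dinner, it was lovely",
    "see you at the meeting tomorrow",
]
labels = ["abusive", "not abusive", "abusive", "not abusive"][:]
labels = ["abusive", "abusive", "not abusive", "not abusive"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def classify(text: str) -> dict:
    """Return a probability per category, as shown in the app's graph."""
    probs = model.predict_proba([text])[0]
    return dict(zip(model.classes_, probs))

print(classify("you never do anything right"))
```

With a real labeled dataset, `classify` would feed the probability pair straight into the result graph shown to the user.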
How we built it
Challenges we ran into
In the end, we couldn't bring our separate work together into a final web app. We had all the pieces we needed; all that remained was to connect them, but the Apache web server with mod_wsgi that we hoped would cooperate with Flask did the opposite. We also ran into assorted issues, including AWS SDK problems, PyCharm not working, and long model training times, all of which delayed our progress.
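For context on the Apache/mod_wsgi hurdle: mod_wsgi serves a Flask app by looking up a module-level object named `application`. A minimal sketch of what the deployable app could look like is below; the route name and placeholder response are assumptions, since the real app would call the trained classifier.

```python
# Minimal Flask app shaped for an Apache/mod_wsgi deployment.
# The /classify route and its placeholder result are assumptions
# for illustration; the real app would invoke the trained model.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify_route():
    text = request.form.get("text", "")
    # Placeholder probabilities; a real handler would run the model here.
    result = {"abusive": 0.5, "not abusive": 0.5}
    return jsonify(result)

# mod_wsgi expects to find a module-level name "application"
# in the .wsgi entry-point file it is configured to load.
application = app
```

Getting Apache to load the right `.wsgi` file and Python environment is exactly the kind of wiring that can eat a hackathon weekend.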
Accomplishments that we're proud of
Although we didn't realize a cohesive web app, we are extremely proud of the components we built and of how well we divided the work amongst ourselves.
What we learned
What's next for Abusive Language Analyzer
- Make it a functioning web app.
- Add links to resources and articles on recognizing emotional abuse when a person receives their results.
- Train a more nuanced model that can be personalized.
- Incorporate a chat bot.
- Extract important keywords and topics.