Our original idea was to use sentiment analysis to determine whether Twitter users were bots, but we soon realized we could put the same technology to a much better purpose: helping to make the internet a friendlier place.
What it does
ConTroll searches through every tweet that mentions the current user and runs each one through Microsoft's Azure Machine Learning Sentiment Analysis and IBM's Watson AlchemyLanguage API, as well as our own algorithm, to determine whether the tweet is offensive. If a tweet is judged offensive, its author is added to a temporary list; once every tweet has been analyzed, the current user can block all of those trolls with one click. (And if anyone gets blocked by accident, there is a feature to unblock the most recent batch.)
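The flow above can be sketched roughly as follows. The helper names and the 0.3 threshold are illustrative assumptions, not ConTroll's actual code, and the placeholder scorer stands in for the real Azure and Watson API calls:

```python
# Sketch of ConTroll's troll-detection flow (hypothetical helper names;
# the real app calls Azure Sentiment Analysis and Watson AlchemyLanguage).

def average_sentiment(text, scorers):
    """Average the 0..1 sentiment scores returned by each service."""
    scores = [score(text) for score in scorers]
    return sum(scores) / len(scores)

def collect_trolls(mentions, scorers, threshold=0.3):
    """Return the authors of mentions whose averaged sentiment falls
    below the threshold, i.e. the candidates for one-click blocking."""
    trolls = set()
    for author, text in mentions:
        if average_sentiment(text, scorers) < threshold:
            trolls.add(author)
    return trolls

# Placeholder scorer standing in for the real sentiment APIs.
def fake_scorer(text):
    return 0.1 if "terrible" in text else 0.9

mentions = [("@troll", "you are terrible"), ("@friend", "great job!")]
print(collect_trolls(mentions, [fake_scorer]))  # {'@troll'}
```

Keeping the offenders in a set until the user confirms is what makes the one-click block (and batch unblock) straightforward.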
How we built it
Challenges we ran into
One major issue was that the sentiment analysis gave low sentiment ratings to empathetic statements such as "I'm sorry for you" or "That sucks." This wasn't the desired behavior for ConTroll, so we had to implement our own algorithm on top of the sentiment analysis to increase the app's accuracy.
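One way to picture the workaround is as a post-processing step over the raw API score. The phrase list and the boost value below are illustrative assumptions, not ConTroll's actual algorithm:

```python
# Illustrative post-processing step: raise the sentiment score of tweets
# that match empathetic phrases, so "I'm sorry for you" isn't flagged.
EMPATHY_PHRASES = ["i'm sorry", "sorry for you", "that sucks"]  # example list

def adjusted_sentiment(text, raw_score, boost=0.5):
    """Return the raw API score, boosted if the text reads as empathy."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in EMPATHY_PHRASES):
        return min(1.0, raw_score + boost)
    return raw_score
```

A rule layer like this lets the app correct systematic API misreadings without retraining any model.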
Also, we had a pretty major merge conflict and had to manually re-add a lot of code at 8am, so that was challenging too.
Accomplishments that we're proud of
For one, actually finishing an app during a hackathon is an accomplishment in itself, so we're proud that we finished at all. We're also proud of increasing the accuracy of the sentiment analysis with our own algorithms, and, arguably most important, we're very, very proud of our loading animation.
What we learned
We learned how to use sentiment analysis to analyze text, as well as how to use Flask as a framework for web apps.
What's next for ConTroll
Allow users to change the sensitivity threshold of the algorithm and to add custom blacklisted words.
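Those planned features might look something like the sketch below; the class name, defaults, and word-matching rule are all hypothetical, since this part isn't built yet:

```python
# Hypothetical sketch of the planned user-configurable filter:
# a tunable sensitivity threshold plus a custom word blacklist.
class TrollFilter:
    def __init__(self, threshold=0.3, blacklist=None):
        self.threshold = threshold  # user-tunable sensitivity
        self.blacklist = {w.lower() for w in (blacklist or [])}

    def is_offensive(self, text, sentiment_score):
        words = text.lower().split()
        # A blacklisted word flags the tweet regardless of sentiment.
        if any(word in self.blacklist for word in words):
            return True
        return sentiment_score < self.threshold
```

Raising the threshold would make blocking more aggressive, while the blacklist catches words the user personally never wants to see.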