Inspiration
We wanted a way to reprimand cyberbullies and other harassers without forcing the targets of their abuse to get involved.
What it does
Conscience receives and responds to SMS and MMS messages. If the message contains abusive, rude, or otherwise inappropriate content, our app responds with a firm reprimand. This goes for crude text as well as unsolicited graphic images.
How we built it
We used Twilio to receive messages for our demo and as a means of responding to perpetrators. We parse each message's text with the Bark.us partner API to determine whether it is abusive, and we use Clarifai's NSFW model to determine whether received images are graphic or inappropriate.
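A minimal sketch of that flow, with the classifier calls injected as plain callables so the logic runs without credentials or network access (in the demo they wrap the Bark.us and Clarifai HTTP APIs); the reply text here is illustrative:

```python
# Illustrative reprimand text, not the exact copy from the demo.
REPRIMAND = ("This message was flagged as abusive. Think about the person "
             "on the other end before you hit send.")

def handle_message(body, media_url, text_is_abusive, image_is_nsfw):
    """Return a reprimand if the SMS/MMS should be flagged, else None.

    text_is_abusive and image_is_nsfw are injected classifier callables,
    standing in for the Bark.us text check and Clarifai's NSFW model.
    """
    if body and text_is_abusive(body):
        return REPRIMAND
    if media_url and image_is_nsfw(media_url):
        return REPRIMAND
    return None

def build_twiml(reply):
    """Wrap an optional reply in the TwiML document Twilio expects back
    from an SMS webhook."""
    if reply is None:
        return "<Response></Response>"
    return "<Response><Message>{}</Message></Response>".format(reply)
```

A Twilio webhook would call `handle_message` with the `Body` and `MediaUrl0` fields of the incoming request and return the TwiML as its HTTP response.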
Challenges we ran into
We started this project with a lot of ambition - we wanted to make a service that intercepts incoming messages and offers the user a chance to accept or reject each one, triggering the chiding response upon rejection. We settled on implementing a custom messaging app to accomplish this, but with our team's limited Android development experience and the looming deadline, we scaled back to a proof of concept instead.
Accomplishments that we're proud of
We’ve created a bot that can respond to and appropriately reprimand inappropriate communications. Lack of accountability is a major factor perpetuating online harassment, and while a message from their conscience likely won’t stop an abuser from picking different targets, we believe it can give them pause, all while sparing their would-be victims the burden of responding themselves.
What we learned
We improved our skills at combining various APIs and platforms into a cohesive product. We developed a knack for mimicking curl requests in Python. We also learned our limits - for one of us, picking up Java in 36 hours proved infeasible.
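The curl-to-Python translation we kept doing looks roughly like this; the endpoint, token, and payload are illustrative placeholders, not a real partner-API call:

```python
# The curl command being translated (illustrative):
#
#   curl -X POST https://api.example.com/v1/analyze \
#        -H "Authorization: Bearer $TOKEN" \
#        -H "Content-Type: application/json" \
#        -d '{"text": "message to check"}'

def curl_to_requests_kwargs(url, token, payload):
    """Map the curl flags above onto requests.post() keyword arguments:
    each -H header becomes a headers= entry, and the -d JSON body
    becomes json=."""
    return {
        "url": url,
        "headers": {
            "Authorization": "Bearer {}".format(token),
            "Content-Type": "application/json",
        },
        "json": payload,
    }

# Firing it would then be:
# requests.post(**curl_to_requests_kwargs(
#     "https://api.example.com/v1/analyze", token, {"text": "message to check"}))
```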
What's next for Conscience
We’d love to develop this concept into a full-fledged messaging app, or pitch it to various messaging service providers. With the ability to prompt a user to block incoming harassing messages, or to block an abuser’s number entirely, we believe this kind of application of image recognition and text analysis services could effectively shield users from unwanted interactions. By simultaneously reprimanding harassers, we believe we can reduce future incidents of harassment.