Mental health and online harassment are pressing issues in today's society. We believe everyone should be able to access content on the internet without experiencing discomfort or harassment of any kind.
What it does
We wanted to create a tool that lets users with specific traumatic experiences filter out sections of webpages containing offending content. The Chrome extension has a simple UI that allows users to choose which types of content they would like to avoid, along with a whitelist and a contact button.
On page load, content blocks are covered so that the user can still assess the context of the site before deciding to leave or stay. They can also click on covered sections to reveal them one at a time. The script searches through the entire DOM, looking for matching elements wherever they appear on the page. Sentiment analysis was implemented to further determine whether content was malicious.
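The matching step at the heart of that scan can be sketched as a pure function. In the extension itself, something like a TreeWalker over the DOM would feed each text node through it; the function name and word-boundary approach below are assumptions for illustration, not the project's actual code:

```javascript
// Sketch of the text-matching core (assumed design, not the project's code).
// Given a text node's content and the user's chosen trigger words, return
// the matched terms so the caller can decide whether to cover the block.
function findTriggers(text, triggers) {
  return triggers.filter((t) => {
    // Escape regex metacharacters, then require word boundaries so that,
    // e.g., a trigger "ass" does not flag the word "assess".
    const escaped = t.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    return new RegExp(`\\b${escaped}\\b`, "i").test(text);
  });
}

// In the extension, a TreeWalker over document.body would visit each text
// node and cover the nearest block ancestor when findTriggers() is non-empty.
```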
The scripts also register a MutationObserver, so incoming content is actively filtered as it arrives (e.g., Facebook chat, Twitter feeds).
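The observer wiring might look like the sketch below. The `collectTextyNodes` and `scanAndCover` helper names are assumptions; `scanAndCover` stands in for rerunning the same trigger matching used on page load:

```javascript
// Sketch of the mutation-observer wiring (helper names are assumptions).
// Newly added nodes are scanned as they arrive, so live feeds stay filtered.

function collectTextyNodes(addedNodes) {
  // Pure helper: keep only nodes that carry visible text worth scanning.
  return Array.from(addedNodes).filter(
    (n) => typeof n.textContent === "string" && n.textContent.trim().length > 0
  );
}

if (typeof MutationObserver !== "undefined") {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of collectTextyNodes(m.addedNodes)) {
        // scanAndCover() is hypothetical: apply the page-load trigger
        // matching to this node and cover it if it matches.
        scanAndCover(node);
      }
    }
  });
  // Watch the whole document for inserted chat messages, tweets, etc.
  observer.observe(document.body, { childList: true, subtree: true });
}
```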
Firebase was integrated so that users can flag words and phrases that have not been picked up.
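Flag submission could look like the following sketch. The database URL and payload fields are illustrative assumptions, and the Firebase Realtime Database REST endpoint (a POST to a `*.json` path appends a child record) is used here for brevity rather than the client SDK:

```javascript
// Sketch of flagging a missed phrase (URL and field names are assumptions).

const FLAGS_URL = "https://soothe-example.firebaseio.com/flags.json"; // hypothetical

function buildFlagPayload(phrase, category) {
  // Normalize the phrase so duplicate flags collapse to one spelling.
  return {
    phrase: phrase.trim().toLowerCase(),
    category, // e.g. "harassment", "violence"
    flaggedAt: Date.now(),
  };
}

async function submitFlag(phrase, category) {
  // POSTing to a Realtime Database *.json endpoint appends a new record.
  const res = await fetch(FLAGS_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildFlagPayload(phrase, category)),
  });
  return res.ok;
}
```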
How we built it
Challenges we ran into
Speed and efficiency in finding offending words and phrases: on pages with a lot of text content, scanning can be lengthy. After several redesigns, we arrived at an algorithm that finds and covers content quickly enough to make the experience practical. Filtering intelligently was another big challenge, so we implemented sentiment analysis to determine which content was truly malicious.
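A lightweight lexicon-based sentiment pass is one way to do this. The sketch below is a simplified illustration of the idea (the lexicon and threshold are made up, and a real extension would likely use an established library such as an AFINN-based one), showing how a trigger match can be combined with a negativity score to cut false positives on neutral mentions:

```javascript
// Simplified lexicon-based sentiment scoring (illustrative only).
const LEXICON = { hate: -3, awful: -3, hurt: -2, kill: -3, love: 3, great: 3, help: 2 };

function sentimentScore(text) {
  // Sum per-word valence; unknown words contribute 0.
  return text
    .toLowerCase()
    .split(/\W+/)
    .reduce((score, word) => score + (LEXICON[word] || 0), 0);
}

// Only treat content as malicious when it both contains a trigger word
// AND reads as sufficiently negative, e.g. sparing a neutral news headline.
function isMalicious(text, triggers, threshold = -2) {
  const hasTrigger = triggers.some((t) => text.toLowerCase().includes(t));
  return hasTrigger && sentimentScore(text) <= threshold;
}
```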
Accomplishments that we're proud of
The extension works well on most of the common sites we tested that may contain offending content. The experience is seamless and subtle enough to use in everyday life.
What's next for soothe
Release to the Chrome Web Store!