Inspiration

The internet is a hostile place, home to a wide spread of triggering and unfriendly images. To combat this, users manually type out the prefix 'tw:' to warn others about upcoming content. However, many users miss or ignore these warnings and see the disturbing image anyway. Clearly, there is room for improvement, especially on websites that have no filters in place to warn users about this sort of content. Browse Safe aims to detect customizable trigger keywords in images and show the user a full-screen warning, letting them choose whether or not to continue to the content. Our goal is to create a safer internet by screening out these triggers and allowing users to surf the web safely!

How we built it

Browse Safe is a Chrome extension written in JavaScript. We first scrape all the images on a webpage and use the Microsoft Azure Computer Vision API to fetch a description of the objects in each image. We then check whether any image's description contains one of the trigger keywords. If one does, a full-page overlay launches and warns the user about the trigger. It is then up to the user to decide whether or not to continue on the webpage.
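The pipeline above can be sketched roughly as follows. This is a hedged illustration, not the project's actual source: `TRIGGER_KEYWORDS`, `AZURE_ENDPOINT`, `AZURE_KEY`, `showOverlay`, and the function names are placeholders we invented, and the Azure response shape shown is the tags/captions format returned by the Computer Vision Analyze Image endpoint.

```javascript
// Hypothetical sketch of the Browse Safe flow; constants and names are assumptions.
const TRIGGER_KEYWORDS = ["blood", "weapon", "spider"]; // hardcoded, as in the current build

// Collect every word Azure reported for an image (tag names plus caption words),
// then return the subset of trigger keywords that appear among them.
function findTriggers(visionResult, triggers) {
  const words = new Set();
  (visionResult.tags || []).forEach((t) => words.add(t.name.toLowerCase()));
  ((visionResult.description || {}).captions || []).forEach((c) =>
    c.text.toLowerCase().split(/\s+/).forEach((w) => words.add(w))
  );
  return triggers.filter((t) => words.has(t.toLowerCase()));
}

// Content-script side: scrape image URLs from the page and query Azure for each.
// AZURE_ENDPOINT / AZURE_KEY would come from the extension's config in practice.
async function scanPage() {
  const urls = [...document.images].map((img) => img.src).filter(Boolean);
  for (const url of urls) {
    const res = await fetch(
      `${AZURE_ENDPOINT}/vision/v3.2/analyze?visualFeatures=Tags,Description`,
      {
        method: "POST",
        headers: {
          "Ocp-Apim-Subscription-Key": AZURE_KEY,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ url }),
      }
    );
    const hits = findTriggers(await res.json(), TRIGGER_KEYWORDS);
    if (hits.length) showOverlay(hits); // launch the full-page warning overlay
  }
}
```

Keeping the keyword match in a pure function like `findTriggers` makes the trigger logic testable without network calls, and makes it easy to later swap the hardcoded list for user-supplied keywords.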

Challenges we ran into

Accomplishments that we're proud of

Having had no prior experience with any of the technologies we used, we are incredibly proud to have built a functional project in just 24 hours. We're also very proud of our team's ability to work together, as this was our first hackathon as a team.

What we learned

Our team learned a bunch! Nobody on our team had any prior experience with creating Chrome extensions, much less with the Microsoft Azure API. This was an incredible learning opportunity that allowed us to apply the newfound knowledge from the Azure workshop to create positive social change. We also learned more about injecting elements into a web page and got a lot of practice debugging.

What's next for Browse Safe

We hope to let users input their own trigger keywords. Currently, due to time limitations, the trigger words are hardcoded to match the kinds of responses the Azure API returns to us. Browse Safe also automatically scans every website and pops up a loading screen, even when the user knows the site is safe. In the future, it would be great if the user could toggle the extension on for unknown websites, or add trusted websites to a list. Another possible improvement is to place the overlay only on the image that matched a trigger keyword, rather than over the entire screen, so the user can still see the rest of the website without worrying about seeing something they don't want to.
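The trusted-websites idea could start from a small allowlist check like the one below. This is purely a sketch of a possible future feature, not existing code: `isTrusted`, the storage key `trustedHosts`, and the surrounding wiring are all our assumptions.

```javascript
// Hypothetical sketch of the planned trusted-sites toggle; names are assumptions.
// Returns true if the page's host is a trusted host or a subdomain of one.
function isTrusted(pageUrl, trustedHosts) {
  const host = new URL(pageUrl).hostname;
  return trustedHosts.some((t) => host === t || host.endsWith("." + t));
}

// In the extension, the list could live in chrome.storage.sync and gate the scan:
// chrome.storage.sync.get({ trustedHosts: [] }, ({ trustedHosts }) => {
//   if (!isTrusted(location.href, trustedHosts)) scanPage();
// });
```

Gating the scan this way would also remove the loading screen on sites the user has already vetted.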

Built With

  • azure-javascript