In this day and age, we're all deeply connected to the digital world, yet we rarely recognize the privilege most of us possess when it comes to consuming digital media. There are individuals who have to be very careful with what they watch and how they go about their day because they have photosensitive epilepsy. One of those individuals is our good friend Krista, who suffers from photosensitive epilepsy and is very careful about how she consumes her digital media. Recognizing this, we decided to make content streaming services accessible for individuals like Krista.

What it does

We created a Chrome extension, lumi, that detects when you're on YouTube and gives you the option to click the extension to normalize the YouTube video. When you do, lumi sends the video to our API, gleam, which passes it to our in-house AI, loom. Loom takes care of normalizing the video, removing any content that might be seizure inducing.
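
To make "normalizing" concrete: seizure triggers are commonly rapid luminance flashes, so one core check is flagging frames whose brightness jumps too sharply from the previous frame and dampening those jumps. The sketch below is purely illustrative; the names (`flag_flash_frames`, `normalize`, `FLASH_THRESHOLD`) and the simple clamping rule are our assumptions for this example, not loom's actual implementation.

```python
# Illustrative sketch of flash detection/dampening (hypothetical, not loom's
# real code). Each value is a frame's mean luminance, normalized to [0, 1].

FLASH_THRESHOLD = 0.4  # assumed maximum allowed brightness jump between frames

def flag_flash_frames(luminances):
    """Return indices of frames whose brightness jump exceeds the threshold."""
    flagged = []
    for i in range(1, len(luminances)):
        if abs(luminances[i] - luminances[i - 1]) > FLASH_THRESHOLD:
            flagged.append(i)
    return flagged

def normalize(luminances):
    """Clamp each frame's brightness to within the threshold of the previous frame."""
    out = list(luminances)
    for i in range(1, len(out)):
        jump = out[i] - out[i - 1]
        if abs(jump) > FLASH_THRESHOLD:
            out[i] = out[i - 1] + (FLASH_THRESHOLD if jump > 0 else -FLASH_THRESHOLD)
    return out
```

For example, the sequence `[0.1, 0.9, 0.1, 0.2]` contains two flashes (frames 1 and 2), and normalizing caps the first spike at a 0.4 jump.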

We also created a companion app, flicker, where you can browse videos on your phone and normalize them from within the app.

How we built it

Our build is divided into three main services. The Chrome extension, lumi, is built according to the Google Chrome extension specification and checks for a YouTube link. Upon clicking the extension icon, we send the YouTube video to our API, gleam, which passes it to our AI, loom. Loom is a deep learning application built in-house on a TensorFlow stack; using an autoencoder, loom is able to detect any seizure-inducing frames. After normalizing the video, loom returns it to gleam, which sends it back to us, and we display the newly normalized video in a separate tab.
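
The autoencoder idea behind loom can be sketched in miniature: train a model on "normal" frames, then flag any frame the model reconstructs poorly. As a stand-in for the trained TensorFlow network (whose architecture and weights we don't reproduce here), this toy uses a linear encoder/decoder fit with SVD; everything below is an assumption made for illustration only.

```python
import numpy as np

def fit_linear_autoencoder(frames, k=2):
    """Fit a k-dimensional linear 'autoencoder' (PCA via SVD) on normal frames.

    A toy stand-in for loom's trained TensorFlow autoencoder.
    """
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    components = vt[:k]  # rows act as both encoder and decoder weights
    return mean, components

def reconstruction_error(frame, mean, components):
    """Encode then decode a frame; a large error means the frame is unlike
    the training data, which is the anomaly signal used to flag frames."""
    code = components @ (frame - mean)    # encode: project into latent space
    recon = mean + components.T @ code    # decode: map back to frame space
    return float(np.linalg.norm(frame - recon))
```

Frames resembling the training distribution reconstruct almost perfectly (error near zero), while out-of-distribution frames produce a large error and can be flagged for normalization.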

Flicker follows the same pipeline as lumi, but is built on React Native and introduces features such as searching for videos and being recommended similar ones.

Challenges we ran into

Figuring out how to develop a Google Chrome extension was tricky. The documentation is there, but community support was lacking in our opinion.

We also had issues exposing a web server for loom to receive and return videos.

Accomplishments that we're proud of

We're very proud that we were able to bring everything together and have an AI, built from the ground up, working and processing our videos correctly.

What's next for

In the future, we want to tackle the other mediums of content distribution out there. One idea of ours is to build similar functionality as a Roku extension.
