Inspiration

Rapidly flashing lights are not an uncommon phenomenon in video games, movies and YouTube videos. They usually go unnoticed or cause us to look away for a second. But there's a dark side to these bright lights: what is merely annoying to most can be strongly aversive, even dangerous, to the many people with photosensitive epilepsy (PSE), in whom it can trigger seizures. And yet, most videos don't come with a warning.

It is a common disorder among children and adolescents, but it often goes undiagnosed because there are no clear symptoms.

This is what inspired us to create FlikcerApp.com, a website that tests videos for epileptic content, shows the timestamps of potential triggers, and offers a safer version of the video to download.

What it does

To test any YouTube video for possible photosensitive epilepsy triggers, copy and paste its URL into the text field and click 'Test'.

To test a video of your own, click the 'Upload' button and select your video file. Wait for the video to finish uploading to our servers, then click 'Test'.

Flikcer will read your video, analyse every frame and detect the triggers. Once the analysis is done, Flikcer reports the number of possible triggers in the video. You can then see the specific timestamps of those triggers by pressing the 'View Timestamps' button.

To download a safer version of your video, click the 'Create Safe Video' button. Flikcer will ask how many cleanup iterations to run: you can choose 'Low', 'Medium' or 'High', and Flikcer will run the corresponding number of iterations. Each iteration removes the new epileptic frames that may have been created by removing the previous ones. Once the video is ready, simply click 'Download' to download your safe video.
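Conceptually, the iteration loop looks like this. This is a toy sketch of our own: single brightness numbers stand in for whole frames, and the "fix" is to duplicate the previous frame over a harmful jump; the real tool works on full images and rebuilds the video with FFmpeg.

```python
def make_safe(frames, max_iterations, lum_delta=20):
    """Toy sketch of the iterative cleanup described above.

    Each pass softens frames that form a large brightness jump by
    replacing them with a copy of the previous frame, then re-checks,
    since changing frames can create new adjacent jumps.
    (Illustrative only; real frames are images, not single numbers.)
    """
    frames = list(frames)
    for _ in range(max_iterations):
        changed = False
        for i in range(1, len(frames)):
            if abs(frames[i] - frames[i - 1]) >= lum_delta:
                frames[i] = frames[i - 1]  # soften the jump
                changed = True
        if not changed:
            break  # no harmful jumps left, stop early
    return frames
```

After the loop, no two consecutive values differ by the threshold or more, which is the invariant each 'iteration' of the real pipeline works towards.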

How we built it

Photosensitive epilepsy (PSE) is a form of epilepsy in which seizures are triggered by visual stimuli, such as flashing lights and high-contrast geometric patterns. Both natural and artificial light may trigger seizures.

A potentially harmful flicker occurs when there is a pair of opposing changes in luminance (i.e. an increase in luminance followed by a decrease, or a decrease followed by an increase) of 20 cd/m² or more. This applies only when the screen luminance of the darker image is below 160 cd/m². Irrespective of luminance, a transition to or from a saturated red is also potentially harmful.

A sequence of flashes is not permitted when both of the following occur: 1) the combined area of the flashes occurring concurrently occupies more than 25% of the displayed screen area, and 2) the flash frequency is higher than 3 Hz.
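The two thresholds above can be expressed as a pair of small checks. This is a minimal sketch in Python; the function names and unit conventions are our own, not part of any standard API.

```python
# Illustrative checks for the flicker/flash thresholds described above.
# Luminance values are in cd/m^2, area as a fraction of the screen.

def is_harmful_transition(lum_before, lum_after):
    """A single luminance change of 20 cd/m^2 or more counts only when
    the darker of the two images is below 160 cd/m^2."""
    return abs(lum_after - lum_before) >= 20 and min(lum_before, lum_after) < 160

def flash_sequence_is_harmful(flash_area_fraction, flash_frequency_hz):
    """A flash sequence is flagged only when BOTH conditions hold:
    more than 25% of the screen area AND more than 3 Hz."""
    return flash_area_fraction > 0.25 and flash_frequency_hz > 3.0
```

For instance, a full-screen strobe at 5 Hz would be flagged, while the same strobe confined to a small corner of the screen would not.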

Flikcer looks at every frame of the video and compares each pixel's brightness to its value in the next frame. Depending on whether the change is positive or negative, Flikcer sorts the pixels into two groups. It accumulates the values in each group until the number of pixels exceeds the minimum threshold of 25% of the screen area, and then calculates the average brightness change between each pair of consecutive frames.

Flikcer then scans these "average changes" for opposing changes of more than 20 cd/m² across consecutive frame pairs. Each such pair of opposing changes is a flicker. Finally, Flikcer checks for a harmful flash by calculating the frequency of these flickers: if it exceeds 3 Hz, the sequence is flagged as a potential trigger.
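The steps above can be sketched as a simplified NumPy pipeline. This is our own condensed version of the analysis, not Flikcer's actual code; the function name, parameters and defaults are assumptions for illustration.

```python
import numpy as np

def detect_triggers(frames, fps, lum_delta=20.0, area_frac=0.25, max_hz=3.0):
    """Simplified sketch of the per-frame analysis described above.

    `frames` is a (T, H, W) array of per-pixel luminance values (cd/m^2).
    Returns timestamps (seconds) where flicker frequency exceeds max_hz.
    """
    n_pixels = frames.shape[1] * frames.shape[2]
    diffs = np.diff(frames.astype(np.float64), axis=0)  # frame-to-frame change

    avg_changes = []
    for d in diffs:
        brighter = d[d > 0]  # pixels that got brighter
        darker = d[d < 0]    # pixels that got darker
        # Keep only a change that covers enough of the screen area.
        if brighter.size > area_frac * n_pixels:
            avg_changes.append(brighter.mean())
        elif darker.size > area_frac * n_pixels:
            avg_changes.append(darker.mean())
        else:
            avg_changes.append(0.0)

    # A flicker is a pair of opposing changes, each of at least lum_delta.
    flicker_frames = []
    for i in range(1, len(avg_changes)):
        a, b = avg_changes[i - 1], avg_changes[i]
        if abs(a) >= lum_delta and abs(b) >= lum_delta and a * b < 0:
            flicker_frames.append(i)

    # Flag one-second windows in which flickers occur at more than max_hz.
    triggers = []
    for i in flicker_frames:
        window = [j for j in flicker_frames if i <= j < i + fps]
        if len(window) > max_hz:
            triggers.append(round(i / fps, 2))
    return triggers
```

A video that alternates between black and bright frames on every frame at 10 fps produces flickers well above 3 Hz and is flagged, while a steady video produces none.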

Flikcer's algorithm was written in Python. The video downloading step is handled by youtube-dl and pafy. We read the video with OpenCV and process the frames as NumPy arrays using a vectorised implementation. The creation of the safe video is then handled by FFmpeg.
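As an example of the vectorised step, converting a whole frame to luminance is a single NumPy expression. This sketch uses the standard Rec. 601 luma weights; in the app the BGR frames would come from `cv2.VideoCapture`, but any (H, W, 3) uint8 array works here.

```python
import numpy as np

def bgr_to_luminance(frame):
    """Convert a BGR image to one luminance value per pixel, vectorised.

    Uses Rec. 601 weights (0.299 R + 0.587 G + 0.114 B); OpenCV delivers
    channels in B, G, R order, hence the reversed coefficients.
    """
    return frame[..., 0] * 0.114 + frame[..., 1] * 0.587 + frame[..., 2] * 0.299
```

A pure white frame maps to 255 and pure black to 0, with no Python-level loop over pixels, which is where the speedup over per-pixel iteration comes from.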

To make the algorithm usable by end users, we deployed it on Heroku, with Celery running the analysis as background tasks. Django manages the bridge between the front end and the Python code. We used Google Firebase as backend storage for the uploaded videos and the videos offered for download.

Challenges we ran into

Initially, our algorithm took approximately 3 minutes to test every minute of video. This was far too slow for a real-world implementation.

It was also extremely hard to make sense of all the concepts mentioned in the paper, which involved complicated statistical and mathematical algorithms we had to implement.

When implementing the production website, we had to switch to the headless version of OpenCV so that it could run alongside Django and Celery.

We also had to manage server space, since the server couldn't hold an entire video at once. We had to split the video into mini-batches of data and upload them to the Firebase server concurrently.
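The mini-batch step can be as simple as a chunked file reader. This is a generic sketch of our own; the chunk size is an assumption, and the Firebase upload call itself is omitted.

```python
def iter_chunks(path, chunk_size=8 * 1024 * 1024):
    """Yield successive fixed-size chunks of a file so the whole video
    never has to sit in server memory at once. Each chunk can then be
    pushed to storage (upload call omitted here)."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return  # end of file
            yield chunk
```

Because the generator reads lazily, peak memory stays at one chunk regardless of video length.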

Accomplishments that we're proud of

Our algorithm works in near real time and can analyse videos up to 10 minutes long within 2-3 minutes. It works extremely well and is paired with a very user-friendly interface. We also managed to create a Chrome extension for live use, which runs in the background and reads the current tab's URL.

What we learned

Neither of us had worked on a Python-based web application before, so we learnt a lot about Django and how its various frameworks and libraries are managed. We learnt about API requests and background tasks. We also learnt a lot about reading video and audio, and about making programs faster through vectorised implementations.

This project covered all the bases of modern web development: a frontend with great UI/UX, a backend, Python integration and server communication, as well as an algorithm implementation.

What's next for Flikcer - Web App for Photosensitive Epilepsy Resolution

We would like to get our website reviewed and tested to check its real-world applicability, and we hope to help the millions who suffer from epilepsy. We are also researching other types of photosensitive epilepsy triggers, such as saturated-red transitions and colour variations, and we hope to add these to our algorithm.
