In 1997, an episode of Pokémon aired in 4.6 million homes in Japan. The episode contained a scene with many quick flashes that caused 685 children to be hospitalized with photosensitive epileptic seizures.
In 2008, an online community known as 4chan posted seizure-triggering images and videos on epilepsy-related forums and websites.
In 2012, a promotional video for the London Summer Olympics was reported to trigger seizures in people with photosensitive epilepsy.
The threat posed to people with epilepsy in each of these incidents could have been significantly mitigated by software that detects when potentially hazardous elements are present on a screen and blocks them. That is what Project Julius does.
The name comes from Julius Caesar, the Roman general and statesman. Caesar was an innovative leader who has been retrospectively diagnosed with epilepsy. We aim to bring that same spirit of innovation to the seizure-prevention measures in place for video on computers.
How it works
Project Julius monitors what is displayed on the computer's monitor and blocks quick image changes in order to prevent seizure triggers.
We use CamTwist and OpenCV to capture the images on screen, then process them with a histogram analysis over subdivided regions of the screen, looking for rapid changes. If high-frequency flashing is detected, a warning window is pushed above all other open windows, alerting the user and covering the threat.
To decide whether the display contains flashes that should be blocked, we perform a histogram analysis on the image. First, we take two consecutive frames and divide each into a 10-by-10 grid of pixel regions. We then analyze the colour spectrum and build a histogram representing each region. Next, we compute the Hellinger distance, which is derived from the Bhattacharyya coefficient, between corresponding histograms. This distance quantifies the similarity between the regions and lets us easily spot major changes in the image, such as a flash. If the image changes by 95% or more, as measured by this distance, we declare a "flash event". We then compare this result to the previous 10 results for each region; if 60% of those frames contain flash events, the seizure guard is triggered and the window is minimized. We analyze the display at 30 fps and require 60% of the last 10 frames to contain dangerous flash events before declaring a possible triggering event. This lets Project Julius catch a potential seizure-triggering event after only 0.2 seconds.
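The per-frame comparison above can be sketched as follows. This is a minimal illustration with NumPy rather than the project's actual code: it assumes grayscale frames, and the function and parameter names (`region_histogram`, `flash_regions`, the 32-bin histogram) are illustrative choices, not part of Project Julius itself.

```python
import numpy as np

def region_histogram(region, bins=32):
    # Grayscale histogram of one region, normalized to a probability distribution.
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def hellinger_distance(p, q):
    # Hellinger distance derived from the Bhattacharyya coefficient:
    # BC = sum(sqrt(p * q)), H = sqrt(1 - BC). H is 0 for identical
    # histograms and 1 for completely disjoint ones.
    bc = np.sum(np.sqrt(p * q))
    return np.sqrt(max(0.0, 1.0 - bc))

def flash_regions(prev, curr, grid=10, threshold=0.95):
    # Divide both frames into a grid x grid layout and count the regions
    # whose histograms changed by more than `threshold` (a "flash event"
    # for that region).
    h, w = prev.shape
    rh, rw = h // grid, w // grid
    flagged = 0
    for i in range(grid):
        for j in range(grid):
            a = prev[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
            b = curr[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
            d = hellinger_distance(region_histogram(a), region_histogram(b))
            if d > threshold:
                flagged += 1
    return flagged
```

A full-screen black-to-white flash flags all 100 regions, while two identical frames flag none. In practice OpenCV's `cv2.compareHist` with the Bhattacharyya method computes an equivalent distance directly.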
Challenges we ran into
The first challenge we ran into was completing the capture, analysis, and blocking in real time with no perceptible lag. We altered our algorithm and refactored most of our existing Python code, ultimately achieving accurate analysis with minimal resource consumption.
The biggest challenge we ran into was accurately detecting when an image was flashing repeatedly, rather than an object moving or a scene changing. Many things look like flashes at first but are clearly not when watched by a human. By comparing multiple frames one after another, we were able to better analyze what is actually happening on the display.
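The multi-frame comparison can be sketched as a rolling window over recent frame results. This is a hypothetical sketch, not the project's code: the class name and the trigger rule (60% of the last 10 frames, as described above) are assumptions drawn from the write-up.

```python
from collections import deque

class SeizureGuard:
    # Keeps a rolling window of the last `window` frame comparisons and
    # triggers once at least `ratio` of them contained a flash event.
    # A single scene cut produces one flash event and is ignored;
    # sustained flashing accumulates events until the guard fires.
    def __init__(self, window=10, ratio=0.6):
        self.window = window
        self.ratio = ratio
        self.events = deque(maxlen=window)

    def update(self, flash_event):
        # Record whether the newest frame pair contained a flash event,
        # then check if the trigger threshold has been reached.
        self.events.append(bool(flash_event))
        return sum(self.events) >= self.window * self.ratio
```

With these thresholds, six consecutive flash frames at 30 fps trigger the guard, matching the 0.2-second detection latency described above, while an isolated scene change never does.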
Accomplishments that we are proud of
Our proudest accomplishment is learning about epilepsy. To understand the requirements of our project we had to learn a few things about epilepsy, including its triggers, effects, and the number of people affected. However, we wanted to go further. Since one of our team members has family experience with epilepsy, we decided to learn more. We learned about multiple kinds of epilepsy, their triggers, statistics, long-term effects, symptoms, and the demographics of affected persons. The coding may have taught us new ways to accomplish our tasks, but we think the knowledge we now have will have a greater impact on our lives. Epilepsy is something we may have to deal with first-hand, be it with friends, family, or anyone in public.
What we learned
During the creation of Project Julius we learned a lot about epilepsy and its causes. Using online resources, we discovered how many people it affects, its common triggers, its long-term effects, and major public incidents.
We also learned a lot about how to process large images (upwards of 1080p) quickly and effectively using histograms of regions on the screen. Changes in the histogram values indicate a change in content; a large difference indicates a flash or scene change. We then extended our analysis to compare multiple changes over time, detecting the rate at which the image changes and thus determining whether we should block it.
Our initial approach to capturing the screen was to use the low-level frame buffer built into Linux. We spent a few hours learning how to access and process the frame buffer but ultimately decided against it: relying on something built into Linux would greatly limit our potential reach. Instead, we made our screen-capturing method portable and cross-platform by using OpenCV and native desktop-capturing applications that can pipe into Python. We found programs for each of the major operating systems, such as CamTwist and ManyCam.
What's next for Project Julius
Project Julius can already be applied in many situations, but it isn't ready for a full release. We hope to continue development and create a fully refined, deployable application. We would also like to contact Epilepsy Action to verify that our program is effective and to make it available to as many people as possible.