Being able to attend Hack the North, the largest hackathon in Canada, inspired us to create something to help others in hopes of leaving a positive impact on society. We decided to create a program targeted towards helping those living with photosensitive epilepsy because, despite its severity, it lacks coverage. After intensive research, we learned that photosensitive epilepsy is a large-scale neurological disorder in which seizures can be triggered by flashing lights and patterns. By some estimates, about 50,000 people worldwide have died as a result of unexpected visual triggers from technological devices. We fear that the death rates caused by technological triggers are bound to grow if the current lack of awareness continues. As we researched, we quickly realized that many videos lacked forewarnings despite containing potentially seizure-inducing content. With that in mind, we decided to program an easily accessible Chrome extension as a safety measure for those who are susceptible to seizures, in hopes of saving tens of thousands of lives annually.

What it does

Our Chrome extension can take any YouTube video and warn the user of segments that could be hazardous for those with photosensitive epilepsy. When a user opens a YouTube video, the extension sends the URL to our Python back end, which breaks the video into frames and checks for changes in light and patterns, producing a collection of values representing the severity of each potential trigger. These values are recorded to a database so the same video never needs to be processed twice, and they are sent to the front end to be displayed to the user as a graph. The extension also gives a pop-up warning if a section of the graph (the values we collect) exceeds the range of values we've concluded to be non-hazardous.
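As a rough illustration of the flagging step described above, here is a minimal sketch of how consecutive frames might be scored and flagged. The `flash_scores` helper and its threshold are illustrative, not our exact back-end code:

```python
import numpy as np

def flash_scores(frames, threshold=40.0):
    """Score brightness jumps between consecutive frames and flag the
    frames whose jump exceeds the threshold.

    frames: list of HxWx3 uint8 arrays.
    threshold: illustrative cutoff for a "hazardous" brightness change.
    """
    scores, flagged = [], []
    for i in range(1, len(frames)):
        # Mean brightness of each frame, compared to the previous one.
        prev = frames[i - 1].astype(np.float64).mean()
        curr = frames[i].astype(np.float64).mean()
        score = abs(curr - prev)
        scores.append(score)
        if score > threshold:
            flagged.append(i)
    return scores, flagged
```

In the real extension the per-frame scores are what gets stored in the database and graphed on the front end, while the flagged indices correspond to the pop-up warnings.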

How I built it

We started by discussing the framework and complete stack of the project, then divvied up the work so each of us could contribute in our own specialized field while still learning from and collaborating with each other on the other tasks.

Lydia: I worked on the front end development of the Chrome Extension, creating a dropdown bar users can interact with to control our program. I also worked on many of the design aspects of our project such as the creation of our logo and both the functionality and visual parts of the UI. I used HTML5, CSS, and JavaScript to develop the Chrome Extension and I worked with Krita to design the logo.

Henry: I worked on connecting all the different parts of the project, managing the database, and controlling the YouTube videos from the front end. Everything was tied together through Firebase; a large portion of this was sending the URL to the back end and getting data from the back end to the front end. Pausing the YouTube videos and creating pop-ups was done with JavaScript.

Nizar: I worked on video processing. I took the URL, downloaded the video, then broke it up into frames, all of which were converted to be compatible with the frame-processing back end. I used pytube and cv2 to do all of this. I also used cv2 to detect edges in the images, which were later processed for patterns. Finally, I used JavaScript to take the data from the database and convert it into a graph for the user to see.

Adam: I worked on the back end: taking in the frames, comparing consecutive frames, and looking both for changes in colour between pixels and for patterns such as stripes, each of which is a large contributor to photosensitive epileptic seizures. For colour, we measured the "distance" between each pair of colours using the RGB values as coordinates. Stripes were more difficult: we had to split the grid into many small sections, then use machine learning to take the points that looked like colour edges, draw a line of best fit through them, and check the accuracy of that line so we knew how heavily to weight that section. Those two factors were combined into a score for every frame, which was then sent off to the database to be stored and used later on.
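The two measures above can be sketched as follows: Euclidean distance in RGB space for colour change, and a least-squares line fit with an r-squared check for how "stripe-like" a section's edge points are. The helper names are illustrative, not our exact implementation:

```python
import numpy as np

def colour_change(frame_a, frame_b):
    """Mean Euclidean distance between corresponding pixels, treating the
    RGB values of each pixel as coordinates in 3D colour space."""
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())

def fit_stripe(points):
    """Fit a line of best fit y = m*x + b through candidate edge points
    and return (m, b, r2); r2 near 1 means the points really do form a
    line, so the section counts strongly toward the stripe score."""
    xs = np.array([p[0] for p in points], dtype=np.float64)
    ys = np.array([p[1] for p in points], dtype=np.float64)
    m, b = np.polyfit(xs, ys, 1)
    pred = m * xs + b
    ss_res = ((ys - pred) ** 2).sum()
    ss_tot = ((ys - ys.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 1.0
    return m, b, r2
```

Running `fit_stripe` once per small section, rather than once over the whole frame, is what lets multiple stripes at different angles be detected independently.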

Challenges I ran into

Lydia: Some challenges I ran into were developing a Chrome extension for the first time and creating an application that is visually pleasing for users.

Henry: A large challenge for me was actually understanding the Firebase documentation. I had to go through a lot of experimenting to figure things out, since the documentation itself was quite difficult to read. Also, dealing with the different scopes between the Chrome extension and the main page content took a lot of time.

Nizar: A couple of challenges came up as we first learned to use cv2. Mostly it was getting used to the syntax and the different functions that could be used to break the video up and modify the frames for easy processing.

Adam: The main challenge I had was line detection, specifically being able to distinguish between different lines on the screen. This was made much easier by splitting up the screen and then looking for one line in each of those sections to find all the stripes.

Accomplishments that I'm proud of

Lydia: I’m proud of being able to learn and create a Chrome Extension within the weekend and of being able to troubleshoot various problems in the code to produce work I am satisfied with.

Henry: Honestly, I'm just proud I got my part to work. It's like the glue of the project, and seeing everything go together and actually meld into a singular project was really cool.

Nizar: I'm proud of being able to do all the processing using just Python instead of a lot of external software. Especially considering how much trouble it was to first start using the Python modules we needed, I was able to properly use them within a day. I'm also proud of learning JavaScript in a day to create the graph for the extension.

Adam: I'm proud of being able to process all the pixels and patterns in a frame in a reasonable amount of time to keep up with the user. I'm also proud of making a good pattern recognizer, something I had trouble conceiving of before the hackathon and spent a lot of time figuring out while I was here.

What I learned

Lydia: I learned that passion and coffee are powerful engines capable of fueling me through no sleep and minimal naps. I've also learned that communication, and being able to discuss ideas and challenges with others, is much more effective than working alone.

Henry: I learned how to develop a Chrome extension. A lot of this was done through asking the Google engineers, who were very helpful. I also became much more familiar with Firebase. Most importantly, I learned about the limits my body has on minimal sleep.

Nizar: I learned how to code in JavaScript to create the graph on the front end, and I also learned how to use Python's cv2 and some other smaller modules to make tedious jobs much easier.

Adam: I learned more about machine learning, something I had no idea how to use before. I needed some way to detect all the stripes in an image, and I found a way to take multiple points and construct a line from them. Since one line wasn't enough, I realized that breaking the problem up into smaller squares allowed the program to detect multiple lines going in multiple directions, since each square could be treated independently.

What's next for SeizeControl

Some videos, mostly music videos, actually have extra protection that makes it much more difficult to get the video's information. We hope to make SeizeControl compatible with more videos and capable of collecting information even from videos that restrict access to it. The extension could also be extended to detect any repeating patterns instead of just stripes. We'd also like to make the extension compatible not just with YouTube videos but with the screen as a whole, essentially broadening its analysis range and stopping seizure-causing content before it can become a hazard. Finally, processing the video in real time would make the extension more user-friendly and remove possible inconveniences.
