Inspiration

When I (Dina) was trying to come up with problems that affect people and that could be solved with tech, I remembered three girls at my school, as well as my own aunt, who get seizures. I started looking online to see if something already existed; it turns out there's only a Chrome extension for slowing down GIFs, and that's it. I then found out that flashing content like this triggers discomfort, which can lead to seizures. So I decided to use machine learning to speed up the process of determining whether a video contains content that would harm those social media users. The team came up with ways to refine the algorithm and scope what it would do so we could build it in the time we had. (My first plan was to make the algorithm detect videos, but that takes a really long time to build; creating an image recognition model through Google's resources is much easier.) The name Monitum comes from the Latin word for "warning."

What it does

Monitum is an algorithm that reads videos and determines the likelihood of a video causing a seizure based on the model's confidence score. We used two videos to train the model: one for triggers (https://drive.google.com/file/d/1OWMRqe7vicKMpqWYfGgohuBp5SQEBEv_/view?usp=sharing) and another for non-triggers (https://drive.google.com/file/d/17Rh7jr6A90oSz62YYKaQaUFVIqhAwbEO/view?usp=sharing). The CSV file in the GitHub repo is probably not much use to you, so if you want to see the model in action, contact us for access to the AutoML link. There, you can add an image and the model will automatically classify it as flashy or not flashy. The model is meant to be used on websites with videos: it pulls frames from a video and classifies each one as flashy or non-flashy. If it finds three flashy frames right next to each other in time, each with 80%+ confidence, it flags the video as likely to cause problems for people with photosensitivity. We also made a website to showcase our work and serve as a landing page where future sponsors can learn what we're all about.
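To make the flagging rule concrete, here is a minimal sketch of the "three consecutive flashy frames at 80%+ confidence" check. The classify_frame helper is hypothetical; it stands in for whatever wraps the AutoML prediction call, and the function and parameter names are illustrative, not our actual code.

```python
from typing import Callable, List, Tuple

# Hypothetical helper: wraps the AutoML prediction call and returns
# (label, confidence) for one frame, e.g. ("flashy", 0.92).
ClassifyFn = Callable[[bytes], Tuple[str, float]]

def is_likely_trigger(frames: List[bytes], classify_frame: ClassifyFn,
                      threshold: float = 0.80, run_length: int = 3) -> bool:
    """Flag a video if `run_length` consecutive frames are classified
    'flashy' with confidence >= `threshold`."""
    consecutive = 0
    for frame in frames:
        label, confidence = classify_frame(frame)
        if label == "flashy" and confidence >= threshold:
            consecutive += 1
            if consecutive >= run_length:
                return True
        else:
            # A non-flashy (or low-confidence) frame breaks the run.
            consecutive = 0
    return False
```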

How we built it

We built it using Google Cloud Vision's AutoML API, which made creating a dataset and training the model a fairly smooth experience. We built the website with our own HTML, CSS, and JS (Bootstrap and jQuery).
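For reference, classifying a single image against an AutoML image classification model looks roughly like the sketch below, using Google's google-cloud-automl Python client. The project ID, model ID, and file name are placeholders, not our real deployment values.

```python
from google.cloud import automl

# Placeholders: substitute your own GCP project and AutoML model IDs.
PROJECT_ID = "your-project-id"
MODEL_ID = "your-model-id"

client = automl.PredictionServiceClient()
model_path = automl.AutoMlClient.model_path(PROJECT_ID, "us-central1", MODEL_ID)

with open("frame.jpg", "rb") as f:
    image = automl.Image(image_bytes=f.read())

payload = automl.ExamplePayload(image=image)
# Only return labels scored at or above the 0.8 confidence cutoff.
request = automl.PredictRequest(
    name=model_path, payload=payload, params={"score_threshold": "0.8"}
)
response = client.predict(request=request)

for result in response.payload:
    print(result.display_name, result.classification.score)
```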

Challenges we ran into

We initially wanted to create an algorithm that scans through all the videos on a page and determines, with a certain level of confidence, whether they contain triggers. This proved difficult, however, so we quickly switched gears and made the algorithm detect images instead, which we can expand in the future to take frames from the videos on a site and run those images through the algorithm (see the sketch below).
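That future frame-extraction step could look something like this sketch, which assumes OpenCV (cv2) for decoding; the every_n sampling rate is an illustrative choice, not something we've settled on.

```python
import cv2

def sample_frames(video_path: str, every_n: int = 5) -> list:
    """Decode a video and keep every `every_n`-th frame as JPEG bytes,
    ready to send to the image classifier."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream
            break
        if index % every_n == 0:
            ok, encoded = cv2.imencode(".jpg", frame)
            if ok:
                frames.append(encoded.tobytes())
        index += 1
    capture.release()
    return frames
```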

Accomplishments that we're proud of

We're really proud of the fact that we wrote the website with our own CSS and designed it ourselves. We're also really proud of building a working machine learning model (many of us had never written an image recognition algorithm before this!). And we're proud that our product addresses a real-life issue that affects thousands of people around the world, with many ways to expand while staying open-source.

What we learned

We learned a lot about teamwork and its importance in both the creative and technical aspects of this project. We also learned a lot about the epilepsy community (especially since one of our team members has a family member who is part of that community) while coming up with ideas and doing market research.

What's next for Monitum

We're looking to expand to more triggers, such as those experienced by people with PTSD and anxiety. We're also looking forward to extending the model to compare real-time frames from videos at a quicker pace, so it can be used on social platforms with longer videos like Facebook and YouTube.

Built With

automl, bootstrap, css, google-cloud, html, javascript, jquery