Inspiration
Lucas came up with the idea of working with machine learning and optical flow for this hackathon, which got us thinking about what we could do with them. We ended up finding large datasets on a Stanford course's website -- Convolutional Neural Networks for Visual Recognition (CS 231n). The first one was UCF101: an action recognition dataset of realistic action videos spanning 101 action categories, which is the largest and most robust video collection of human activities. Videos there range from Applying Make Up to Basketball Dunks to Pizza Throwing to Handstand Pushups, and even Mopping Floors! Thank you to the University of Central Florida's Center for Research in Computer Vision for making this data available.
The second dataset we found was **Brown University Serre Lab's HMDB: Human Motion Database**, which contains videos of 51 different human actions.
What it does
Though this project has yet to fully materialize, we put our best effort forward, aiming to see how accurately our Convolutional Neural Network (CNN) could classify videos of human activity.
How we built it
Nitzan worked with OpenCV, using its optical flow functions to process the videos into material the CNN could train on. Lucas cooked up the CNN in TensorFlow, training the network and then making predictions on new data.
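To give a flavor of the preprocessing step, here is a minimal sketch of dense optical flow extraction with OpenCV. The Farneback method, the 224x224 frame size, and the parameter values are illustrative assumptions, not necessarily what we used:

```python
import cv2

def video_to_flow_frames(path, size=(224, 224)):
    """Convert a video into a list of dense optical-flow frames (assumed helper)."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(cv2.resize(prev, size), cv2.COLOR_BGR2GRAY)
    flows = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback method);
        # the numeric parameters are OpenCV's commonly cited defaults.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)  # shape (H, W, 2): per-pixel x and y displacement
        prev_gray = gray
    cap.release()
    return flows
```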
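And a hedged sketch of the classification side in TensorFlow/Keras, stacking flow frames as input channels. The layer sizes, `NUM_CLASSES`, and `STACKED_FLOW_CHANNELS` are assumptions for illustration, not the network Lucas actually built:

```python
import tensorflow as tf

NUM_CLASSES = 101           # e.g., the UCF101 action categories
STACKED_FLOW_CHANNELS = 20  # assumed: 10 flow frames x 2 (x/y) channels

# A small illustrative CNN over stacked optical-flow inputs
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(224, 224, STACKED_FLOW_CHANNELS)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_flows, train_labels, epochs=10, validation_split=0.1)
```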
Challenges we ran into
Aside from learning how to use OpenCV and the concepts and implementation of optical flow, Nitzan ran into annoying bugs with OpenCV, the worst of which was being unable (for hours) to import cv2 in Python. Lucas struggled with creating a Python pickle due to the large amount of data involved, and of course he also learned a lot from the experience.
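For reference, one common workaround for pickling large arrays (an assumption about the kind of fix that applies here, not necessarily what we did) is to use pickle protocol 4, which supports data over 4 GB, or to skip pickle entirely with NumPy's own serialization:

```python
import pickle
import numpy as np

# Hypothetical placeholder for a large stack of optical-flow frames
data = np.zeros((1000, 224, 224, 2), dtype=np.float32)

# Protocol 4 (Python 3.4+) handles byte streams larger than 4 GB
with open("flows.pkl", "wb") as f:
    pickle.dump(data, f, protocol=4)

# Alternatively, np.save avoids pickle overhead for plain arrays
np.save("flows.npy", data)
```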
Accomplishments that we're proud of
Learning a bunch!
Built With
- machine-learning
- opencv
- python
- tensorflow