Electronic notes are generated, letting the user quickly jump to points in the video and search the key words
Start page: the user pastes a YouTube link to the lecture video
Inspiration
Sometimes in class, you're so focused on writing notes that you have trouble keeping up with what the lecturer is actually saying. And students with accessibility needs may feel that their conditions create obstacles to learning. We aimed to build a program that enhances the overall student education experience and lessens the intimidating gap that some students may feel at school.
What it does
Our project acts as a resource for students, especially those with accessibility needs. A student records a lecture session with a camera and later uploads the video to our website, which processes it and creates key-note slides captured at the moments when the chalkboard or whiteboard was at its fullest. It uses machine learning to convert the lecturer's handwritten notes to electronic text, which is particularly beneficial for students with conditions like dyslexia and ADD. Furthermore, our program highlights key words in the slides; clicking a word sends the user to the point in the video where it first appeared on the board, creating a resource that lets students skim through notes very quickly.
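The click-to-jump feature boils down to a keyword index mapping each word to the timestamp of its first appearance. A minimal sketch of that idea, with hypothetical per-frame OCR results standing in for the real recognition output:

```python
# Sketch: build a keyword -> first-appearance-timestamp index from
# per-frame OCR results. The frame texts and timestamps below are
# hypothetical stand-ins for the handwriting-recognition output.

def build_keyword_index(frames):
    """frames: list of (timestamp_seconds, recognized_text) pairs,
    assumed sorted by timestamp. Returns {word: first_timestamp}."""
    index = {}
    for ts, text in frames:
        for word in text.lower().split():
            index.setdefault(word, ts)  # keep only the earliest occurrence
    return index

frames = [
    (12.0, "Newton's second law"),
    (45.5, "force equals mass times acceleration"),
    (90.0, "force diagrams"),
]
index = build_keyword_index(frames)
print(index["force"])  # → 45.5, the first time "force" appears on the board
```

The front-end then only needs to seek the video player to `index[word]` when a highlighted word is clicked.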
How we built it
We used a dual-server architecture to deliver a front-end and a back-end. The front-end delivers content to the user through a single-page application on the MEANest stack you can find (the MEAN stack, to be specific). The back-end is a multi-tiered application that takes videos and processes the video frames through a handwriting-recognition API from Microsoft that reads the lecturer's handwriting. We aggregate these frames to convert the handwriting to text and to find key words within each frame that can be accessed from the front-end.
Challenges we ran into
Integrating the Angular 2 front-end with the Python back-end was a challenge, which pushed us to learn networking tasks like making HTTP GET and POST calls. Figuring out how to use machine learning APIs like Microsoft's Computer Vision handwriting tools was also a challenge, and it pushed us to research together effectively to learn how we could use them in our project.
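The GET/POST integration described above can be summarized as a small request flow: uploads arrive as POSTs, and the processed notes are fetched with GETs. The endpoint names and payloads below are hypothetical, shown as a plain dispatch table rather than a real web framework:

```python
# Minimal sketch of the front-end/back-end request flow. Endpoint
# names and payload fields are hypothetical illustrations, not the
# project's actual API.

def handle_upload(payload):
    # POST /videos — accept a lecture-video URL, return a job id.
    return {"job_id": 1, "url": payload["url"]}

def handle_notes(payload):
    # GET /notes — return the processed key-note slides for a job.
    return {"job_id": payload["job_id"], "slides": ["slide-1", "slide-2"]}

ROUTES = {
    ("POST", "/videos"): handle_upload,
    ("GET", "/notes"): handle_notes,
}

def dispatch(method, path, payload):
    """Route a (method, path) pair to its handler, as a web framework would."""
    return ROUTES[(method, path)](payload)

resp = dispatch("POST", "/videos", {"url": "https://youtu.be/example"})
print(resp["job_id"])  # → 1
```

In the real system the Angular front-end issues these calls with its HTTP client and the Python back-end routes them the same way, just with a proper web framework in place of the dictionary.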
Accomplishments that we're proud of
We are very proud to have made a project that uses machine learning to solve a real-world problem, and we all believe that what our project does, which is analogous to a Ctrl+F search for videos, is very cool and exciting.
What we learned
We learned a lot about working with version control software alongside teammates, as well as managing a project with others. We also learned how to research which APIs are available online for implementing program features.
What's next for AutoNote
We believe that publishing AutoNote online, and tackling more complicated machine learning recognition tasks like diagram recognition and simplification, would be exciting and interesting directions for the future of our project.