This app was inspired by a desire to create a more realistic online learning environment that benefits both teachers and students. It monitors students during online lectures and determines, from their facial reactions, when they are paying attention. With this information, teachers can identify and review the sections of a lecture that lost most students' attention, and reiterate those concepts to ultimately benefit the students.
What it does
Uses computer vision and machine learning to monitor and promote student engagement in an online classroom.
How we built it
- User Interface: React.js
- Infrastructure: Google Cloud Platform, Node.js, Flask
- Analytics: R, Python
- Model: Scikit-learn
Challenges we ran into and approaches
- Defining methods of detecting student attention
- Emotion classification
- Bounding boxes around eyes
- Head orientation diagnosis
- Designing server/client architecture
- Integrating with browser extension
- Using websockets
- RESTful API
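One piece of the server/client architecture above, a RESTful endpoint the browser extension can poll for live attention data, can be sketched with the Python standard library. This is a minimal illustration, not the actual app: the route, payload schema, and student IDs are hypothetical, and the real stack used Flask/Node.js rather than `http.server`.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical in-memory store of per-student attention scores.
scores = {"student_1": 0.87}

class AttentionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(scores).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # CORS header so a browser extension on another origin may read the response.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), AttentionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: fetch the current scores as a browser extension would.
with urlopen(f"http://127.0.0.1:{server.server_port}/scores") as resp:
    data = json.loads(resp.read())
    cors = resp.headers["Access-Control-Allow-Origin"]

server.shutdown()
```

Polling over plain HTTP like this is the simplest option; the websocket approach mentioned above replaces the repeated GETs with a single persistent connection that the server pushes updates through.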
Accomplishments that we're proud of
- Training a model on a combination of labeled crowdsourced data and data we labeled ourselves, achieving approximately 89% out-of-sample accuracy (estimated via leave-one-out cross-validation)
- Handling and updating live data in real time over HTTP in the browser
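The leave-one-out evaluation above can be sketched with scikit-learn. The features, labels, and classifier below are synthetic placeholders (the real facial-feature dataset and model are not shown); only the cross-validation setup reflects the method described.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 40, 5
X = rng.normal(size=(n_samples, n_features))              # placeholder facial features
y = (X[:, 0] + 0.1 * rng.normal(size=n_samples) > 0)      # placeholder attentive / not-attentive labels
y = y.astype(int)

# Leave-one-out: train on n-1 samples, test on the single held-out
# sample, and repeat n times, one fold per sample.
loo = LeaveOneOut()
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=loo)

# The out-of-sample accuracy estimate is the mean of the per-fold 0/1 scores.
print(f"LOO accuracy estimate: {scores.mean():.2f}")
```

Leave-one-out is a reasonable choice for the small hand-labeled dataset described here, since it uses nearly all of the data for training in every fold; for larger datasets, k-fold cross-validation is the cheaper standard alternative.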
What we learned
- Collaboration as a team of engineers with different technical backgrounds
- How to use cross-origin resource sharing
- Calling the Google Cloud Vision and Compute Engine APIs
- Server and client networking
What's next for Oculearn
We envisioned Oculearn as an academic platform that helps teachers provide an optimized learning experience for their students. To that end, we would add the ability to stream a screen-sharing feed in sync with live audio. In addition, we would add data retention with MongoDB to enable data analytics and continuous online learning.
Thank you to @e-drishti for providing a labeled dataset.