E-learning is becoming widely popular, with platforms such as Coursera and edX offering many online courses. However, since these courses target a broad general audience, some learners find them harder to understand and follow. We wanted to close this loop by monitoring how well each student follows the material and using that feedback to dynamically adapt the content to their pace of understanding.
Also, since the majority of study time is spent in front of a computer, health problems such as headaches and eye strain develop easily. We also wanted to watch for such conditions and warn students accordingly.
What it does
The system monitors the user's facial expressions via the laptop's webcam and analyzes their emotion. From the emotion, it infers whether the person is following the course content well and whether they find it interesting. This feedback then decides which upcoming topics are elaborated in more detail and which are skipped. If a student seems stuck on some material, a friendly pop-up asks whether they need help, and hints and a more detailed explanation are loaded accordingly. The content is therefore personalized and paced to match the user's learning curve. The same approach could be applied in other domains to gauge how users respond to web content.

The system also monitors the user's blink rate to detect eye strain and sleepiness, and identifies yawning. A threshold applied to a combination of these parameters determines whether the user is tired, sleep-deprived, or stressed, in which case the system suggests taking a break. Long stretches of continuous screen time are noted as well, and warnings prompt the user to pause. In effect, the system enforces the popular "Pomodoro Technique" of interleaving fixed intervals of work and breaks to maintain efficiency.
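The fatigue decision described above can be sketched as a simple scoring rule. This is an illustrative reconstruction, not EduSmart's actual code: the function name, the individual signals, and all threshold values here are assumptions.

```python
# Hypothetical fatigue check: combines blink rate, yawn count, and time
# since the last break. All thresholds are illustrative placeholders,
# not the tuned values used in the actual system.

def is_fatigued(blinks_per_minute: float, yawns_per_10min: int,
                minutes_since_break: int) -> bool:
    """Return True when enough fatigue signals cross their thresholds."""
    score = 0
    if blinks_per_minute > 25:    # elevated blink rate suggests eye strain
        score += 1
    if yawns_per_10min >= 3:      # frequent yawning suggests sleepiness
        score += 1
    if minutes_since_break > 25:  # Pomodoro-style work interval exceeded
        score += 1
    return score >= 2             # require two or more signals to fire
```

Requiring two of three signals keeps a single noisy measurement (say, one missed blink detection) from triggering a break prompt on its own.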
How we built it
The system is built in two parts. A machine learning pipeline performs face detection, recognizes facial features, and infers emotions from them, as well as identifying blinks and the other parameters mentioned above. A cloud backend takes this feedback and, combined with the amount of time the user spends on each piece of content, decides whether the user is comfortable with the learning pace and changes the content dynamically. We use Google Text-to-Speech to generate loud alerts that wake people up in the middle of study sessions and to provide interactive feedback and warnings.
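One common way to detect blinks from facial landmarks is the eye aspect ratio (EAR): the ratio of the eye's vertical to horizontal extent, which drops toward zero when the eye closes. The sketch below assumes this technique and a landmark model that yields the six standard eye points; EduSmart's exact method may differ, and the 0.2 threshold is an illustrative value.

```python
from math import dist  # Euclidean distance, Python 3.8+

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR for six eye landmarks: p1/p4 are the corners, p2/p3 the upper
    lid, p6/p5 the lower lid. Open eyes sit near 0.3; closed near 0."""
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

EAR_CLOSED = 0.2  # illustrative threshold; tuned per user in practice

def is_blinking(landmarks):
    """landmarks: sequence of six (x, y) eye points from a landmark model."""
    return eye_aspect_ratio(*landmarks) < EAR_CLOSED
```

Counting frames where `is_blinking` holds, per unit time, gives the blink rate that the fatigue logic consumes.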
Challenges we ran into
Setting up and learning OpenCV was challenging. The machine learning algorithm for detecting facial expressions was hard to build and initially fell short of the required accuracy; we had a tough time extracting and processing the required features. System integration was also challenging: wiring up the dynamic content changes and synchronizing them with the incoming feedback took a while.
Accomplishments that we're proud of
Putting together a project with tools like OpenCV and machine learning algorithms that we had never dealt with before was extremely satisfying.
What we learned
We were new to face recognition technology, especially OpenCV, and learned a lot about it.
What's next for EduSmart
We plan to incorporate more detailed analytics, such as stress levels. We would also like to generalize the system to obtain emotional feedback for any website the user visits and generate a real-time stress-level graph. This would pave the way toward personalized web content that is more relevant and useful to each user.