Inspiration
Professors in large lecture halls refer to students in many different ways: the gentleman in the black hoodie, the lady in green, "you over there," or "you in the back." Sometimes they simply point at students. In short, they use every variation except the student's own name. These methods of referring to students can be ambiguous, especially if more than one person raising a hand in the same area matches the description. More importantly, this undermines the student-professor relationship, which can often be the difference between a student seeking help or not.
What it does
MNINH detects students raising their hands in class. Through face recognition technology, MNINH displays the name and location of the specific student who has a question or is willing to answer one, so the professor can call on that student by name.
How we built it
- Trained Azure with the profile database under a specific person group ID (see the code sketches after this list)
- Instantiated a video capture with OpenCV
- Detected hands and fingertips frame by frame
- If a raised hand was found, isolated the associated face
- Compared the face against our database of enrolled students via the trained Azure model
- Displayed the student's name, clearly matching the text with the face
- Removed the student's name from the screen once the student lowered his or her hand
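The enrollment step underpins everything else. Below is a minimal sketch of how training Azure with the profile database could look, assuming the azure-cognitiveservices-vision-face Python SDK; the endpoint, key, group ID, student name, and image paths are placeholders for illustration, not our actual values.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
KEY = "<your-face-api-key>"                                     # placeholder
GROUP_ID = "mninh-students"  # the person group ID mentioned above

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Create the person group once, then add one person per enrolled student.
face_client.person_group.create(person_group_id=GROUP_ID, name="MNINH students")

def enroll(name, image_paths):
    """Register one student and attach their profile photos."""
    person = face_client.person_group_person.create(GROUP_ID, name=name)
    for path in image_paths:
        with open(path, "rb") as image:
            face_client.person_group_person.add_face_from_stream(
                GROUP_ID, person.person_id, image)
    return person

enroll("Jane Doe", ["profiles/jane_1.jpg", "profiles/jane_2.jpg"])  # placeholder data

# Kick off training; identification calls only work once training completes.
face_client.person_group.train(GROUP_ID)
```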
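And here is a hedged sketch of the per-frame loop: capture with OpenCV, check for a raised hand, then identify the face against the trained person group and overlay the name. It reuses `face_client` and `GROUP_ID` from the enrollment sketch above, and `hand_is_raised()` is a hypothetical stand-in for our Handy-based detector, not a real library call.

```python
import io
import cv2

def hand_is_raised(frame):
    # Hypothetical stand-in for the Handy-based hand/fingertip detector;
    # always returning False keeps the sketch runnable without it.
    return False

def identify_name(frame):
    """Send one frame to Azure and return the best-matching student's name."""
    ok, jpg = cv2.imencode(".jpg", frame)
    if not ok:
        return None
    faces = face_client.face.detect_with_stream(io.BytesIO(jpg.tobytes()))
    if not faces:
        return None
    results = face_client.face.identify(
        face_ids=[faces[0].face_id], person_group_id=GROUP_ID)
    if results and results[0].candidates:
        person = face_client.person_group_person.get(
            GROUP_ID, results[0].candidates[0].person_id)
        return person.name
    return None

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break
    if hand_is_raised(frame):
        name = identify_name(frame)
        if name:
            # Overlay the student's name near the top of the frame in green.
            cv2.putText(frame, name, (30, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("MNINH", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```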
Challenges we ran into
- None of us had any previous background in computer vision, much less Azure or OpenCV
- Installing and configuring all of the required software and libraries
- The Handy library would frequently misidentify faces as hands
- Consistently associating a raised hand with the correct student's face
- Webcam quality and lighting were substandard in many campus locations
- Selecting a student fairly meant accounting for the diversity of our university's student body
Accomplishments that we're proud of
- Successfully trained Azure with profile pictures for personal identification
- Integrated the Azure Face API to identify students' names in arbitrary images
- Integrated OpenCV for hand-movement recognition from a webcam feed
- Correctly identified an individual raising his or her hand with minimal latency
- Successfully displayed the name of the individual who has a question on the screen
What we learned
- How to integrate Azure's Face API
- How to work with OpenCV
What's next for My name is not HEY
- Automatically pulling the images from the Andrew database
- Dealing with multiple rows of people
- Supporting stadium-style seating for larger rooms
- We plan on adding visual cues to the video stream. These include a yellow box around the student who is raising his or her hand, which will address the issue of the professor being unable to see students in the back raising their hands. Another visual cue would inform the professor how long a student's hand has been raised by shading that box from green to red: green indicates the student has just raised a hand, while red means the student has been waiting for a while (see the sketch below). Furthermore, the box will display an emoji next to it that captures the student's degree of confusion; to do this, we will train Azure with data sets of confused students' faces. In addition, we plan to train Azure with all students' pictures from SIO, which will require ID Services' permission. Finally, the app will track students' participation by assigning a number next to each face: the number of times that student has raised a hand in the past week.
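As one illustration of the planned wait-time cue, here is a small sketch assuming OpenCV. The `draw_wait_box` helper, the 60-second `max_wait`, and the box coordinates are hypothetical choices for this example, not settled design decisions.

```python
import time
import cv2

def wait_color(raised_at, max_wait=60.0):
    """Fade from green (just raised) to red (waiting a while); BGR order."""
    t = min((time.time() - raised_at) / max_wait, 1.0)
    return (0, int(255 * (1 - t)), int(255 * t))

def draw_wait_box(frame, box, raised_at):
    """Draw the colored cue box around a student's detected position."""
    x, y, w, h = box  # bounding box in pixels
    cv2.rectangle(frame, (x, y), (x + w, y + h), wait_color(raised_at), 3)
```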
