Fully Automatic Multi-Lingual Event Check-in Software
Seeing the check-in line for Big Red Hacks after our bus arrived convinced us that there should be a better way to check in many people for an event, without needing more than two staff members to do so.
What it does
By using facial recognition and optical character recognition (OCR), FAMLECS automates and streamlines the check-in process. Both text-to-speech and speech-to-text are also implemented, so attendees can check themselves in without ever tapping, clicking, or pressing a button.
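The core matching step described above can be sketched in plain Python. This is a hypothetical illustration, not FAMLECS's actual code: it assumes faces are reduced to fixed-length numeric encodings (as libraries such as face_recognition produce) and that an attendee is checked in when their encoding falls within a distance threshold of a registered one. The names, vectors, and 0.6 cutoff are illustrative.

```python
import math

# Illustrative sketch of a face-based check-in match. A captured face
# encoding is compared against pre-registered attendee encodings by
# Euclidean distance; the closest attendee under the threshold wins.
MATCH_THRESHOLD = 0.6  # assumed cutoff; real systems calibrate this

def euclidean(a, b):
    """Euclidean distance between two equal-length encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def check_in(encoding, registered):
    """Return the best-matching attendee name, or None if nobody
    is within MATCH_THRESHOLD of the captured encoding."""
    best_name, best_dist = None, MATCH_THRESHOLD
    for name, ref_encoding in registered.items():
        dist = euclidean(encoding, ref_encoding)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Example: toy 3-dimensional "encodings" (real ones are much longer).
registered = {"Ada": [0.0, 0.0, 0.0], "Grace": [1.0, 1.0, 1.0]}
print(check_in([0.1, 0.0, 0.1], registered))  # matches "Ada"
print(check_in([5.0, 5.0, 5.0], registered))  # no match -> None
```

In a real deployment the encodings would come from a camera frame, and an OCR pass over the ID card could confirm the match before greeting the attendee aloud.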
How we built it
The entire system is built in Python. We used the pygame library to display the infographics alongside the spoken instructions; the infographics mostly support the multi-lingual features built into FAMLECS.
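One way the multi-lingual support could be organized is a simple lookup table keyed by message id and language, with an English fallback. The message ids, strings, and language codes below are assumptions for illustration, not FAMLECS's actual prompts.

```python
# Hypothetical multi-lingual prompt table: each spoken instruction is
# keyed by a message id and a language code, falling back to English
# when a translation is missing.
PROMPTS = {
    "greet": {
        "en": "Welcome! Please look at the camera.",
        "es": "¡Bienvenido! Por favor, mire a la cámara.",
    },
    "show_id": {
        "en": "Please hold up your ID card.",
    },
}

def prompt(message_id, lang="en"):
    """Return the prompt for message_id in lang, or English if the
    requested translation does not exist."""
    translations = PROMPTS[message_id]
    return translations.get(lang, translations["en"])

print(prompt("greet", "es"))     # Spanish greeting
print(prompt("show_id", "es"))   # falls back to English
```

The same table could drive both the text-to-speech output and the on-screen infographic captions, keeping every language in sync.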
Challenges we ran into
Most of the challenges we ran into were quickly resolved; they involved camera calibration for the facial recognition and OCR, graphics being presented in the wrong order, and the like. The remaining challenges stemmed from a lack of documentation and support for our tools, and those were also quickly resolved.
Accomplishments that we're proud of
We’re proud to have a completely finished and lightly polished version of FAMLECS after about 25 hours of coding. This came after 8 hours spent on an entirely different project, when we realized we would not have enough time to train a deep learning model. We scrapped our original code and, after a few more hours of deliberating, settled on the initial idea for FAMLECS.
What's next for FAMLECS
We’re excited to see how FAMLECS can impact our community by helping at events where check-in is necessary. Our ultimate goal with this software is to free up staff members so they can shift their focus to other aspects of the event. Many improvements can still be made to FAMLECS, including better calibration of its face and ID card matching thresholds. We could also speed up the process by reducing the delay between each of its call-and-response questions.
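The threshold calibration mentioned above could be approached empirically. As a hedged sketch (not the team's actual plan): collect match distances for known same-person pairs and known different-person pairs, then pick the cutoff that misclassifies the fewest samples. All numbers below are made up for illustration.

```python
# Illustrative threshold calibration: given distances measured for
# known same-person pairs and known different-person pairs, choose the
# cutoff that misclassifies the fewest samples. A distance below the
# threshold counts as a match.
def calibrate_threshold(same_pairs, diff_pairs):
    """Return the candidate threshold with the fewest total errors."""
    candidates = sorted(set(same_pairs) | set(diff_pairs))
    best_t, best_errors = None, float("inf")
    for t in candidates:
        errors = sum(d >= t for d in same_pairs)   # missed matches
        errors += sum(d < t for d in diff_pairs)   # false accepts
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Example with made-up measurements: same-person distances cluster low,
# different-person distances cluster high.
threshold = calibrate_threshold([0.2, 0.3, 0.35], [0.7, 0.8, 0.9])
print(threshold)  # -> 0.7 (separates the two clusters perfectly)
```

The same procedure would work for the ID-card OCR confidence cutoff, using OCR confidence scores in place of face distances.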