Note to Judges: This project was also submitted to ExploreHacks
⭐ Inspiration ⭐
As we all know, mask wearing has been a crucial part of fighting the spread of COVID-19 in public areas since the beginning of the pandemic. To ensure that this is upheld, most stores have employed staff to watch for people wearing masks incorrectly. This is why we developed Mask-Pass, an AI-powered door system that ensures everyone entering has their mask properly fitted.
We determined that a properly trained AI is fully capable of handling this task in an efficient, practical manner. Due to the nature of this product, a simple 3-second check suffices to make sure that patrons are COVID safe.
The end result is a system that recognizes a customer’s mask status (no mask, mask on, and states in between). Expanding on this, we realized we could create an entrance that acts as an automatic reminder for people to wear their masks, or, in our case, grants entry only to those who are COVID safe.
⭐ What it does ⭐
Our project uses an AI trained on thousands of images of people wearing masks; through this, it is able to accurately differentiate between those who are properly wearing masks and those who aren’t.
A live video stream is transferred in real time from a remote kiosk to the AI server, which then processes and categorizes the input. Depending on whether the mask is properly worn, a command is sent back to the kiosk / control server to allow the door to be opened.
After this differentiation, the output is transmitted through microprocessors to a VEX system that opens the door depending on whether the person is wearing a mask. The door simulates the entrance of a shopping mall or any similar public building.
In the end, our system allows conditional entry based on whether the patron is fully COVID safe, without the intervention of an employee. This project has numerous modern-day urban applications, and we believe a more developed version of our product could provide great service to public areas around the world.
⭐ How we built it ⭐
AI Server Github
For our AI pipeline, we used TensorFlow and Keras to train a model on a dataset consisting of tens of thousands of images. The model is built on the MobileNet architecture, which gives us high-speed, real-time analysis of the incoming video. The images varied in content, allowing the AI to recognize different states of mask wearing against many different backgrounds.
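To give a sense of the post-processing step, here is a minimal sketch of how the model's softmax output might be mapped to a mask status. The class names, their order, and the confidence floor are illustrative assumptions; the real values live in the AI Server repo.

```python
# Map a model's softmax output to a mask status.
# Class order and the 0.5 confidence floor are illustrative assumptions.
CLASSES = ["mask", "no_mask", "incorrect"]

def mask_status(probabilities, threshold=0.5):
    """Return the predicted mask state, or 'uncertain' below the threshold."""
    best = max(range(len(CLASSES)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "uncertain"
    return CLASSES[best]
```

In the real pipeline, `probabilities` would come from one `model.predict` call per video frame.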
Control Server / Live Kiosk
Control Server Github
Live Kiosk Github
The control server and live kiosk are built on top of a Flask backend and serve the purpose of controlling, managing, and integrating the AI server, camera server, and Arduino Control Bridge. The live kiosk is the main interaction point between the patron and our system, sending live video feeds to the AI processing pipeline and returning the detection result to the end user. With the control platform, the AI server can be remotely controlled, allowing kiosks to be deployed anywhere and on any platform (Linux, Windows, Raspberry Pi, etc.).
The video stream between the AI server and the control server / live kiosk is implemented using two bi-directional MJPG (Motion JPEG) video servers. The video is compressed and sent over the internet to both the client and the AI server.
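As a sketch of what an MJPG server emits, the snippet below builds the `multipart/x-mixed-replace` framing that wraps each JPEG frame. The boundary string is arbitrary, and `mjpg_stream` is the kind of generator a Flask route could return; neither name is from our codebase.

```python
# Wrap successive JPEG frames in the multipart framing an MJPG stream uses.
BOUNDARY = b"frame"

def mjpg_part(jpeg_bytes):
    """Build one part of an MJPG (Motion JPEG) stream for a single frame."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

def mjpg_stream(frames):
    """Yield one part per frame; a web server sends these back-to-back."""
    for frame in frames:
        yield mjpg_part(frame)
```

The client (a browser or OpenCV) re-renders each part as it arrives, which is what makes the stream appear live.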
Arduino Control Bridge
Arduino Transmitter Code
Arduino Receiver Code
Once the desktop app detects a person properly wearing a mask and trying to enter, it sends the connected Arduino Uno a serial message over a USB connection. The message holds a 1, which corresponds to the “close door” command, or a 0, which corresponds to the “open door” command. From there, the Arduino Uno uses an NRF24L01 radio transceiver module to transmit a radio message containing the same 1 or 0.

On the door mechanism, an Arduino Nano with its own NRF24L01 module receives this radio message. The Arduino Nano uses a digital output pin to control an NPN transistor switch (you can view the schematics above). This transistor switch is required because there is no direct way to interface the Arduino microcontrollers with the VEX V5 brain (motor controller). However, the VEX V5 brain can attach a limit switch using a 3-pin cable; when the switch is pressed, current flows between the top and bottom wires of that cable. So instead, we connect three jumper wires to the VEX V5 brain and attach the top and bottom wires to the side contacts of the NPN transistor. The result is that the Arduino Nano controls the current flow between these two wires, and the VEX V5 brain registers it as a switch being pressed and released.
From there, once the VEX V5 brain senses that the switch has been pressed, it opens the door by driving the attached motor; when it senses the switch is no longer pressed, the brain closes the door.
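The chain above can be sketched as a few small functions, assuming the 1/0 mapping just described (0 opens the door, 1 closes it). The function names are illustrative, and the serial and radio hops are collapsed into plain values; the real code is in the transmitter/receiver repos linked above.

```python
# Encode the door command as described above: 0 opens the door, 1 closes it.
OPEN, CLOSE = b"0", b"1"

def door_command(mask_ok):
    """Command byte the desktop app writes to the Arduino Uno over USB serial."""
    return OPEN if mask_ok else CLOSE

def v5_door_state(switch_pressed):
    """The VEX V5 brain's view: a pressed limit switch drives the door open."""
    return "open" if switch_pressed else "closed"

def relay(mask_ok):
    """End-to-end: command byte -> radio bit -> NPN switch -> V5 door state."""
    bit = door_command(mask_ok) == OPEN  # Nano turns the transistor on for '0'
    return v5_door_state(switch_pressed=bit)
```

In the actual system the first hop would be a pyserial write to the Uno rather than a direct function call.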
A door prototype was built using the latest VEX robotics parts in conjunction with wireless Arduino technology to control the opening of the door. It is driven by a custom-written control panel that toggles the various parts of the system, along with a video stream that confirms to the user (the person on camera) whether their mask is worn correctly.
⭐ How it comes together ⭐
To actually activate the door, a live video feed is connected to the AI through a custom-built API that lets a camera continually analyze frames to determine whether a person is present and whether their mask is worn properly. If the person is wearing their mask properly, the device responds by opening the door and granting entry.
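The gating logic behind this can be sketched as a debounce over per-frame detections, tying in the 3-second check mentioned earlier. The frame rate, window length, and label string are illustrative assumptions, not values from our code.

```python
def should_open(detections, fps=10, window_s=3):
    """Open only if every frame in the trailing window reports a proper mask.

    `detections` is the per-frame label history, newest last; fps and
    window_s are assumed values standing in for the real configuration.
    """
    needed = fps * window_s
    if len(detections) < needed:
        return False  # not enough history to cover the check window yet
    return all(label == "mask" for label in detections[-needed:])
```

Requiring the whole window to agree keeps a single misclassified frame, or a mask pulled up at the last moment, from opening the door.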
⭐ Challenges we ran into ⭐
Video Stream Optimization
A challenge we faced was the optimization of the video stream over the internet due to issues such as stream quality and latency. These were major roadblocks that had to be overcome in order to create a solid, high-quality product.
Hardware-Software Integration
Another challenge was establishing a connection between the hardware (VEX, Arduinos) and the software: a daunting task, as each component had its own mix of hardware and software issues to resolve. All components of our product had to work in tandem to function efficiently.
Application-Video Stream Connection
Finally, connecting the video stream to the PyQt5 application through OpenCV was also quite tough, simply due to the depth of the problem and how difficult the process was to learn.
⭐ Accomplishments that we're proud of ⭐
A Cloud-Hosted AI Server
Our project, despite involving AI recognition, runs on relatively low-powered computers, because the AI is hosted on a cloud server. This means that should this product ever be applied in the real world, it could be deployed on any device already present in the area, without the need to purchase or upgrade cameras, instruments, or other equipment.
A Sliding Door Recreation
Our mock-up sliding door was also quite a feat, as it was built in a single afternoon out of common household parts and materials, along with a combination of Arduino software and VEX hardware. This accomplishment gave us a live demo of our project.
⭐ What we learned ⭐
We learned that the applications of AI in real-world situations are quite practical, and can be very efficient even on inexpensive infrastructure. Throughout the hackathon, we also learned that making software and hardware work hand in hand was much harder than we first expected. We learned how to create and train AI models on large datasets, mainly through the use of TensorFlow. Finally, we grasped how to develop desktop applications through the use of PyQt5 and QML.
⭐ What's next for Mask-Pass ⭐
In all, Mask-Pass has a lot of future potential. One thing we could have done with more time is use a better dataset for mask recognition, as well as a more fully fledged AI system. On a larger scale, one addition we wanted to make was checking vaccine status on top of mask usage. Finally, moving on, we believe we can implement numerous other checks to let this system become an all-around anti-COVID multi-tool. Extra checks may include temperature sensors, symptom confirmations, and even possible biometrics or password systems for security.