Inspiration

Essential workers keep restaurants, grocery stores, and travel hubs such as airports and train stations running. Some of their tasks include manually screening customers entering trains and airports and checking whether they are wearing masks properly. However, there have been frequent protests over these workers' safety: they are exposed to COVID-19 for prolonged periods and are sometimes even harassed by people opposed to wearing masks. Hence, we wanted to find a solution that would shield as many workers as possible from these dangers. Additionally, we wanted to accomplish this goal in an environmentally friendly way, in both our final design and our process.

What it does

This project provides an autonomous alternative to the manual inspection of masks: computer vision detects whether a user is wearing a mask properly, improperly, or not at all. A camera records the user's face, and a trained machine learning model classifies the mask usage. To conserve energy and help the environment, an infrared sensor detects whether anyone is nearby and shuts off the program and other hardware when no one is. If the mask is worn correctly, a green LED shines; if it is worn incorrectly, a red LED shines and a buzzer sounds. Additionally, if the user is not wearing a mask at all, the mask dispenser automatically activates to dispense a mask into the user's hands.
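The response logic above can be sketched as a simple mapping from the detection result to hardware actions. This is an illustrative sketch, not our actual code; the label strings and the function name are assumptions, and real GPIO calls are omitted.

```python
def respond_to_detection(label: str) -> dict:
    """Map a mask-detection label to the hardware actions described above.

    The labels here ("mask_ok", "mask_wrong", "no_mask") are illustrative;
    in a real build, each True flag would drive a GPIO pin on the Pico.
    """
    actions = {"green_led": False, "red_led": False, "buzzer": False, "dispense": False}
    if label == "mask_ok":
        actions["green_led"] = True            # mask worn correctly
    elif label == "mask_wrong":
        actions["red_led"] = True              # mask worn incorrectly
        actions["buzzer"] = True
    else:                                      # no mask detected
        actions["red_led"] = True
        actions["buzzer"] = True
        actions["dispense"] = True             # activate the mask dispenser
    return actions
```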

How we built it

This project can be divided into two phases: the machine learning part and the physical hardware part. For the machine learning, we trained a YOLOv5 model with PyTorch to detect whether users are wearing a mask, using a dataset of over 3,000 images as training data. We then ran the model on the computer's camera feed, classifying the live video into three categories, each with a confidence score from 0 to 100%. The physical hardware part consists of the infrared sensor that gates the ML pipeline, plus the sensors and motors that act on the ML result. Both the sensors and motors were connected to a Raspberry Pi Pico microcontroller and controlled remotely from the computer. MicroPython (RP2040) and Python were used to read the signal inputs, relay signals between the Pico and the computer, and drive the sensor and motor outputs once the ML code returned a result. 3D-modelled parts were used alongside repurposed recyclables to build the outer casing of our design.
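The per-frame classification step can be sketched as follows. This is a simplified, hedged illustration: it assumes the YOLOv5 output has already been parsed into (label, confidence) pairs, and the label names and confidence threshold are placeholders, not values from our actual training run.

```python
def classify_frame(detections, threshold=0.5):
    """Pick the highest-confidence mask detection from one video frame.

    `detections` is a list of (label, confidence) pairs, e.g. as parsed
    from YOLOv5 inference output. Detections below `threshold` are
    discarded; returns None if nothing confident remains.
    """
    candidates = [(label, conf) for label, conf in detections if conf >= threshold]
    if not candidates:
        return None  # no confident detection in this frame
    # Keep only the single most confident detection's label.
    return max(candidates, key=lambda d: d[1])[0]
```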

Challenges we ran into

The main challenge the team ran into was finding a reliable way to relay signals between the Raspberry Pi Pico and the computer running the ML program. Originally, we thought it would be possible to transfer information between the two systems through intermediary text files, but it turned out that the Pico was unable to manipulate files outside of its directory. Our subsequent idea of importing the Pico .py file into the computer failed as well. Thus, we had to implement a USB serial connection to remotely modify the Pico script from within the computer.
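The kind of host-to-Pico message passing this serial link enables can be sketched as a tiny command protocol. The command bytes and their meanings below are assumptions for illustration, not our actual protocol; on the host side, the framed bytes would be written to the Pico with a library such as pyserial (`Serial.write`), which is omitted here to keep the sketch self-contained.

```python
# Hypothetical one-byte, newline-terminated commands sent over USB serial.
COMMANDS = {
    "mask_ok":    b"G\n",  # light the green LED
    "mask_wrong": b"R\n",  # light the red LED and sound the buzzer
    "no_mask":    b"D\n",  # red LED + buzzer + dispense a mask
}

def host_message(ml_result: str) -> bytes:
    """Translate an ML classification into a framed serial command (host side)."""
    return COMMANDS[ml_result]

def pico_parse(line: bytes) -> str:
    """Reverse the mapping, as the Pico-side MicroPython script would."""
    for result, cmd in COMMANDS.items():
        if cmd == line:
            return result
    raise ValueError("unknown command: %r" % line)
```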

Additionally, the wiring of the hardware components proved to be a challenge, since caution must be exercised to prevent the project model from overheating. In many cases, this meant using resistors when wiring the sensors and motor to the breadboard. In essence, we had to test our module carefully, watching for functional abnormalities and rising temperatures (which did happen once or twice!).
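Sizing those resistors comes down to Ohm's law. As a hedged example (the voltages and target current below are typical textbook values, not measurements from our build), the series resistor for an LED can be estimated as:

```python
def led_resistor_ohms(supply_v: float, led_forward_v: float, current_a: float) -> float:
    """Minimum series resistance for an LED: R = (V_supply - V_forward) / I.

    Example values are illustrative: a red LED (~2.0 V forward drop)
    driven from a 3.3 V GPIO pin at 10 mA needs (3.3 - 2.0) / 0.010 = 130 ohms,
    so the next standard value up would be chosen in practice.
    """
    return (supply_v - led_forward_v) / current_a
```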

Accomplishments that we're proud of

Many of us had only ever coded hardware or software separately, whether in classes or in other activities. The integration of the Pi Pico with the machine learning software therefore proved to be a veritable challenge, since none of us were comfortable with it. With the help of mentors, we managed to combine our hardware and software skills into a coherent product with a tangible purpose, and we are proud of that. We are even more impressed that this was all learned and done in a short span of 24 hours.

What we learned

From this project, we primarily learned how to integrate complex software such as machine learning with hardware in a single connected device. Since our team was new to hackathons that combine software and hardware, building the project was also a learning experience, offering a glimpse of how disciplines that merge the two, such as robotics, function in real life. Additionally, we learned how to apply classroom material to a real-world application: a good amount of the information used in this project came from taught material, and it was satisfying to see the importance of these concepts firsthand.

What's next for AutoMask

Ideally, we would introduce our physical prototype into the real world to realize our initial ambitions for this device. To do so successfully, we must first refine our model's decision thresholds so that false positives and especially false negatives are minimized. Hence, a local deployment would be our first move, both to obtain preliminary field results and to expand the training set for future calibrations; for example, we could install the device at a small train station or a bus stop to test it in a controlled manner. Currently, AutoMask's low-fidelity prototype fits only one specific type of mask dispenser. Our future goal is to make our model flexible enough to fit a variety of dispensers in a variety of situations. Thus, we must also refine our physical hardware to be industrially acceptable and mass-producible, covering the large range of applications this device potentially has. We want to accomplish this while maintaining our ecologically friendly approach by continuing to use recycled and recyclable components.
