Team Captain: Pranav Teegavarapu
Team members:
- Pranav Teegavarapu
- Benjamin Smith
Abstract of the project
MyMask is a web application where a user uploads a picture of their face and receives a mask frame personalized for them. We chose to focus on mask frames because they uniquely solve the issues people face with conventional face masks: they create a snugger fit, make the mask more durable, and further reduce the risk of exposure to COVID-19. For a mask frame to work, it must fit one's face very well; otherwise, it does more harm than good (it makes the wearer extremely uncomfortable). To achieve this level of accuracy, we applied state-of-the-art computer vision models to perform facial landmarking, allowing us to match each user with a mask frame that fits.
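The final matching step can be sketched in plain Python. This is a hypothetical illustration, not our exact code: the size chart values and the `closest_frame` helper are made up, and in the real backend the face measurements come from the dlib facial landmarks rather than being passed in directly.

```python
# Hypothetical size chart: frame name -> (width_mm, height_mm).
# These numbers are illustrative, not real product dimensions.
FRAME_SIZES = {
    "small": (120, 105),
    "medium": (135, 118),
    "large": (150, 130),
}

def closest_frame(face_width_mm, face_height_mm):
    """Return the frame whose dimensions best match the measured face.

    Uses squared Euclidean distance between the face measurements
    (derived from facial landmarks) and each frame's dimensions.
    """
    def distance(size):
        w, h = FRAME_SIZES[size]
        return (w - face_width_mm) ** 2 + (h - face_height_mm) ** 2
    return min(FRAME_SIZES, key=distance)

print(closest_frame(133, 117))  # -> medium
```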
The hackathon category
Day-to-day PPE: We modified the typical design of a face mask to maximize comfort and wearability (by making it a snugger fit), and we address current issues in supplying (and delivering) 3D printed mask frames to those who need them. Our product costs less than a third as much to produce as a typical 3D printed face mask.
Tools used to build the project
The frontend of our web app is built in HTML/CSS/JS, and we used Mobirise to give us a basic template for how we wanted it to look. Our backend (3D model selection and facial landmarking) was written in Python, using OpenCV and dlib for computer vision. We also used the Flask framework to create a REST API, which connects our frontend and backend.
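A minimal sketch of what such a Flask endpoint could look like. The route name `/api/match`, the `image` form field, and the placeholder response are all assumptions for illustration; in the real backend, the uploaded image would be run through the landmarking and frame-selection code at the marked step.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/match", methods=["POST"])
def match_frame():
    """Accept an uploaded face photo and return a frame recommendation."""
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    # Here the actual backend would run facial landmarking on the
    # uploaded image; this sketch returns a fixed placeholder result.
    return jsonify({"frame": "medium"})
```

The frontend then only needs a single `POST` with the photo as multipart form data, which keeps the two halves of the app fully decoupled.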
What inspired us
Earlier this week, I saw a news article about the concept of 3D printed mask frames. I was really intrigued by this potential solution to the global PPE shortage, and I wanted to use this opportunity to explore their potential in "upgrading PPE" - and I was able to do so!
Challenges we ran into:
- We initially wanted to use Volumetric Regression Networks (VRNs) to reconstruct a 3D model of one's face and programmatically create a mask frame for that model. We built on Microsoft Research's implementation of a VRN and successfully created an API that converted an image into a 3D model. However, we were unable to process this in our code, as the 3D model was actually a mesh of points with no thickness (like a sheet of paper folded into the shape of a face: while it looked like a 3D model, we couldn't programmatically create a mask frame from it). We spent over a day dealing with this, and given the limits of the technologies we were using, we ended up switching to facial landmarking.
- Our API initially had a latency of close to a minute, most of it spent processing the image. In the end, we cut this time significantly by resizing the image and converting it to grayscale before landmarking.
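The preprocessing idea behind that speedup can be sketched in NumPy. This is an illustration of why it works (fewer values for the landmarking step to process), not our actual code; in the real backend the equivalent steps are OpenCV's `cv2.resize` and `cv2.cvtColor(..., cv2.COLOR_BGR2GRAY)`, and the scale factor here is made up.

```python
import numpy as np

def preprocess(image, scale=4):
    """Crudely downscale by striding, then average channels to grayscale."""
    small = image[::scale, ::scale]   # keep every `scale`-th pixel
    gray = small.mean(axis=2)         # collapse the 3 color channels to 1
    return gray

image = np.zeros((1600, 1200, 3), dtype=np.uint8)  # a large phone upload
gray = preprocess(image)
print(image.size // gray.size)  # -> 48, i.e. 4 * 4 * 3 fewer values
```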
What we're proud of
- We persevered through being unable to implement a VRN, and I'm really proud that we didn't quit after that setback and made it through!
- We built a production-ready web app over a single weekend, and I'm really proud of what we made!
- We're really proud of how we optimized the code behind our API to minimize latency, and I find it awesome that we got it under 10 seconds!
What's Next for MyMask
We plan on deploying our website as soon as possible, and we hope to get feedback on our project from the community (hopefully through this hackathon). From there, we hope to reach out to online communities of 3D printing enthusiasts and encourage them to implement our idea.