We thought the biggest challenge facing our society was Covid transmission rates, and we wanted to let the general population gauge the risk levels of the people around them. According to the WHO, someone who is asymptomatic could still have a rate of transmission as low as 18%.

Because of this, we wanted to build a framework compatible with both computers and phones that analyzes some of the symptoms of the people around the user.

What it does

Our program takes real-time video and sends it to our Node.js backend, where three independent, sequential deep-learning models process it and produce a score. This score is then returned to the front end along with the image (stored in an S3 bucket) and the bounding box.
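To make the flow concrete, here is a hypothetical sketch of the message the backend might send back once the models have scored a frame. The field names are our own illustration, not the exact production schema:

```javascript
// Hypothetical sketch of the backend's response message. Field names
// are illustrative, not the exact schema used in the project.
function buildResponse(scores, s3Url, box) {
  return JSON.stringify({
    scores,            // per-model scores, each in [0, 1]
    imageUrl: s3Url,   // link to the frame stored in the S3 bucket
    boundingBox: box,  // face bounding box in pixel coordinates
  });
}

const msg = buildResponse(
  { eyes: 0.2, face: 0.7, nose: 0.1 },
  "https://example-bucket.s3.amazonaws.com/frame-001.jpg",
  { x: 120, y: 80, width: 96, height: 96 }
);
```

The front end would parse this message and drive the health bars from the `scores` field.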

We determined that the most important visual indicators for Covid were a fever, a runny nose, and red eyes, so we built the scoring around those.

How we built it

The glasses' camera provides the video feed. The front end is built with static HTML pages served by Node.js, which we use to transmit a sample of the video feed to the backend, where we run our classification algorithms.

These algorithms were built using Keras and Python 3.7 and were then ported to the Node.js backend using TensorFlow.js. The characteristics the models check for are:

       Eyes (EYES.h5): Compares the sample eyes to a database of both normal eyes and red eyes

       Face (FACES.h5): Compares the face to a database of both normal faces and diseased faces

       Noses (NOSES.h5): Compares the nose to a database of both normal noses and runny/red noses

These factors are then weighted (0.35 for eyes, 0.5 for face, and 0.15 for nose) to produce a score out of 1. We then display this score using health bars, along with an explanation of the score.
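The weighted combination above can be sketched in a few lines. The weights come straight from the write-up (0.35 eyes, 0.5 face, 0.15 nose); the function name is our own:

```javascript
// Weights as described in the write-up; they sum to 1, so a weighted
// sum of per-model probabilities stays in [0, 1].
const WEIGHTS = { eyes: 0.35, face: 0.5, nose: 0.15 };

// Each model outputs a probability in [0, 1]; the combined risk score
// is their weighted sum.
function covidScore({ eyes, face, nose }) {
  return WEIGHTS.eyes * eyes + WEIGHTS.face * face + WEIGHTS.nose * nose;
}
```

For example, a frame flagged strongly by the face model but not the others would still score around 0.5, reflecting the face model's dominant weight.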

Challenges we ran into

We found that running the deep-learning models we trained was very difficult, as we had to convert a Python framework to JavaScript. Converting an image object from a canvas into a tensor object was also very hard, made harder still by the fact that we had to transmit the images over a websocket.

We also found that setting up the hardware was quite hard, as we couldn't get the server to read the data easily.

What's next for CovAlert

We want to improve the hardware so that it can work over networks. It would also be really interesting to hook it up to a local display so the wearer can see, in real time, the risk levels of the people around them. We would also like to add more classifications and let people set their own weights!
