Our product minimizes ergonomic strain by informing the user how often they are sitting too close to or too far from the screen. It lets the user see in real time how well they are maintaining their distance from the computer screen. Our inspiration for this project was that we wanted to create something that benefits users' physical health, as students are now spending over 8 hours on their computers, and the majority of that time is spent in the wrong position, either with bad posture or with their face too close to the screen.

We built this product mainly with Python, plus some HTML. We imported Python packages such as cv2, flask, and face-recognition. We ran most of the functions from the command prompt and served the HTML file from a localhost server; a rough sketch of this setup is shown below.

The main challenge we ran into was trying to implement user input in order to get a more accurate measurement of the user's face size. Another struggle was getting the Python code to work with the HTML document. At first we tried Brython, a project that attempts to replace JavaScript with Python as the main web development language, but we struggled to integrate the Brython script with our existing Python code and ultimately had to drop the idea. As a result, we lost the ability to adjust the parameters for what counts as too far and too close based on the size of the monitor, and had to settle for a constant average size. In the future, we hope to add user input to get the most accurate reading.

Some accomplishments we are proud of were utilizing the OpenCV library and importing the various packages successfully. Although we faced numerous errors and constant crashes, the code ultimately displayed the live camera feed on the website. We are also proud of the organization of our code: frequent comments and a clear folder structure allowed us to program efficiently and reach the final solution. In the function get_Frame(), we struggled to find the area of the square around the face and convert it to real-world measurements that could actually be displayed, but by persevering and getting to grips with the OpenCV library, we eventually succeeded; the distance-estimation sketch below illustrates the idea.

In the future, we would like to put all the code on a single website so the user is not jumping around. This will improve usability and let more people learn how often they are out of position in front of a laptop. We also hope to include user input to vary the parameters for what is too close and what is too far away based on monitor size.
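Below is a minimal sketch of the Flask + OpenCV setup described above: a localhost server that streams JPEG-encoded webcam frames to the HTML page. The route names, `generate_frames()`, and `index.html` are illustrative assumptions, not our exact project code.

```python
import cv2
from flask import Flask, Response, render_template

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # default webcam

def generate_frames():
    """Yield JPEG-encoded webcam frames as a multipart stream."""
    while True:
        success, frame = camera.read()
        if not success:
            break
        ok, buffer = cv2.imencode('.jpg', frame)
        if not ok:
            continue
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + buffer.tobytes() + b'\r\n')

@app.route('/')
def index():
    # index.html contains an <img> tag pointing at /video_feed
    return render_template('index.html')

@app.route('/video_feed')
def video_feed():
    return Response(generate_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)
```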
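And here is a sketch of the kind of logic get_Frame() performs: measure the width of the box around the detected face and flag the user when they are sitting too close or too far. The constants (average face width, focal length, and the distance thresholds) are assumptions standing in for the "constant average size" mentioned above, not calibrated values.

```python
import cv2
import face_recognition

KNOWN_FACE_WIDTH_CM = 15.0   # average adult face width (assumption)
FOCAL_LENGTH_PX = 600.0      # rough webcam focal length in pixels (assumption)
TOO_CLOSE_CM = 45.0          # thresholds chosen for illustration
TOO_FAR_CM = 75.0

def estimate_distance_cm(face_width_px):
    """Pinhole-camera estimate: distance = real width * focal length / pixel width."""
    return (KNOWN_FACE_WIDTH_CM * FOCAL_LENGTH_PX) / face_width_px

def annotate_frame(frame):
    """Draw the face box and a too-close / too-far / OK label on the frame."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for top, right, bottom, left in face_recognition.face_locations(rgb):
        width_px = right - left
        distance = estimate_distance_cm(width_px)
        if distance < TOO_CLOSE_CM:
            label, color = "Too close", (0, 0, 255)
        elif distance > TOO_FAR_CM:
            label, color = "Too far", (0, 0, 255)
        else:
            label, color = "Good distance", (0, 255, 0)
        cv2.rectangle(frame, (left, top), (right, bottom), color, 2)
        cv2.putText(frame, f"{label} (~{distance:.0f} cm)", (left, top - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return frame
```

In a setup like the one above, each frame would be passed through annotate_frame() before being encoded and streamed, so the live feed on the page shows the warning as soon as the user drifts out of range.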