Check out our business plan, slides, and code demo!
Inspiration
Day by day, coronavirus cases are surging across the globe, forcing more and more people to stay at home and stare at screens for hours on end. Whether it is students relying on their devices to complete homework assignments or adults in the tech industry communicating with clientele, our current situation forces people to spend the majority of their day working on a laptop, computer, tablet, or smartphone.
With the sudden increase in screen time, our ocular health is at risk. Blinking is vital for clearing the ocular surface of debris, but when using a computer, the blinking rate drops by a whopping 60%. The only way we can truly protect our eyes from our harmful devices is by monitoring our eye health while we are looking at a screen.
What it does
enVision is a browser extension that uses real-time, personalized data to evaluate a user’s eye health and provide customized suggestions on when to take breaks, rest their eyes, and complete brief eye exercises. The extension connects to a digital retinal scanner that can be placed over a device’s front camera. By continuously scanning the user’s blinks per minute and analyzing changes in pupil movement, the app draws conclusions about the user’s eye health and issues alerts accordingly. If the user does not meet the minimum threshold values for pupil movement and blinks per minute, enVision identifies user-specific patterns and creates a custom recommendation for taking screen breaks and performing eye exercises. This allows the user to take the precautions necessary to preserve their vision even with the drastic increase in their daily screen time.
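As a rough illustration of the alerting rule described above, the sketch below checks a measurement window against minimum thresholds. The threshold values, field names, and the EyeSample structure are illustrative assumptions for this write-up, not enVision’s actual parameters.

```python
from dataclasses import dataclass

# Assumed thresholds for illustration only.
BLINKS_PER_MINUTE_MIN = 12   # hypothetical minimum healthy blink rate
PUPIL_MOVEMENT_MIN = 0.15    # hypothetical minimum normalized pupil movement


@dataclass
class EyeSample:
    blinks_per_minute: float
    pupil_movement: float  # normalized change in pupil position over the window


def needs_break(sample: EyeSample) -> bool:
    """Return True when the user falls below either minimum threshold."""
    return (sample.blinks_per_minute < BLINKS_PER_MINUTE_MIN
            or sample.pupil_movement < PUPIL_MOVEMENT_MIN)


def recommendation(sample: EyeSample) -> str:
    """Turn a measurement window into a user-facing suggestion."""
    if needs_break(sample):
        return "Take a short screen break and do a quick eye exercise."
    return "Eye activity looks healthy; keep going."
```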
How we built it
For the web and mobile applications, our team developed a prototype in Framer that allows users to create and access their accounts. When users create an account, they are prompted to provide some basic ocular history. From the web application, users can install the browser extension, which sends notifications whenever the user is not blinking or moving their eyes enough.
The retinal monitor was coded in Python: it uses the retinal camera scanner to locate the eyes and measure the blinking rate. We used NumPy and OpenCV (cv2) to enable the retinal camera scanner and detect the eye location based on shape. The monitor was then added to the web application, which was built in Python with the Flask API and embedded HTML.
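Below is a minimal sketch of how a cascade-based blink monitor could be wired into a Flask page with embedded HTML, in the spirit of the pipeline described above. The Haar cascade files, the 15-second sampling window, the route name, and the threshold are assumptions for illustration, not the team’s actual code.

```python
import time
import cv2
from flask import Flask, render_template_string

app = Flask(__name__)

# Haar cascades that ship with OpenCV (assumption: a cascade-based approach
# similar to the credited blink-detection source code).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")


def measure_blink_rate(duration_s=60, camera_index=0):
    """Count blinks for `duration_s` seconds and return blinks per minute.

    A blink is registered when eyes that were visible in the previous frame
    are no longer detected inside the face region.
    """
    cap = cv2.VideoCapture(camera_index)
    blinks = 0
    eyes_visible = True
    start = time.time()

    while time.time() - start < duration_s:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        detected = False
        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]
            if len(eye_cascade.detectMultiScale(roi, 1.1, 3)) > 0:
                detected = True
                break
        # Transition from "eyes visible" to "eyes hidden" counts as one blink.
        if eyes_visible and not detected:
            blinks += 1
        eyes_visible = detected

    cap.release()
    return blinks * 60.0 / max(duration_s, 1)


# Embedded HTML served by Flask that reports the measured rate.
PAGE = """
<h1>enVision blink monitor</h1>
<p>Blinks per minute: {{ rate }}</p>
{% if rate < threshold %}<p><strong>Time for a screen break!</strong></p>{% endif %}
"""


@app.route("/blink-rate")
def blink_rate():
    rate = measure_blink_rate(duration_s=15)  # short sample for a demo
    return render_template_string(PAGE, rate=round(rate, 1), threshold=12)


if __name__ == "__main__":
    app.run(debug=True)
```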
We designed the physical retinal camera scanner in Autodesk Inventor. The final CAD model is a 3D replica of the retinal scanner placed over a laptop webcam: the model as a whole represents a laptop screen, while the smaller rectangle at the top middle represents the retinal camera scanner placed directly over the webcam. This demonstrates how the retinal scanner would physically attach to the user’s device.
Challenges we ran into
We wanted to implement our retinal camera scanner in our submission; however, due to the limited time and our limited cv2 and NumPy Python knowledge, we were only able to adapt some source code (credit: Shameem Hameed) into a simple retinal monitor that tracks blinking rate over a custom amount of time but not pupil movement. With a more advanced camera, we could further develop our application to track pupil movement for all eye colors (dark eyes are currently hard to track without a proper camera), sell the camera at a profit, and run many more tests to ensure it works smoothly.
Additionally, we would like to use machine learning to generate more customized reports, but due to the lack of user testing and the short amount of time given, we were unable to incorporate it into our app. We are also looking into neural networks so that our recommendations will rely not on fixed threshold data but on real-time data from our users.
Accomplishments that we're proud of
As for the accomplishments we are proud of, we were able to develop a prototype of the retinal scanner, along with a CAD model that showcases the scanner and demonstrates how it would attach to a computer. Although we were on a time crunch and had only basic knowledge of coding in Python, we're incredibly proud that we not only coded a proof of concept but also created several detailed mock-ups for our browser extension and mobile app.
Another accomplishment was being able to contact and receive support from the judges as well as a few mentors, who gave us excellent suggestions for making our device and browser extension more effective and helped us ensure that enVision would actually function.
One of our other major accomplishments was gathering survey responses from over 97 active screen users within a couple of hours; this survey was vital for performing market validation and confirming that our problem is real and widespread. We are also proud of the UI design we made in Framer, a tool we had never used before to lay out websites and applications, and we created the entire design within the hackathon’s time constraints.
What we learned
From enVision, we learned how to code the blinking-rate feature in Python and how the web application monitors the user’s blinking rate. We also learned how to create effective yet simple UI designs; the designs for the web and mobile applications helped us ideate and understand our concept. Finally, we learned how to develop an intricate CAD model in Autodesk Inventor that showcases what the retinal scanner would look like.
What's next for enVision
The next steps for enVision include developing a companion app, which will make enVision more convenient and accessible for users. Along with this, we want to create a mobile-friendly retinal scanner and incorporate a more extensive eye exam that will produce more accurate and reliable threshold values.
We also want to further develop our browser extension to track user posture and face touching.