Over 285 million people worldwide are visually impaired, and 246 million of them have low vision. We know from experience that glasses are a pain, and constantly zooming in on small text with built-in computer features is painful as well. Beyond the visual strain, it's also a hassle to hold the Option and Control keys to activate this feature, and it's difficult for elderly users to hold these keys for an extended time while also moving the mouse. Our product removes the need for these key commands and activates the zoom feature when the user simply hovers over a "zoom" icon.
What it does
GLASSis takes advantage of a built-in computer feature that magnifies the area of the screen around the mouse pointer. Our code determines where your eyes are looking (using the device's camera), then moves the magnifying glass accordingly. On startup, a quick calibration ensures the best possible performance. In addition, GLASSis activates itself when the user hovers over an icon labeled "zoom."
How we built it
There are three main parts to our program. First, it takes a picture with the computer's camera, finds all possible eyes in the image using a classifier algorithm, then refines this list to discard all garbage (invalid) eyes: doorknobs, vents, clothing, or anything else in the background that the classifier mistakes for an eye. We choose one eye and save it as a .png image. Next, the program runs circle detection on this .png image to find the center of the pupil, giving us an important data point: the center of your pupil relative to the .png image. Finally, we track the pupil's displacement from that center and translate it into mouse movements. This is the third and final step in our process.
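The third step, turning pupil displacement into mouse movement, can be sketched in plain Java. This is an illustrative assumption of how that mapping might work, not the actual GLASSis code; the class and method names and the GAIN constant are our own:

```java
// Sketch: convert pupil displacement (relative to the calibrated center)
// into a new on-screen mouse position. Names and the GAIN constant are
// illustrative assumptions, not the actual GLASSis implementation.
public class GazeToMouse {
    // Pixels of mouse travel per pixel of pupil displacement;
    // a value the startup calibration would tune.
    static final double GAIN = 40.0;

    // calibX/calibY: pupil center found during calibration (in the eye .png).
    // pupilX/pupilY: pupil center in the current frame.
    // Returns {newMouseX, newMouseY}, clamped to the screen bounds.
    static int[] mouseTarget(double calibX, double calibY,
                             double pupilX, double pupilY,
                             int mouseX, int mouseY,
                             int screenW, int screenH) {
        double dx = (pupilX - calibX) * GAIN;
        double dy = (pupilY - calibY) * GAIN;
        int x = Math.max(0, Math.min(screenW - 1, (int) Math.round(mouseX + dx)));
        int y = Math.max(0, Math.min(screenH - 1, (int) Math.round(mouseY + dy)));
        return new int[] { x, y };
    }
}
```

Clamping to the screen bounds keeps a large pupil jump from throwing the magnifying glass off the visible desktop.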
Challenges we ran into
There were a few major hurdles we had to overcome to get GLASSis working. First, we had to get an eye-detection classifier running with the OpenCV library in Java on macOS, and installing OpenCV took a lot of our time. We also had quite a few problems with the mouse-mover command. Java has a built-in Robot class that lets a program control the mouse, but macOS requires the computer's owner to explicitly allow Eclipse to control the computer. It took a while for us to figure out that we had to enable this feature in System Preferences.
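The mouse mover boils down to a call to Robot.mouseMove. A minimal sketch follows; the clamping helper and the headless guard are our own additions so the coordinate logic stays testable without a display attached:

```java
import java.awt.GraphicsEnvironment;
import java.awt.Robot;

// Sketch of the mouse-mover step using Java's built-in Robot class.
// On macOS, Robot.mouseMove only takes effect after the IDE (Eclipse,
// in our case) is granted control under System Preferences >
// Security & Privacy > Privacy > Accessibility.
public class MouseMover {
    static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    // Moves the pointer, keeping the target inside the screen bounds,
    // and returns the clamped coordinates that were used.
    public static int[] moveTo(int x, int y, int screenW, int screenH) {
        int cx = clamp(x, 0, screenW - 1);
        int cy = clamp(y, 0, screenH - 1);
        if (!GraphicsEnvironment.isHeadless()) {
            try {
                new Robot().mouseMove(cx, cy);
            } catch (Exception e) {
                // No display or no Accessibility permission yet;
                // the computed coordinates are still returned.
            }
        }
        return new int[] { cx, cy };
    }
}
```

Without the Accessibility permission enabled, this call silently does nothing, which is exactly the symptom that cost us so much debugging time.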
Accomplishments that we're proud of
There were many milestones we were proud of reaching. One of our first major steps was getting eye recognition to work consistently. However, the program would still pick up invalid (garbage) eyes that were actually random objects in the background. To fix this, we created an algorithm that converts each candidate's x/y position into a single value and uses it to find the two eyes nearest to each other. These two eyes were almost always the user's actual eyes.
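The garbage-eye filter can be sketched as a nearest-pair search over candidate centers. Note the difference from the writeup above: the real GLASSis code collapses each x/y position into a single value, while this illustrative version (names are ours) compares squared distances between pairs directly:

```java
// Sketch: pick the two candidate "eyes" whose centers are closest
// together, on the assumption that the user's real eyes sit nearer to
// each other than to background false positives (doorknobs, vents, ...).
// Illustrative only; the original converts x/y into one value, whereas
// this version compares squared pairwise distances directly.
public class EyeFilter {
    // candidates[i] = {x, y}, the center of the i-th detected "eye".
    // Returns the indices {i, j} of the closest pair, or null if
    // fewer than two candidates were detected.
    static int[] closestPair(int[][] candidates) {
        if (candidates.length < 2) return null;
        int bestI = 0, bestJ = 1;
        long bestD = Long.MAX_VALUE;
        for (int i = 0; i < candidates.length; i++) {
            for (int j = i + 1; j < candidates.length; j++) {
                long dx = candidates[i][0] - candidates[j][0];
                long dy = candidates[i][1] - candidates[j][1];
                long d = dx * dx + dy * dy;
                if (d < bestD) {
                    bestD = d;
                    bestI = i;
                    bestJ = j;
                }
            }
        }
        return new int[] { bestI, bestJ };
    }
}
```

The O(n²) pair scan is fine here because a classifier rarely returns more than a handful of eye candidates per frame.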
What we learned
We learned how to set up OpenCV on a computer, explored the hundreds of methods OpenCV provides, and learned how to use matrices (OpenCV's Mat objects) to find the "eye." With these skills, installing other libraries into Eclipse will be easier for us. The last thing we learned was patience: it took hours to set up OpenCV, but we didn't give up.
What's next for GLASSis
In the near future, we plan to extend GLASSis so you can perform everyday computer functions with your eyes. A left-eye wink would correspond to a left click, and a right-eye wink to a right click. The mouse would follow your eyes at all times. Saying "zoom" at any time would switch the program into "reading mode," in which the magnifying glass follows your eyes wherever they look on the screen.