Inspiration

We believe that engineers have a responsibility to go beyond the minimum, meeting needs at every level of Maslow's hierarchy. Consequently, we sought to build a project that goes beyond basic assistive technology. While more and more ventures focus on small daily tasks, they often neglect a basic human need: personal expression. We also believed that, given our team's makeup, the project would challenge all of our skill sets and give each of us room to grow both as individual engineers and as a team.

What it does

Using pupil-tracking technology, we set out to let individuals draw on a canvas using only their eyes. Closing one eye or the other adjusts the drawing instrument's pressure, while shifting the pupils away from center moves the instrument across the canvas plane.
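To make the control scheme concrete, here is a minimal sketch of how eye state could map to instrument commands; the names, data layout, and deadzone value are illustrative assumptions, not our exact implementation.

```python
# Illustrative sketch (hypothetical names): mapping filtered eye state
# to instrument commands as described above.
from dataclasses import dataclass

@dataclass
class EyeState:
    left_open: bool
    right_open: bool
    gaze_dx: float  # horizontal pupil offset from center, normalized to [-1, 1]
    gaze_dy: float  # vertical pupil offset from center, normalized to [-1, 1]

def to_command(state: EyeState, deadzone: float = 0.1):
    """Translate an eye state into (pressure_delta, move_dx, move_dy)."""
    # One closed eye raises or lowers pen pressure; both open leaves it unchanged.
    if state.left_open and not state.right_open:
        pressure_delta = +1
    elif state.right_open and not state.left_open:
        pressure_delta = -1
    else:
        pressure_delta = 0

    # Pupil displacement from center moves the instrument in the canvas plane,
    # with a small deadzone so the pen holds still while the user looks straight ahead.
    move_dx = state.gaze_dx if abs(state.gaze_dx) > deadzone else 0.0
    move_dy = state.gaze_dy if abs(state.gaze_dy) > deadzone else 0.0
    return pressure_delta, move_dx, move_dy
```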

This is the technical explanation, but above all else, this is a tool for people whose disabilities prevent them from expressing their creativity. While some existing tools let people create digital art, they don't offer the same satisfying physical product. Not only can our users deliver a tangible piece of art to their friends and loved ones, but they can also vary the pressure of pen or brush strokes to add depth to their art.

Besides artistic expression, this is also a tool that encourages mental stimulation. Art has proven to be effective therapy, and the unorthodox method of using only gaze to create art requires additional focus that may prove beneficial.

How we built it

Creative Vision is built from three main technical subsystems: the intelligent software system that processes webcam frames to compute the pupil state, the hardware abstraction layer that translates that information into electrical signals for the motors, and the mechanical system that expresses the user's imagination.

The first stage uses OpenCV APIs to grab webcam frames. Two Haar cascade classifiers extract the face and the eyes of the user, respectively. Additional processing of the eye regions then extracts the relative location of the pupils and whether each eye is open or closed.
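As a rough sketch of this detection stage, assuming OpenCV's stock Haar cascade files and the default webcam (the pupil-extraction step itself is omitted here):

```python
import cv2

# Load OpenCV's bundled Haar cascades for faces and eyes.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]  # search for eyes only inside the detected face
        eyes = eye_cascade.detectMultiScale(roi)
        # ...pupil localization and open/closed classification would happen here...
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```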

The next stage of software filters each resulting computation packet to remove noise, applying a thresholding filter to the eye state and a low-pass filter to the eye location, with alpha tuned to match the frame rate. The filtered data is sent via a Flask server to an embedded processor, a Raspberry Pi. There, to determine the signals to apply to the motors, a linear system is solved with least-squares regression, and the solution is passed to another abstraction layer that manages the PWM signals to the servos. If the desired action would take the arm out of its bounds, the linear system is inconsistent, and therefore no solution exists.
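Below is a minimal sketch of the filtering and servo-solve steps. The filter constant, the matrix A relating servo angles to pen motion, and the residual-based bounds check are illustrative assumptions, not our exact values.

```python
import numpy as np

ALPHA = 0.3  # low-pass constant, tuned to the webcam frame rate

def low_pass(prev, new, alpha=ALPHA):
    """Exponential smoothing of the (x, y) gaze estimate."""
    return alpha * np.asarray(new) + (1 - alpha) * np.asarray(prev)

def solve_servo_angles(A, target, residual_tol=1e-6):
    """Solve A @ angles = target for servo angles via least squares.

    If the residual is large, the target lies outside the arm's reachable
    workspace (the system is effectively inconsistent), so we skip the move.
    """
    angles, residuals, rank, _ = np.linalg.lstsq(A, target, rcond=None)
    if residuals.size and residuals[0] > residual_tol:
        return None  # no valid solution: ignore this command
    return angles
```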

Finally, the mechanical stage carries out the motion: a 3D-printed arm driven by servos moves the drawing instrument across the canvas and presses it down to vary stroke pressure.

Challenges we ran into

Setting up facial recognition took lots of trial and error. We initially intended to use a library called PyGaze, but the installation failed time and time again, so we chose to pivot. Eventually we settled on implementing our own classifiers with pre-built neural networks, which came with its own set of challenges, such as tuning thresholds for fine motor control.

Sending the eye-tracking data to the servos was also a challenge: for a long time, the packets we sent through the Raspberry Pi were transmitted inconsistently. After much debugging, we realized the errors came from poor threshold values and from our scheme for flagging which eye was closed at any given time.

Designing the arm also took considerable time and many iterations. We scrapped quite a few possible designs in favor of simpler ones. Even after settling on a design, we ran into issues with the 3D printers that interrupted our workflow and put us in a significant time crunch around 3 a.m. (the optimal time to be in a rush).

Accomplishments that we're proud of

We're most proud of the solutions to our biggest challenges. Facial recognition and pupil tracking took hours to set up and even more time to optimize, but they were among the biggest parts of this project and absolutely vital. Getting the system working, with all the data successfully transferred between components, was a huge success. Plus, the fact that our build works over a wireless network rather than being hardwired makes it a far more elegant solution.

Using all the tools at our disposal to create the parts for our mechanical arm was also a huge accomplishment. While this was less technically challenging, it required a great deal of foresight to realize that we would be in a big time crunch, and once time got tight we were able to adjust and still build the necessary pieces in time.

What we learned

We learned about facial recognition and the various ways to achieve accurate recognition of faces and facial features. We also learned to create clean hardware and software interfaces, as our initial implementations led to bugs that we solved by cleaning up our code.

For some of our team, this was their first time working on a project involving both hardware and software, so we learned how each of these components affects the project timeline.

Our exploration of mechanical arm designs taught us how 3D-printed parts interact with servos and that you can make more intricate parts than expected. For example, we created gears and were shocked to find that they actually meshed well together.

What's next for Creative Vision

In the near future we intend to expand upon our proof of concept by exploring other opportunities for visual input and cleaning up our physical design. Given more time to design our hardware and software architecture, we could create a more finely controlled tool which has the potential for implementation in rehabilitation and therapy environments.
