Many members of this team have friends or family who are colorblind. We believe this is a significant issue that software can readily address, making visual interfaces more accessible to colorblind people.

What it does

This application uses a live video feed and an image filter to actively adjust the colors of the user's surroundings. We made the application agnostic to colorblindness type, addressing all three main forms of color blindness. In addition, we used Microsoft Azure Machine Learning Studio to build a machine learning model that classifies the colors scanned by the user.
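The core of such a filter is usually a daltonization step: simulate how a colorblind viewer perceives each pixel, then redistribute the lost contrast into channels they can still distinguish. The sketch below illustrates that idea for protanopia; the matrix coefficients and the `daltonize` helper are illustrative assumptions, not the app's actual values.

```python
import numpy as np

# Approximate protanopia simulation in linear RGB (rows produce R', G', B').
# These coefficients are rough illustrative values, not exact ones.
SIM_PROTAN = np.array([
    [0.112, 0.888, 0.000],
    [0.112, 0.888, 0.000],
    [0.004, -0.004, 1.000],
])

# Error-redistribution matrix: shift the lost red/green information
# toward the green and blue channels, which a protanope can still see.
SHIFT = np.array([
    [0.0, 0.0, 0.0],
    [0.7, 1.0, 0.0],
    [0.7, 0.0, 1.0],
])

def daltonize(rgb):
    """rgb: float array of shape (..., 3), values in [0, 1]."""
    simulated = rgb @ SIM_PROTAN.T   # what a protanope perceives
    error = rgb - simulated          # the information they lose
    corrected = rgb + error @ SHIFT.T
    return np.clip(corrected, 0.0, 1.0)

# Pure red is hard for protanopes; the filter adds green/blue contrast.
print(daltonize(np.array([1.0, 0.0, 0.0])))
```

Handling the other two main types (deuteranopia, tritanopia) only requires swapping in a different simulation matrix, which is one way an app can stay agnostic to the user's colorblindness type.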

Challenges we ran into

Implementing the real-time aspect of this app was especially difficult because the compute power of a typical smartphone CPU is quite limited. However, we eventually used OpenGL to move the per-pixel calculations to the GPU, achieving a crisper, higher-definition live video feed.
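Moving the work to the GPU typically means expressing the per-pixel color transform as a fragment shader that runs over every pixel of the camera texture in parallel. A minimal OpenGL ES sketch of that idea is below; the uniform and varying names are assumptions for illustration, not the app's actual code.

```glsl
// Illustrative fragment shader: apply a 3x3 correction matrix to each
// pixel of the live camera frame, in parallel on the GPU.
precision mediump float;

uniform sampler2D uCameraTexture;  // live camera frame
uniform mat3 uCorrection;          // daltonization matrix for the user's CVD type
varying vec2 vTexCoord;

void main() {
    vec3 rgb = texture2D(uCameraTexture, vTexCoord).rgb;
    gl_FragColor = vec4(clamp(uCorrection * rgb, 0.0, 1.0), 1.0);
}
```

Because the matrix is a uniform, switching between colorblindness types is just a matter of uploading a different 3x3 matrix, with no shader recompilation needed.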

What we learned

Each of us learned about the difficulties of color blindness and the importance of designing accessible interfaces. But beyond that, we all took away different hardware and software techniques that will certainly aid us in future hacks!
