Inspiration
Three hundred million people in this world are colorblind. Surely there are plenty of tools to help all these people with colors, right? After hearing about the color challenge, we checked a variety of app stores for colorblind software, just out of curiosity. We were shocked to discover how few apps are designed to help colorblind people, given how many colorblind people there are in the world. So we set out on a mission to fix this problem and to win the challenge at the same time.
What it does
Colorblind people usually struggle to tell similar shades apart. Strongly contrasting colors pose no problem; the real trouble is with subtle light and dark shades. For example, it can be tough to tell light blue from light yellow.
Our app addresses this by detecting all sorts of light and dark colors in a photo and telling a colorblind person what color each object is, without any outside assistance. Other color-detection apps exist, but they usually struggle with very light or very dark photos. Ours handles these cases on its own and can identify the colors of all sorts of objects. The app is also easy to use, and looks great at the same time.
How we built it
The front-end was created with React Native, and it's easy on the eyes and even easier to use. The app opens on a welcome screen showing the logo, with a button that leads to the camera. On the camera page, the user can turn on the flash, switch between the front and back cameras, or take a picture. After taking a picture, they can either retake it or analyze it. Once analyzed, the app moves to the result screen, which tells the user the exact color in the picture, along with its hex value for maximum accuracy.
The back-end is visual recognition software built with Flask, a Python web framework. It uses Google Cloud's Vision API to detect the image's properties, specifically the RGB values of the image's pixels. It compares the number of pixels of a given color to the total pixel count and returns that color's percentage. By writing additional helper functions, we were able to convert the RGB values into a hex color code for users who want to find the exact shade. These helpers also convert the hex code into a plain color word, in case users aren't familiar with how hex codes work. When a detected color has not been assigned a name, we return the named color closest to the detected shade.
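As a rough sketch of the helper functions described above (the function names and the small color palette here are hypothetical, not taken from our actual code base), the RGB-to-hex conversion and the closest-named-color lookup could look like this:

```python
# Hypothetical sketch of the back-end color helpers: convert an RGB triple
# to a #RRGGBB hex code, then map a shade to the nearest named color by
# Euclidean distance in RGB space.

# A small illustrative palette; the real app would use a much larger
# color-name table.
NAMED_COLORS = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "light blue": (173, 216, 230),
    "light yellow": (255, 255, 224),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def rgb_to_hex(r, g, b):
    """Convert 0-255 RGB components into a #RRGGBB hex code."""
    return "#{:02x}{:02x}{:02x}".format(r, g, b)

def closest_color_name(r, g, b):
    """Return the palette name whose RGB value is nearest to (r, g, b)."""
    return min(
        NAMED_COLORS,
        key=lambda name: sum(
            (c - p) ** 2 for c, p in zip((r, g, b), NAMED_COLORS[name])
        ),
    )

print(rgb_to_hex(173, 216, 230))          # -> #add8e6
print(closest_color_name(180, 215, 228))  # -> light blue
```

A shade with no exact entry in the table (like the (180, 215, 228) above) still resolves to the nearest named color, which is the fallback behavior described in the paragraph.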
Challenges we ran into
Since the start of the hackathon, we encountered numerous obstacles: installation issues, long loading times, merge conflicts, and a variety of syntax and programming errors. We had no prior knowledge of the technologies we used. The structure of React Native posed a significant challenge in building a functional app, but we ultimately managed to overcome it. Integrating the Google Cloud API with React Native was also difficult, and connecting the two surfaced a whole set of other issues. The hackathon became even more challenging when two of our team members had to leave to take their midterms, pulling their time and attention away from the project.
Accomplishments that we're proud of
We take great pride in our ability to learn quickly. When we started Hack the Hill, we had little to no familiarity with app development, scripting, or visual recognition software. Within the given timeframe, however, we learned all the necessary skills and gained a thorough understanding of developing both the front-end and back-end of an app, as well as integrating the two. Building everything from scratch in just two days fills us with immense pride. Needless to say, we owe our success to the valuable information and workshops provided by Hack the Hill, and we express our heartfelt gratitude to them.
What we learned
None of us had any experience with React Native or mobile app development. Understanding how React Native connects to the Google Cloud Vision API was key to making our app work. With the help of YouTube videos and online documentation, we managed to piece it all together to form Sechroma. Thankfully, most of us already had a decent understanding of JavaScript, so picking up React Native was not as difficult thanks to the similar syntax. We also experienced the tight scheduling and intense pacing of a hackathon, which taught us how to manage our time and work pace to finish before the deadline.
What's next for Sechroma
Sechroma has plenty of potential for future development. We aim to implement more diverse color detection, along with some activities for the user to have fun with. If the app can detect more color types, it will help more colorblind users recognize the colorful world around them. We'd also like to add an activity that helps people remember what each color looks like in their own visual world: a short, timed memory game matching different shades of color with each other would be a great way to spend a quick break.
Built With
- flask
- google-cloud
- google-cloud-vision-api
- node.js
- react-native

