Inspiration
Our inspiration came from noticing how little information beauty-product packaging conveys compared to the same product's online listing. These products often cram large amounts of text onto small labels, shrinking the type until it is illegible, especially for individuals with vision impairments. As a result, the shopping experience becomes troublesome for these individuals.
What it does
Our product integrates search by scanning and by voice input, allowing users to find a desired product quickly and efficiently. We have also integrated a text-to-speech function for users viewing product details, along with the option to enlarge the font size. Once the user understands the product and is content with it, they may choose to apply it on themselves. Back on the camera screen, to help users who are colorblind, we have implemented a feature that helps users with the three most common types of color blindness: Protanopia, Tritanopia, and Deuteranopia, better recognize and differentiate the colors of a product.
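One common way to implement this kind of color-blindness assistance is a "daltonization" pass: simulate what a viewer with the deficiency sees, compute the information that is lost, and shift it into the channels they can still perceive. The sketch below uses published approximation matrices for protanopia; it illustrates the idea and is not necessarily our exact implementation.

```python
import numpy as np

# Approximate full-severity protanopia simulation matrix (standard
# published values, applied in linear RGB).
PROTANOPIA_SIM = np.array([
    [0.152286, 1.052583, -0.204868],
    [0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
])

# Redistribution matrix: pushes the information lost to the simulated
# deficiency into the green and blue channels, which the viewer perceives.
ERROR_SHIFT = np.array([
    [0.0, 0.0, 0.0],
    [0.7, 1.0, 0.0],
    [0.7, 0.0, 1.0],
])

def daltonize(image: np.ndarray) -> np.ndarray:
    """Protanopia daltonization for an RGB image with values in [0, 1].

    image: array of shape (..., 3), one RGB triple per pixel.
    """
    simulated = image @ PROTANOPIA_SIM.T          # what a protanope sees
    error = image - simulated                     # information they lose
    corrected = image + error @ ERROR_SHIFT.T     # shift it into G and B
    return np.clip(corrected, 0.0, 1.0)
```

For example, a pure-red pixel (invisible as "red" to a protanope) comes out with its lost red energy redistributed into green and blue, so it remains distinguishable from neighboring shades.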
How we built it
We built our front-end app with the Expo framework, which is built around React Native and written in TypeScript. Our back end is built with Python, using Flask as its web framework. We also used Google's OCR for our product recognition technology, OpenAI's GPT-3 API for our summarizer, and AWS Elasticsearch for our database. The back end is hosted on an EC2 instance in AWS.
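The back end's core job is to accept an image from the app, run OCR on it, and search the product index with the extracted text. A minimal Flask sketch of that flow is below; the route name and the `extract_text` / `search_products` helpers are hypothetical stand-ins (the real service calls Google's OCR and AWS Elasticsearch).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def extract_text(image_bytes: bytes) -> str:
    """Stand-in for the OCR call; here it just decodes the payload."""
    return image_bytes.decode("utf-8", errors="ignore")

def search_products(query: str) -> list:
    """Stand-in for the Elasticsearch lookup against a tiny in-memory catalog."""
    catalog = [{"name": "hydrating face cream"}, {"name": "matte lipstick"}]
    words = query.lower().split()
    return [p for p in catalog if any(w in p["name"] for w in words)]

@app.route("/search", methods=["POST"])
def search():
    # Raw request body is the uploaded image; OCR it, then search.
    text = extract_text(request.get_data())
    return jsonify(results=search_products(text))
```

The phone app only ever posts an image and receives JSON back, which keeps the OCR and search dependencies entirely server-side on the EC2 instance.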
Challenges we ran into
We initially planned to apply live color correction to the camera preview, but the algorithm's processing time made this infeasible.
Accomplishments that we're proud of
We successfully developed an end-to-end pipeline for product recognition using product images and user voice input. By decomposing the process into text recognition (from either photos or user speech) and Elasticsearch retrieval, we established an efficient and effective system for identifying and retrieving pertinent product information.
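The Elasticsearch half of the pipeline boils down to turning the recognized text into a query body. A sketch of such a builder is below; the field names (`name`, `brand`, `description`) are illustrative assumptions about the index mapping, and `fuzziness: AUTO` is one way to tolerate OCR and speech-recognition noise.

```python
def build_product_query(recognized_text: str, size: int = 5) -> dict:
    """Build an Elasticsearch query body from OCR or speech text.

    Uses a multi_match query with fuzzy matching so that minor
    recognition errors (e.g. "lipstik") still hit the right product.
    The field list and boosts are hypothetical.
    """
    return {
        "size": size,
        "query": {
            "multi_match": {
                "query": recognized_text,
                "fields": ["name^2", "brand", "description"],
                "fuzziness": "AUTO",
            }
        },
    }
```

The same builder serves both input paths, since by this stage a photo and a voice command have both been reduced to plain text.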
What we learned
- We discovered that thorough research is critical to understanding users' needs. This guided us toward a solution focused on customer satisfaction.
- We honed our skills in building comprehensive machine learning pipelines, which played a significant role in the success of our product recognition solution.
- We developed the ability to connect a variety of APIs and services on the back end to ensure smooth interactions between diverse components.
- We gained expertise working with camera and voice-recording features on the front end.
- We gained more insight into how to design suitable features and a user interface for users with visual impairments.
What's next for ASKCET Product Recognition
- We would like to add beauty preferences to the application, gathered through a quick, concise voice-activated survey. Products would then be color-coded on screen according to the user's beauty preferences.
- We would like to integrate tutorials from social media that relate to the product the user is viewing, turning them into interactive tutorials in which the user's face is scanned and an automated voice guides them through each step.
- We would like to create a community for product recommendations centered on product accessibility. This feature could also gather feedback for the R&D of future products to ensure the creation of accessible beauty products, and the accumulated information would feed into accessibility ratings.
Built With
- adobe-creative-suite
- amazon-ec2
- amazon-web-services
- elasticsearch
- expo.io
- figma
- flask
- ocr
- openai
- python
- react-native
- speech-to-text
- typescript