Inspiration

Whether it’s the jungles of the Amazon or the crevices of Craigslist, we are all experts in the e-commerce ecosystem. With Chrome extensions like Honey and more information just a tab away, evaluating what to buy, and at what price, is simple online.

When it comes to physical retail shopping, however, how can you be sure you’re getting the best deal? In a physical store, all you have is what’s in front of you, and no one wants to open a multitude of mobile tabs just to compare prices.

With Choos, all you need to do is snap a photo of the product in front of you, and you are automatically presented with competitive prices and style alternatives. Choos ensures that you always walk away knowing you got the best deal.

What It Does

For the user, it operates in three simple steps:

  1. Capture: Take a picture of the desired item using the mobile application.
  2. Browse: Browse offerings and alternatives from other sellers.
  3. Benefit: Make a purchase and leave satisfied.

Behind the scenes, Choos:

  1. Takes the image and sends it to Google’s Cloud Vision API.
  2. Google’s Cloud Vision API analyzes the image and returns a list of labels (keywords) describing it, each with a confidence score.
  3. Those keywords are filtered to keep only the most confident, relevant terms.
  4. The filtered keyword list is passed to the Serp API to retrieve Google Shopping Results.
  5. The information from the Google Shopping Results is further parsed to return the best offers to the user (a rough sketch of this pipeline follows).
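Concretely, the server-side pipeline looks roughly like the sketch below. This is a minimal illustration rather than our exact code: the confidence threshold, the `shopping_results` and `extracted_price` field names, the `SERPAPI_KEY` environment variable, and the use of Node 18’s built-in fetch against SerpAPI’s HTTP endpoint are all assumptions made for the example.

```js
// Rough sketch of the Choos pipeline: image -> Cloud Vision labels -> SerpAPI Google Shopping.
// Assumes Node 18+ (global fetch), the @google-cloud/vision package, and a SERPAPI_KEY env var.
const vision = require('@google-cloud/vision');

const visionClient = new vision.ImageAnnotatorClient();

// Steps 1-3: label the image and keep only high-confidence keywords.
async function extractKeywords(imageBuffer, minScore = 0.8) {
  const [result] = await visionClient.labelDetection({
    image: { content: imageBuffer.toString('base64') },
  });
  return (result.labelAnnotations || [])
    .filter((label) => label.score >= minScore) // drop low-confidence guesses
    .map((label) => label.description);
}

// Step 4: query SerpAPI's Google Shopping engine with the filtered keywords.
async function fetchShoppingResults(keywords) {
  const params = new URLSearchParams({
    engine: 'google_shopping',
    q: keywords.join(' '),
    api_key: process.env.SERPAPI_KEY,
  });
  const response = await fetch(`https://serpapi.com/search.json?${params}`);
  const data = await response.json();
  return data.shopping_results || [];
}

// Step 5: keep only the fields the app needs and surface the cheapest offers first.
function pickBestResults(results, limit = 10) {
  return results
    .map(({ title, price, extracted_price, link, thumbnail, source }) => ({
      title, price, extracted_price, link, thumbnail, source,
    }))
    .sort((a, b) => (a.extracted_price ?? Infinity) - (b.extracted_price ?? Infinity))
    .slice(0, limit);
}

module.exports = { extractKeywords, fetchShoppingResults, pickBestResults };
```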

How We Built It

We built Choos with:
• Node.js/Express as the backbone
• Google’s Cloud Vision API for image recognition and analysis
• Serp API for Google Shopping Results
• Sketch and the Adobe Creative Suite for user interface/visual design
• React Native for front-end styling and application design
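Wiring those pieces together, the Express backbone exposes roughly one endpoint: the React Native client uploads a photo, and the server runs it through the pipeline sketched above and returns JSON. The route path, the `photo` form field, the use of multer for the upload, and the port are illustrative assumptions, not our exact production code.

```js
// Minimal Express wiring (illustrative): the React Native app uploads a photo,
// the server responds with competitive offers as JSON. Assumes express and multer,
// and reuses the hypothetical pipeline helpers from the sketch above.
const express = require('express');
const multer = require('multer');
const { extractKeywords, fetchShoppingResults, pickBestResults } = require('./pipeline');

const app = express();
const upload = multer({ storage: multer.memoryStorage() }); // keep the photo in memory

// Hypothetical endpoint; the "photo" field name must match the client's FormData key.
app.post('/api/search', upload.single('photo'), async (req, res) => {
  try {
    const keywords = await extractKeywords(req.file.buffer);
    const results = await fetchShoppingResults(keywords);
    res.json({ keywords, results: pickBestResults(results) });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Could not process image' });
  }
});

app.listen(3000, () => console.log('Choos server listening on :3000'));
```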

Challenges We Ran Into

Technical

• Initially, we wanted our app to populate with product results from Amazon, but accessing the relevant API requires an Amazon MWS account, and the verification process would have taken longer than the hackathon itself.

• It was difficult to get images to display correctly on the front end because the sources we pulled them from used inconsistent file types.

• Passing results from one API to the next meant chaining a lot of asynchronous calls, so it was difficult to keep track of what was running and when (see the sketch after this list).

• When troubleshooting, always make sure the server is running first.
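One pattern that helps with this kind of API-to-API chaining is async/await with per-stage logging, so the order and duration of each call is visible in the console. The sketch below shows the general idea with hypothetical stage names and the helpers from the earlier pipeline sketch; it is not our exact code.

```js
// General pattern for keeping a chain of async API calls traceable:
// await each stage in order and log what ran and how long it took.
// extractKeywords/fetchShoppingResults/pickBestResults are the hypothetical
// helpers from the pipeline sketch above.
async function handleCapture(imageBuffer) {
  const timed = async (label, fn) => {
    const start = Date.now();
    const value = await fn();
    console.log(`[choos] ${label} finished in ${Date.now() - start}ms`);
    return value;
  };

  const keywords = await timed('vision labels', () => extractKeywords(imageBuffer));
  const raw = await timed('shopping search', () => fetchShoppingResults(keywords));
  return pickBestResults(raw);
}
```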

Non-Technical

• Kyle fell out of his chair one time.

Accomplishments that We're Proud Of & What We Learned

We are most proud of how much we learned in such a short period of time. The veteran member of our team introduced the rest of us to the fundamentals of React Native and Node + Express, which make up the bulk of our product.

Additionally, we learned how powerful Google’s Cloud Vision API is. We were able to process our photos with ease, and the API consistently returned extensive, accurate data. It was intuitive to use and gave us access to robust machine learning tools.

What's Next for Choos

We’re excited about the future for Choos. Here are some of the features we are most looking forward to building:
• Amazon-result support (see above)
• Optimization of capture-to-result time
• Augmented-reality result overlays in real time

That being said, we are also open to any suggestions — so please reach out!
