Inspiration
As university students, we are finally experiencing moving out and living on our own. One thing we've realized is that shopping is an inescapable part of adult life: it can be tedious to decide what to buy in the moment, and keeping track of those items and then hunting for them throughout the store is no easy feat. Hence, we decided to design an app that makes shopping cheap, convenient, and smart.
What it does
Our app has two main features: an intelligent recommendation system that turns voice or text queries into product suggestions, and a product image recognition system. The recommendation system lets users quickly find relevant items by saying what they want to buy (e.g. "What should I buy for my daughter's birthday party?" or "What is on sale this week?"). The app then lists smart suggestions based on the query, which can be added to a shopping list. Users can also take a picture of a product to look up its price, price match, and check whether it is on sale. The shopping list shows key information for each product, such as its aisle number, so shoppers can easily find everything in store.
How we built it
The app was built using React Native for cross-platform mobile support, and the backend was built with Flask and deployed on Heroku.
The product database was generated by using Selenium to scrape every item in the Loblaws website catalog, categorized by department, with additional fields that store employees can fill in later (such as the aisle number specific to their store).
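The scraper itself depends on Selenium and the live Loblaws site, but the shape of the records it produces, and the employee aisle-annotation step, can be sketched in plain Python. The field names below are our illustration, not the actual schema:

```python
# Sketch of a scraped catalog entry plus the employee-supplied aisle
# annotation step. Field names are illustrative; the real database is
# built by a Selenium scraper over the Loblaws catalog.

def make_product(name, department, price, on_sale=False):
    """A scraped catalog entry; aisle is unknown until an employee adds it."""
    return {
        "name": name,
        "department": department,
        "price": price,
        "on_sale": on_sale,
        "aisle": None,  # store-specific, filled in by employees
    }

def annotate_aisles(products, aisle_map):
    """Merge store-specific aisle numbers (added by employees) into records."""
    for p in products:
        p["aisle"] = aisle_map.get(p["name"], p["aisle"])
    return products

catalog = [
    make_product("Birthday Candles", "Party Supplies", 2.99),
    make_product("Chocolate Cake", "Bakery", 11.49, on_sale=True),
]
catalog = annotate_aisles(catalog, {"Chocolate Cake": "A7"})
```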
The smart speech-to-text recommendation system was built using AssemblyAI and the Datamuse API. AssemblyAI first converts the speech to text and extracts the relevant words via its keywords feature. Those words are then fed into the Datamuse API to fetch associated words, which are ranked and used to search the product database. This lets users speak either directly or casually, with our system detecting the context of each query and recommending the best-matching products.
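The expansion-and-ranking step can be sketched as pure Python. Datamuse's "means like" endpoint (`https://api.datamuse.com/words?ml=<word>`) returns `word`/`score` pairs; here we inline sample responses rather than hit the network, and the scoring weights are our illustration, not the exact ones used in the app:

```python
# Rank candidate search terms: keywords extracted by AssemblyAI are
# expanded with Datamuse associations, then scored. Sample responses
# are inlined; a real call would GET
# https://api.datamuse.com/words?ml=<keyword>. Weights are illustrative.

def rank_terms(keywords, datamuse_results, top_n=5):
    scores = {}
    for kw in keywords:
        scores[kw] = scores.get(kw, 0) + 1000  # direct keywords rank highest
        for entry in datamuse_results.get(kw, []):
            word = entry["word"]
            scores[word] = scores.get(word, 0) + entry["score"]
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Sample Datamuse-style responses for the query "birthday party":
sample = {
    "birthday": [{"word": "cake", "score": 700}, {"word": "balloons", "score": 300}],
    "party": [{"word": "balloons", "score": 500}, {"word": "snacks", "score": 200}],
}
terms = rank_terms(["birthday", "party"], sample)
```

Words associated with several keywords ("balloons" here) accumulate score across them, so they outrank words tied to a single keyword.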
The image recognition was done using a mix of the Google Vision API and a custom-trained Vision API Product Search model. The training set for this model was generated automatically with Selenium by matching Loblaws listings against Google Images and uploading the results into product-specific buckets on Google Cloud Storage. By comparing the outputs of the two models, we can narrow the image down to either a specific in-store product or a more general annotation of its uses. This result is passed to the recommendation system's logic to give context to the search, and finally to our custom product-mapping system, developed through automated analysis of product descriptions.
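The decision between the two models can be sketched as a simple confidence comparison. The threshold and result shapes below are our assumptions for illustration; in the app the inputs come from `google.cloud.vision` Product Search and label-detection responses:

```python
# Decide whether an image shows a specific in-store product or something
# more general, by comparing the custom Product Search result against the
# generic Vision label annotations. Threshold and result shapes are
# illustrative stand-ins for the real Vision API responses.

PRODUCT_CONFIDENCE_THRESHOLD = 0.6  # assumed value, tuned by hand

def interpret_image(product_results, label_results):
    """product_results: [(product_name, score)]; label_results: [(label, score)]."""
    if product_results:
        best_name, best_score = max(product_results, key=lambda r: r[1])
        if best_score >= PRODUCT_CONFIDENCE_THRESHOLD:
            return {"kind": "product", "query": best_name}
    # Fall back to general labels, which seed the recommendation search
    labels = [label for label, _ in sorted(label_results, key=lambda r: -r[1])]
    return {"kind": "general", "query": labels[:3]}

hit = interpret_image([("PC Chocolate Cake", 0.82)], [("dessert", 0.9)])
miss = interpret_image([("PC Chocolate Cake", 0.31)], [("dessert", 0.9), ("food", 0.8)])
```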
Challenges we ran into
It was our first time working with React Native and training our own model. The model had very low confidence at the start and required a lot of tweaking before it became even slightly usable, so it had to be used alongside the stock Vision API. It was also our first time using Heroku, which provided easy CI/CD integration with GitHub, and we had to figure out how to supply the Vision API credentials through environment variables without committing them to the repository.
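One common pattern for keeping Google credentials out of the repo on Heroku is to store the service-account key JSON in a config var and write it to a file at startup; a minimal sketch, with variable names of our own choosing:

```python
# Load a Google Vision service-account key from an environment variable
# (e.g. a Heroku config var) instead of a committed JSON file.
# The GOOGLE_CREDENTIALS_JSON name is our choice;
# GOOGLE_APPLICATION_CREDENTIALS is the standard variable the Google
# client libraries read to locate the key file.
import os
import tempfile

def load_google_credentials(env_var="GOOGLE_CREDENTIALS_JSON"):
    """Write the key JSON from the environment to a temp file and point
    the Google client libraries at it."""
    key_json = os.environ[env_var]  # raises KeyError if the var is missing
    f = tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False)
    f.write(key_json)
    f.close()
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = f.name
    return f.name

# Example with a dummy key; a real deployment sets the config var on Heroku:
os.environ["GOOGLE_CREDENTIALS_JSON"] = '{"type": "service_account"}'
path = load_google_credentials()
```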
Accomplishments that we're proud of
We are proud of our intuitive, user-friendly design, especially as a team with no designers. We also successfully implemented every feature we planned, which we are very happy about.
What's next for ShopAdvisr
Working directly with a company and having access to their full product database would greatly improve the app's data without the need to scrape the website, and would open up more options for expansion.
Built With
- assemblyai
- automation
- datamuse
- flask
- google-vision
- javascript
- machine-learning
- python
- react-native
- selenium