Our descriptive model
Testing portion of our dashboard
1. Collecting Data for our model.
2. Training our model.
3. Testing our model. Who doesn't love a funny meme? Meme + AI = a beautiful world.
This man needs to share!
Confidently determining that the image on the left is indeed hand sanitizer!
Identifying toilet paper confidently with an elegant score of 90%. Not bad for a couple of hours of training!
Toilet paper and sanitizer are increasing in value nowadays!
In order to test against our model, we need to drop in some images!
Testing the negative set used in training, in order to create a contrast between true positives and false positives.
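The testing step above boils down to reading confidence scores: pick the strongest class, and fall back to "miscellaneous" when confidence is too low. A minimal sketch, where the class names and the 0.9 cutoff are our own illustrative choices, not part of Watson's API:

```python
def label_image(scores, threshold=0.9):
    """Pick the highest-scoring class, or 'miscellaneous' below threshold.

    `scores` maps class name -> confidence in [0, 1].
    """
    best_class, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score < threshold:
        # Low confidence: likely a negative / miscellaneous item.
        return "miscellaneous", best_score
    return best_class, best_score

# Example: the toilet-paper test image scoring 90%.
print(label_image({"toilet_paper": 0.90, "sanitizer": 0.05}))
```

The negative set matters here: it teaches the classifier to push scores for off-target images below the cutoff, so they land in the miscellaneous bucket rather than as false positives.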
During the COVID-19 pandemic, many have feared a shortage of toilet paper. Stores are running out fast, and using a human to check inventory can be inefficient. Why not automate that process with forward-thinking computer vision? Why not use object detection to estimate how many items are on the shelf?
What it does
Using Watson Studio and Watson Visual Recognition, our team was able to label, train, and test our model for detecting whether an item on a shelf is toilet paper, sanitizer, or a miscellaneous item. This solves the product-identification problem for businesses to begin with. We are currently working toward capturing inventory as a whole, in other words, keeping track of stock: identifying when supply is running low and alerting the supply chain for more before a shortage occurs.
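The stock-tracking goal can be sketched as counting detected labels per product and raising an alert when a count drops below a cutoff. The threshold of 5 and the alert logic here are our own illustrative assumptions, not implemented behavior:

```python
from collections import Counter

LOW_STOCK_THRESHOLD = 5  # illustrative cutoff, not a real config value

def check_inventory(detected_labels, threshold=LOW_STOCK_THRESHOLD):
    """Count detected items per product and flag anything running low."""
    counts = Counter(detected_labels)
    alerts = [item for item, n in counts.items() if n < threshold]
    return counts, alerts

# Example: the model spotted 2 packs of toilet paper and 7 sanitizers.
labels = ["toilet_paper"] * 2 + ["sanitizer"] * 7
counts, alerts = check_inventory(labels)
print(counts, alerts)
```

In a deployed version, `alerts` would be what triggers the message to the supply chain before the shelf actually empties.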
How I built it
We struggled tremendously at first due to our lack of experience with machine learning, let alone a machine learning API. We first created an IBM Cloud account, then set up Watson Studio, and then added the Watson Visual Recognition service to our project. Once the workspace was set up, we created custom visual classifiers. This involved gathering our images, cleaning them, labeling them, and including negatives (which help contrast what we want against everything else). After that, we trained the model. Once trained, you can drag and drop any image relevant to our labels and it provides a confidence score giving the likelihood that it is indeed what we are viewing.
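As a rough sketch of the drag-and-drop test at the end of that workflow, here is how a Watson Visual Recognition-style classify response could be flattened into ranked (class, score) pairs. The nested JSON shape and field names are approximated from memory of the v3 API, and the classifier ID is hypothetical; treat all of them as assumptions:

```python
# Approximated shape of a Watson Visual Recognition v3 classify response.
response = {
    "images": [{
        "classifiers": [{
            "classifier_id": "shelfAlert_model",  # hypothetical ID
            "classes": [
                {"class": "toilet_paper", "score": 0.90},
                {"class": "sanitizer", "score": 0.07},
            ],
        }],
    }],
}

def top_classes(response):
    """Flatten the nested response into (class, score) pairs, best first."""
    pairs = [
        (c["class"], c["score"])
        for image in response["images"]
        for classifier in image["classifiers"]
        for c in classifier["classes"]
    ]
    return sorted(pairs, key=lambda p: p[1], reverse=True)

print(top_classes(response))
```

The first pair in the result is the model's best guess, which is what the dashboard surfaces as the headline confidence score.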
Challenges I ran into
GitHub: We ran into many challenges as a team. My team and I were very new to GitHub, and although we knew it is a key facet of programming, we set out to learn some of it prior to collaborating. Our issues arose when we began dealing with merge conflicts.
Machine Learning / Object Detection: This was completely uncharted territory for all of us. We were so intimidated that we went back and forth with ideas (we changed ideas at least three times). Our tenacity to learn and understand how machine learning works, even at a basic, abstracted level, made up for our inexperience in the topic.
Accomplishments that I'm proud of
We are proud of understanding the basics of object detection as a whole, of labeling data, and of working in such a collaborative and positive way. Our team used Zoom, which was such an important tool when it came time to come together. My favorite feature in Zoom is the ability for another member of your conference to take over your computer if given access. This helped streamline anything we needed to work on.
What I learned
GitHub -> IBM Watson Visual Recognition -> Zoom (remote use)
What's next for shelfAlert
Our grand vision is to provide this technology in hardware form and deploy the device in stores worldwide. As a result of our solution, we can reduce how long shipment of supplies takes, meet demand right away without delay, and assure local families that, at the end of the day, there is enough toilet paper to go around.