The inspiration for this project came from our personal experiences living with bad roommates who overstuffed the fridge with food that often expired, or who took our food. We wanted to tackle this problem and build a solution for managing the inventory of a shared fridge or closet.
What it does
The solution uses cameras to monitor who is placing items into or removing items from the fridge, and what those items are. It achieves this using computer vision to recognize the user from the set of users registered to a given device.
How we built it
The solution was split into two main parts: the machine learning side and the user interface. The machine learning side was built entirely in Python, from client to server, using OpenCV, Flask, and TensorFlow, with the pre-trained Inception v3 model. This half of the project also controlled the device's cameras to detect movement and isolate objects to be recognized. The user interface was built in React, using Bootstrap and Semantic UI; it let users check the inventory of their devices and see which items each user had.
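The movement detection mentioned above can be sketched with simple frame differencing: compare consecutive frames and flag motion when enough pixels change. This is a minimal illustration using NumPy arrays in place of real camera frames; the function name and thresholds are ours, not the project's actual code.

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, threshold=25, min_changed_ratio=0.01):
    """Flag motion when enough pixels change between consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > threshold).mean()  # fraction of pixels that changed
    return changed > min_changed_ratio

# Two synthetic 64x64 grayscale "frames": the second has a bright patch added.
prev_frame = np.zeros((64, 64), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[10:30, 10:30] = 200  # simulate an object entering the frame

print(motion_detected(prev_frame, prev_frame))  # no change
print(motion_detected(prev_frame, curr_frame))  # patch changed ~10% of pixels
```

In a real deployment the frames would come from OpenCV's camera capture, and a detected change would trigger cropping the region and sending it on for recognition.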
The two sides were bridged using Node.js and MongoDB, which acted as the central data store and communication point between the machine learning side and the user interface.
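As a rough illustration of what that central store might hold, each detected add/remove could be recorded as one inventory-event document. The field names below are our assumption for illustration, not the project's exact schema.

```python
from datetime import datetime, timezone

def make_inventory_event(device_id, user_id, item_label, action):
    """Build the kind of document the Node/MongoDB layer might store for
    each add/remove detected by the vision side. Field names here are
    illustrative, not the project's actual schema."""
    assert action in ("added", "removed")
    return {
        "deviceId": device_id,
        "userId": user_id,
        "item": item_label,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

event = make_inventory_event("fridge-01", "alice", "milk", "added")
print(event["item"], event["action"])  # milk added
```

The user interface can then answer "what is in this device" and "which items belong to each user" by querying these events per device and per user.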
Challenges we ran into
The computer vision side of the project was extremely challenging in the limited time, especially detecting objects in the frame without using deep learning, so as to keep the client processing load to a minimum. Furthermore, the datasets had to be curated and customized specifically to fit this challenge.
In terms of the back end, it was difficult to connect the Node server to the Python machine learning scripts, so we went with a microservice solution: the machine learning was served on two Flask endpoints to the Node server, which handled all further processing.
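To make the microservice split concrete, the two endpoints can be thought of as one for object recognition and one for user recognition, each returning JSON for the Node server to merge. The sketch below shows that contract as plain stubbed functions rather than live Flask routes; the names, fields, and labels are our assumptions, not the actual API.

```python
# In the project these handlers were served as Flask endpoints to the Node
# server; here they are plain functions showing a plausible JSON contract.
# Endpoint shapes, fields, and labels are assumptions for illustration.

def recognize_object(image_id):
    """Would run the Inception v3 classifier; stubbed with a fixed result."""
    return {"imageId": image_id, "label": "milk", "confidence": 0.91}

def recognize_user(image_id):
    """Would match a face against the device's registered users; stubbed."""
    return {"imageId": image_id, "userId": "alice", "confidence": 0.87}

# The Node server would call both endpoints and merge the results
# into a single inventory event before storing it in MongoDB.
obj = recognize_object("frame-0042")
user = recognize_user("frame-0042")
event = {"item": obj["label"], "by": user["userId"]}
print(event)  # {'item': 'milk', 'by': 'alice'}
```

Keeping all further processing in Node means the Python side stays a thin, stateless inference service, which is what made the two halves easy to connect.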
Finally, on the front end, there were challenges with changing the state of Semantic UI components, which ultimately required us to switch to Bootstrap in the middle of the hack. The front end also posed a greater time challenge, as the primary front-end developer had to catch a flight the night before the hack ended, and was therefore limited in the time she had to develop the user interface.
Accomplishments that we're proud of
The solution worked overall, in the sense that we had a minimum viable product where all the complex moving pieces came together to actually recognize an object and the person who added it.
The computer vision aspect of the project was quite successful given the short time we had to develop the solution, especially the object recognition, which reached a much higher confidence than we expected for such a short hack.
What we learned
There were a lot of takeaways from this hack, especially in terms of DevOps, which posed a major challenge. It made us think a lot more about how difficult deploying IoT solutions is, and the operational complexity they create. Furthermore, the project's ambitious scope offered plenty of learning, as it challenged us to use many new technologies we weren't very comfortable with prior to the hack.
What's next for Koelkast
We hope to keep training the algorithms to reach a higher confidence level, one that would be closer to production ready. Furthermore, we want to look at perfboards custom made to our specific requirements; these boards would be cheaper to produce and deploy, which would be needed to take the project to production.