Inspiration

We've always been interested in keeping track of what we purchase and consume, and existing inventory apps just didn't cut it: they required manual data entry for every item and weren't user-friendly. We decided to build an inventory app that uses visual and audio information as the primary means of entering data, so a user can create and add to an inventory far more quickly. We also wanted the app to be simple to use: just point your phone at an object to add it to your inventory.

We also wanted an inventory app that, instead of hitting a server and waiting on the round trip, uses an in-house neural net to actively search for items to scan and add to your inventory, giving you peace of mind when traveling or on the go.

What it does

Using the camera, the app scans an object in its field of view and follows one of two behaviors depending on whether the object has a barcode. If it does, we use NCR's API to look up more information about the object and auto-populate the required fields. If it doesn't, we identify the object with MobileNet running directly on the phone, which lets us record information about complex, unknown objects without needing the internet, while still allowing the data to be backed up to a secure location in case your device is destroyed.
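
A minimal sketch of the barcode branch is below, assuming a CGImage frame from the camera and Apple's Vision framework for detection; the URL and API key shown are placeholders, not the actual NCR API surface.

```swift
import Foundation
import Vision

// Minimal sketch of the barcode branch, assuming a CGImage frame from the camera.
// The URL and API key below are placeholders, not the real NCR API surface.
func handleFrame(_ frame: CGImage) {
    let barcodeRequest = VNDetectBarcodesRequest { request, error in
        guard error == nil,
              let barcode = (request.results as? [VNBarcodeObservation])?.first,
              let payload = barcode.payloadStringValue else {
            // No barcode found: fall through to the on-device MobileNet classifier.
            return
        }
        lookUpProduct(code: payload)
    }
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([barcodeRequest])
}

// Hypothetical lookup; the real app calls NCR's API and auto-populates the item's fields.
func lookUpProduct(code: String) {
    var components = URLComponents(string: "https://ncr.example.com/items")!   // placeholder URL
    components.queryItems = [URLQueryItem(name: "upc", value: code)]
    var request = URLRequest(url: components.url!)
    request.setValue("YOUR_API_KEY", forHTTPHeaderField: "Authorization")      // placeholder credential
    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data else { return }
        // Parse the product details and pre-fill the inventory entry here.
        print(String(data: data, encoding: .utf8) ?? "")
    }.resume()
}
```

When no barcode is found, the frame falls through to the on-device classifier sketched in the next section.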

How we built it

We used Swift and iOS for the application, Apple's iOS libraries for the barcode/QR-code reader and for running the neural net on the device, and TensorFlow for the model weights and the AI side of the project.
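
A rough sketch of the on-device classification path is below, assuming the TensorFlow MobileNet weights were converted to a Core ML model and that Xcode generated a `MobileNet` class for it; both of those names are assumptions about the build setup rather than confirmed details.

```swift
import CoreML
import Vision

// Rough sketch of the on-device classification path, assuming the TensorFlow MobileNet
// weights were converted to a Core ML model and Xcode generated a `MobileNet` class.
func classify(_ frame: CGImage, completion: @escaping (String, Float) -> Void) {
    guard let coreMLModel = try? MobileNet(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        // The top-1 label and its confidence pre-fill the new inventory entry.
        completion(top.identifier, top.confidence)
    }
    request.imageCropAndScaleOption = .centerCrop
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```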

Challenges we ran into

Neither of us had any experience with iOS development, so simply using iOS was a challenge; we probably wouldn't have finished without the help of several mentors, who were invaluable. We were also unfamiliar with finding and using a neural net that could run within the limited resources of an embedded system like the iPhone.

We had planned to draw bounding boxes around the objects detected by the phone, but we were unable to finish this feature: the neural net we used did not expose useful bounding-box information, and we did not have enough time to switch to a different one.
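
For reference, the sketch below shows roughly what the unfinished feature would have needed: a Core ML object-detection model wrapped in a `VNCoreMLModel`, since Vision only reports bounding boxes through detection-style results, not through a plain MobileNet classifier. This is an illustrative sketch, not code that shipped.

```swift
import Vision

// Sketch of the unfinished bounding-box feature. It assumes a Core ML *object detection*
// model; a plain MobileNet classifier never produces VNRecognizedObjectObservation
// results, which is why this feature stalled.
func detectBoxes(in frame: CGImage, using model: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        let objects = (request.results as? [VNRecognizedObjectObservation]) ?? []
        for object in objects {
            // boundingBox is normalized to [0, 1] with the origin at the bottom-left.
            let label = object.labels.first?.identifier ?? "unknown"
            print("\(label): \(object.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```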

Accomplishments that we're proud of

We managed to fit a reasonably accurate convolutional neural net into an iPhone app over the course of a weekend. We also pulled off a rather tricky project for our first iPhone application and made sure that all data stays on the device.

What we learned

It's possible to take a fairly complicated design and turn it into a functioning app, even without prior experience with the required tools. While both of us have experience in other areas, integrating machine learning with iOS development was not one of them, and this hackathon has been a great learning experience for both of us.

What's next for Sortify

Improving the user interface and building better models to track our users' inventory information. A clearer way of displaying which objects are currently detected, along with handling objects that have already been detected, would make the app much better.
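
One possible approach to the "already detected" problem is sketched below: keep a per-session set of labels that have already been added and only create a new entry for a sufficiently confident, unseen label. The confidence cutoff and session reset are assumptions that would need tuning.

```swift
// Possible deduplication sketch: add an item at most once per scanning session,
// and only when the classifier is reasonably confident. Threshold is an assumption.
final class DetectionDeduplicator {
    private var seenLabels = Set<String>()
    private let minimumConfidence: Float = 0.6   // assumed cutoff, would need tuning

    /// Returns true if this detection should create a new inventory entry.
    func shouldAdd(label: String, confidence: Float) -> Bool {
        guard confidence >= minimumConfidence else { return false }
        return seenLabels.insert(label).inserted
    }

    /// Call when the user starts a new scanning session.
    func reset() {
        seenLabels.removeAll()
    }
}
```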

Built With

Swift, iOS, TensorFlow, MobileNet, NCR API
