Inspiration

After seeing a sign in Klaus describing a lengthy process for retrieving lost items, we realized that Georgia Tech has a chaotic, decentralized system for reporting and retrieving lost items. We ourselves have frequently seen clearly lost items in the CULC but have not reported them because the reporting process was so unclear. We came up with SnapFind as a remedy, hoping to streamline the lost and found process at Georgia Tech.

What it does

SnapFind is an application that lets users report lost items they find, adding each item to a database. People who have lost something can then search the database to track their item down. Under the hood, a convolutional neural network (CNN) classifies each user-submitted image into an object type, so every item is tagged and the tags can be used as search filters. For example, a picture of a laptop submitted by a user is tagged as a laptop by the CNN, and that tag can then narrow the database search. The model was trained on a large dataset of annotated images.
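Our actual model is larger than what fits here, but the tagging idea can be sketched roughly as follows in PyTorch. The architecture, the `TAGS` list, and the 64x64 input size are all illustrative placeholders, not SnapFind's real configuration:

```python
import torch
import torch.nn as nn

# Hypothetical tag vocabulary; the real label set may differ.
TAGS = ["laptop", "phone", "water_bottle", "backpack", "keys"]

class SmallCNN(nn.Module):
    """Minimal CNN classifier: two conv blocks followed by a linear head."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)            # (N, 32, 16, 16) for 64x64 input
        return self.head(x.flatten(1))  # (N, num_classes) logits

def tag_image(model: nn.Module, image: torch.Tensor) -> str:
    """Return the predicted tag for a single (3, 64, 64) image tensor."""
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))
    return TAGS[logits.argmax(dim=1).item()]

model = SmallCNN(num_classes=len(TAGS))
print(tag_image(model, torch.rand(3, 64, 64)))  # one of the tags above
```

The returned tag string is what gets stored alongside the item's database row, so the search page can filter on it directly.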

How we built it

We trained a CNN from the ground up using publicly available datasets, then connected it to a React front-end app and a Supabase database to provide a smooth user experience.

Challenges we ran into

Finding datasets was tricky, since no single dataset had images for all the object types we wanted to cover. We had to process and combine numerous datasets into one overall dataframe that our CNN could train on. The API setup was also tricky: we had to serve our large CNN while keeping it memory-efficient to avoid overloading the API. Figuring out how we wanted to design our UI was another big hurdle, and it took many tries to get the website's layout to look exactly the way we wanted.
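The dataset-combining step looked roughly like the pandas sketch below. The file paths, source labels, and `LABEL_MAP` here are made-up examples; the point is mapping each source's label vocabulary onto one shared tag set before concatenating:

```python
import pandas as pd

# Two hypothetical source datasets, each with its own label vocabulary.
dataset_a = pd.DataFrame({
    "path": ["a/001.jpg", "a/002.jpg", "a/003.jpg"],
    "label": ["notebook_computer", "cellphone", "umbrella"],
})
dataset_b = pd.DataFrame({
    "path": ["b/101.jpg"],
    "label": ["laptop"],
})

# Map every source label onto one shared tag vocabulary.
LABEL_MAP = {"notebook_computer": "laptop", "cellphone": "phone", "laptop": "laptop"}

frames = []
for df in (dataset_a, dataset_b):
    df = df.copy()
    df["label"] = df["label"].map(LABEL_MAP)
    frames.append(df.dropna(subset=["label"]))  # drop labels outside our tag set

combined = pd.concat(frames, ignore_index=True)
print(combined["label"].tolist())  # ['laptop', 'phone', 'laptop']
```

Rows whose labels fall outside the shared vocabulary (like `umbrella` above) are dropped, so the resulting dataframe only contains classes the CNN is actually trained on.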

Accomplishments that we're proud of

One of the things we are proudest of is the model's accuracy: we reached 85% after only a few hours of training. Our UI/UX is modern and clean, making the experience both intuitive and visually appealing.

What we learned

By developing SnapFind, we gained a deeper understanding of the computer vision techniques behind object classification. We sharpened our frontend and backend skills, using a stack that includes React and TypeScript to build the website that displays the lost item database. On the backend, we used Python, FastAPI, PyTorch, and TensorFlow to prepare the object dataset and to train and test a convolutional neural network (CNN) model. By experimenting with several API hosting services, we also learned how to connect the backend to the frontend: we used Render to host an API so that SnapFind could classify submitted images and store them on the website for users to find and return lost items.

One point of emphasis we took away from HackGT is that constant communication and collaboration between the people working on the frontend and the backend is essential, because adapting and testing code becomes difficult later on. Beyond the technical scope, each team member significantly improved their collaboration, time management, and communication skills. By delegating tasks to the members most proficient in each area, we made completing our final project while working together comfortably a seamless process.

What's next for SnapFind

We have a few features planned for the future of SnapFind. One is a leaderboard system that incentivizes users to interact with and help one another. We also plan to add more powerful search functions, such as sorting the available entries by visual similarity to a reference image of an object. Mobile compatibility is also planned, in order to reach even more users and maximize accessibility.
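The planned similarity search could work by comparing feature embeddings. As a minimal sketch, assuming each entry already has an embedding vector (here toy 3-d vectors stand in for CNN features, and the item IDs are invented):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_by_similarity(query: np.ndarray, entries: dict) -> list:
    """Return entry IDs sorted from most to least similar to the query."""
    return sorted(entries, key=lambda k: cosine_similarity(query, entries[k]),
                  reverse=True)

# Toy embeddings standing in for CNN feature vectors of stored items.
entries = {
    "item-1": np.array([1.0, 0.0, 0.0]),
    "item-2": np.array([0.9, 0.1, 0.0]),
    "item-3": np.array([0.0, 1.0, 0.0]),
}
query = np.array([1.0, 0.05, 0.0])
print(rank_by_similarity(query, entries))  # ['item-1', 'item-2', 'item-3']
```

In practice the embeddings would come from an intermediate layer of the classifier (or a dedicated embedding model), and a vector index would replace the linear scan once the database grows.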

Built With

python, fastapi, pytorch, tensorflow, react, typescript, supabase, render
