Inspiration

Chris Davie was walking around Washington D.C. for the first time and had no idea what buildings, statues, or structures he was looking at. Since he wanted an easy way to look up information about a building or statue right in front of him, he thought of creating InSight.

What it does

InSight uses your phone's camera to take a picture of whatever you're looking at, then sends that picture along with your current location to our server. After some processing, the application tells you what you're probably looking at.
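The request the app sends can be sketched as a multipart POST carrying the photo plus a GPS fix. This is a minimal illustration, not the actual Android client code; the `/identify` endpoint name and field names are assumptions.

```python
import io

def build_identify_request(image_bytes, lat, lng):
    """Package a photo and GPS fix for a hypothetical /identify endpoint.

    Returns (files, data) suitable for a multipart POST, e.g.
    requests.post(url, files=files, data=data).
    """
    files = {"photo": ("photo.jpg", io.BytesIO(image_bytes), "image/jpeg")}
    data = {"lat": f"{lat:.6f}", "lng": f"{lng:.6f}"}
    return files, data

# Placeholder JPEG bytes; a real client would pass the camera capture.
files, data = build_identify_request(b"\xff\xd8", 38.889484, -77.035278)
# data -> {"lat": "38.889484", "lng": "-77.035278"}
```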

How we built it

We developed the front end as an Android application. The app uses your camera to take a picture of what's in front of you, then submits that picture along with your location to our server. The server uses Foursquare to collect the names and photos of venues close to you. We pass those photos and names to a Clarifai service we developed, which trains a model on the photos using the venue names as concepts. We then use that model to predict what the user's image shows and return the resulting probabilities to the user. We give the user the ability to choose which place is actually in front of them, and use that choice to retrain our model. Once the user has chosen a building, the app queries Wikipedia for information about the building or statue and presents it to the user.
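Two pieces of that pipeline can be sketched with small, self-contained helpers: ranking the model's concept predictions, and building the MediaWiki API query for the chosen place. The concept-dict shape `{"name": ..., "value": ...}` mirrors what Clarifai returns for custom-model predictions; everything else here is illustrative, not our production code.

```python
def rank_predictions(concepts):
    """Sort Clarifai-style concept predictions by probability, descending.

    `concepts` is a list of {"name": ..., "value": ...} dicts.
    """
    return sorted(concepts, key=lambda c: c["value"], reverse=True)

def wikipedia_extract_params(title):
    """Query params for the MediaWiki API (TextExtracts) to fetch a
    plain-text intro for the place the user picked."""
    return {
        "action": "query",
        "prop": "extracts",
        "exintro": 1,
        "explaintext": 1,
        "format": "json",
        "titles": title,
    }

guesses = rank_predictions([
    {"name": "Washington Monument", "value": 0.42},
    {"name": "Lincoln Memorial", "value": 0.91},
])
# guesses[0]["name"] -> "Lincoln Memorial"
```

The ranked list is what the app shows the user to pick from; the chosen name then becomes the `titles` value in the Wikipedia query.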

Challenges we ran into

We had a hard time figuring out how to upload a file from the device to our server; it easily took the most time of the entire development period. We also ran into some issues with Foursquare's API regarding the format of the JSON it returns.
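JSON-shape surprises like the Foursquare one can be contained with defensive parsing. This sketch assumes the Foursquare v2 `venues/search` response shape (`response.venues[].name`) and simply skips anything malformed rather than crashing:

```python
def venue_names(payload):
    """Pull venue names out of a Foursquare v2 venues/search response.

    Missing or oddly shaped keys yield an empty (or shorter) list
    instead of a KeyError.
    """
    venues = payload.get("response", {}).get("venues", []) or []
    return [v["name"] for v in venues if isinstance(v, dict) and "name" in v]

sample = {"response": {"venues": [{"name": "Lincoln Memorial"}, {"id": "x"}]}}
venue_names(sample)  # -> ["Lincoln Memorial"]
```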

Accomplishments that we're proud of

We actually finished the app, which is the first time the four of us have completed a project together at a hackathon. We were also proud that, unlike in the past, we weren't limited by our technical skill in building the app, but instead by obscure errors in the platforms we used and by API limits.

What we learned

We learned how to build and train a Clarifai model, as well as how to use that model to make predictions on new images.
