Inspiration

When exploring the resources available for the hackathon, we discovered the Google Cloud Vision API and were impressed by its capabilities. We wanted to learn how to use it in depth and decided to do so by building a more user-friendly app around it.

What it does

The user either takes a photo through the app, which hands off to the phone's camera, or chooses a photo from their camera roll. The app then analyzes the picture with Google Cloud Vision and outputs a list of labels describing the subjects in the photo. Each label is also a link to its corresponding Wikipedia page.
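
To illustrate the last step, here is a minimal sketch of turning a Vision label into a Wikipedia link. `wikipediaUrl` is a hypothetical helper name for illustration, not taken from the app's code:

```kotlin
import java.net.URLEncoder

// Hypothetical helper: map a Vision label such as "Golden retriever"
// to the matching Wikipedia article URL.
fun wikipediaUrl(label: String): String {
    // Wikipedia article titles use underscores in place of spaces.
    val title = URLEncoder.encode(label.replace(' ', '_'), "UTF-8")
    return "https://en.wikipedia.org/wiki/$title"
}

fun main() {
    // Prints https://en.wikipedia.org/wiki/Golden_retriever
    println(wikipediaUrl("Golden retriever"))
}
```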

How I built it

We researched the Google Cloud Vision API, how to launch the phone's camera from the app, how to access the camera roll, and how to integrate all of these features in Android Studio.
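
For the camera and camera-roll pieces, Android's standard intents are the usual route. The sketch below shows them in Kotlin, assuming the classic startActivityForResult flow; the request codes are hypothetical, and we can't confirm this is exactly how the app wires it up:

```kotlin
import android.app.Activity
import android.content.Intent
import android.provider.MediaStore

// Hypothetical request codes for matching results in onActivityResult.
const val REQUEST_CAMERA = 1
const val REQUEST_GALLERY = 2

// Hands control to the phone's camera app; a thumbnail comes back
// to onActivityResult under the "data" extra.
fun launchCamera(activity: Activity) {
    val intent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    activity.startActivityForResult(intent, REQUEST_CAMERA)
}

// Opens the system picker filtered to images (the camera roll).
fun pickFromCameraRoll(activity: Activity) {
    val intent = Intent(Intent.ACTION_GET_CONTENT).apply { type = "image/*" }
    activity.startActivityForResult(intent, REQUEST_GALLERY)
}
```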

Challenges I ran into

Using the Google Cloud Vision API required building JSON request bodies and sending them over HTTP POST, neither of which we had any experience with, so it took some effort to get working.
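
For a feel of what that POST looks like, below is a minimal Kotlin sketch of the documented images:annotate REST call with LABEL_DETECTION. The API key handling, threading, and JSON response parsing are simplified, and `annotateImage` is our name for the helper, not the app's:

```kotlin
import java.net.HttpURLConnection
import java.net.URL
import java.util.Base64

// Sketch of the Vision REST call: POST a JSON body containing the
// base64-encoded image and the features we want back.
// Note: java.util.Base64 needs Android API 26+; older versions would
// use android.util.Base64 instead. Real code would also run this off
// the main thread and parse the JSON response properly.
fun annotateImage(imageBytes: ByteArray, apiKey: String): String {
    val base64Image = Base64.getEncoder().encodeToString(imageBytes)
    // LABEL_DETECTION asks Vision for labels describing the photo's subjects.
    val body = """
        {"requests":[{"image":{"content":"$base64Image"},
         "features":[{"type":"LABEL_DETECTION","maxResults":10}]}]}
    """.trimIndent()

    val url = URL("https://vision.googleapis.com/v1/images:annotate?key=$apiKey")
    val conn = url.openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Content-Type", "application/json")
    conn.outputStream.use { it.write(body.toByteArray()) }
    // Returns the raw JSON response, which contains the label annotations.
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```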

Accomplishments that I'm proud of

Integrating the camera, camera-roll access, and Google Cloud Vision into a single app.

What I learned

We learned how to use Android Studio (none of us knew how to use it beforehand) and the Google Vision API.

What's next for HawkSight

One potential next feature is translating the labels into different languages.
