The beautiful and Instagrammable ceiling of Caltech's auditorium.
Experts estimate that of the 1.2 trillion photos captured in 2017, 85% were taken on a smartphone. Undoubtedly, the Digital Age has revolutionized the way we consume media. Today, cameras are more accessible than ever, and photography dominates all social media platforms. Over the years, we've witnessed a dramatic shift from private, commemorative photos to public, impulsive ones. As photography becomes increasingly spontaneous, people need an efficient way to choose hashtags for their photos without wasting a minute.
What it does
Hash It combines Google Cloud's powerful Vision API with a tastefully minimalistic interface to provide appropriate, real-time hashtags. Our app aims to lessen the amount of time you spend documenting your life by doing the thinking for you.
How we built it
Hash It was primarily made using React Native. We utilized the Google Cloud Vision API with the credits we received courtesy of MLH. We also incorporated Google's Firebase API for the backend and database functionality.
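The core idea can be sketched as a small helper that turns Vision API label annotations into hashtags. This is an illustrative sketch, not our actual source: the `labelsToHashtags` function name and the score threshold are assumptions, though the response shape (`labelAnnotations` entries with `description` and `score` fields) matches Google Cloud Vision's `images:annotate` REST response.

```javascript
// Hypothetical helper: convert Vision API label annotations into hashtags.
// Each annotation looks like { description: "Golden Retriever", score: 0.95 },
// per the Vision API's images:annotate response format.
function labelsToHashtags(labelAnnotations, minScore = 0.7) {
  return labelAnnotations
    .filter((label) => label.score >= minScore) // drop low-confidence labels
    .map((label) => '#' + label.description.toLowerCase().replace(/\s+/g, ''));
}
```

In the app, a function like this would run over the API response for each photo before rendering the suggestions in the UI.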
Challenges we ran into
This hackathon was particularly challenging because none of us had much experience programming in React Native or integrating APIs. Throughout these 36 hours, we struggled with foreign scripts, unfriendly permissions, finicky installations, and other daunting procedures.
Accomplishments that we're proud of
As novices, we take pride in knowing that we pushed ourselves to take on the challenges we chose and persevered. We were also proud of our creativity, even though our ambitions were beyond the scope of a 36-hour competition.
What we learned
Along this journey, we learned how to integrate various APIs, how to Google, and not to hesitate to ask for help from an experienced mentor. Most of all, we learned that React Native can be unforgiving at times.
What's next for Hash It
At the start of the competition, Hash It had big plans. Unfortunately, we didn't get to implement every feature we had hoped to. In the future, we hope to integrate the Google Street View API so that when a user taps a particular location, pictures of the corresponding building are fetched and run through Google's Vision API to provide a general description of the location.
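As a rough sketch of how that feature might start, the Street View Static API can return an image for a given coordinate via a simple URL; that image could then be fed to the Vision API as above. The `streetViewUrl` helper name and the default size are assumptions; the endpoint and `size`/`location`/`key` parameters are from the Street View Static API.

```javascript
// Hypothetical sketch: build a Street View Static API request URL for a
// tapped location. The resulting image could then be sent to the Vision API.
function streetViewUrl(lat, lng, apiKey, size = '600x400') {
  const base = 'https://maps.googleapis.com/maps/api/streetview';
  const params = new URLSearchParams({
    size,                        // image dimensions, e.g. "600x400"
    location: `${lat},${lng}`,   // latitude,longitude of the tapped point
    key: apiKey,                 // API key (billing-enabled project)
  });
  return `${base}?${params.toString()}`;
}
```

Fetching this URL, uploading the bytes to Vision, and summarizing the labels would be the remaining steps.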