- Veer Gadodia
- Nand Vinchhi
Email for contact - email@example.com
Have you ever walked past a monument or landmark site, wondering what it is and why it was built? Many questions pop into our heads, leaving us with a longing for more information.
Furthermore, the COVID-19 pandemic has upended daily life, and the travel industry has been among the hardest hit: people have been unable to travel, whether to see family, for personal growth and enrichment, or to discover new cultures. In this time of economic hardship, many people can no longer afford expensive tour guides, not to mention the transmission risk of touring in groups.
When brainstorming, we were clear that, given the current circumstances, we wanted to create an app that would be genuinely useful to people in real life. We believe machine learning can educate people about the important historical sites they visit through a simple interface, making travelling both more informative and safer: it requires no person-to-person contact, unlike travelling in a large cluster of people with a tour guide.
What it does
Landmark is a mobile application that uses computer vision to instantly recognize monuments and historic landmarks from a photo you take or upload, then presents relevant, detailed information about the site, with external links, as an easy-to-read flash card. You can snap the picture in real time at the site or upload a previously taken photo; the latter is an especially common use case, since people often want to recall or learn more about a site they visited in the past.
Landmark also provides a maps page that integrates seamlessly with Google Maps, instantly surfacing nearby monuments and historical sites the user can visit, along with information about each site.
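The nearby-sites lookup can be sketched as a Google Places Nearby Search call. This is a minimal illustration, not our exact code; `nearby_sites_params` is a hypothetical helper name, and the `tourist_attraction` type is one plausible choice for filtering to landmark-like places.

```python
# Base URL of the Google Places Nearby Search REST endpoint.
PLACES_NEARBY_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def nearby_sites_params(lat: float, lng: float, api_key: str,
                        radius_m: int = 5000) -> dict:
    """Assemble query parameters for a Nearby Search limited to
    tourist attractions around the user's current location."""
    return {
        "location": f"{lat},{lng}",    # latitude,longitude as one string
        "radius": radius_m,            # search radius in metres
        "type": "tourist_attraction",  # built-in Places type closest to "landmark"
        "key": api_key,
    }

# The back-end would then issue something like:
#   requests.get(PLACES_NEARBY_URL, params=nearby_sites_params(18.922, 72.8347, KEY))
# and read candidate sites from the "results" array of the JSON response.
```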
How We built it
- React Native for the mobile app front-end
- Flask for the back-end, database, and API integrations
- MongoDB Atlas for storing user data
- Google Cloud Vision for the camera vision model
- TensorFlow.js for a second image classification model (not used in the end product, as Google Cloud Vision performed with higher accuracy)
- Wikipedia API for flash card data
- Google Places API for finding nearby sites
- Figma for prototyping and UI design
- Heroku to host back-end endpoints
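The flash card data comes from Wikipedia; one simple way to fetch it is Wikipedia's REST page-summary endpoint, whose response includes a title, a short plain-text extract, and page URLs that map cleanly onto flash card fields. This is a sketch of that approach, not our exact integration, and `flashcard_request_url` is a hypothetical helper name.

```python
from urllib.parse import quote

WIKI_SUMMARY_ENDPOINT = "https://en.wikipedia.org/api/rest_v1/page/summary/"

def flashcard_request_url(landmark_name: str) -> str:
    """Build the Wikipedia REST summary URL for a recognized landmark."""
    # Wikipedia page titles use underscores in place of spaces; any
    # other special characters must be percent-encoded.
    title = landmark_name.strip().replace(" ", "_")
    return WIKI_SUMMARY_ENDPOINT + quote(title, safe="_")

# e.g. flashcard_request_url("Gateway of India")
# → "https://en.wikipedia.org/api/rest_v1/page/summary/Gateway_of_India"
```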
Challenges We ran into
The major challenge we faced was integrating React Native with Google Cloud Vision: encoding the image into base64 took us nearly two hours to get right. The other challenge was getting the Wikipedia API to work properly.
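The base64 issue comes down to the shape Cloud Vision's REST `images:annotate` endpoint expects: the raw image bytes must be base64-encoded into a plain string under `image.content`. A minimal sketch of building that request body (in Python, mirroring what our pipeline does; `vision_landmark_request` is a hypothetical helper, not our exact code):

```python
import base64

def vision_landmark_request(image_bytes: bytes, max_results: int = 3) -> dict:
    """Build the JSON body for a Cloud Vision `images:annotate` call
    that runs landmark detection on one image."""
    # The REST API wants the image as a base64 string, not raw bytes
    # and not a data-URI -- getting this exactly right was the tricky part.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "requests": [{
            "image": {"content": encoded},
            "features": [{"type": "LANDMARK_DETECTION",
                          "maxResults": max_results}],
        }]
    }
```

The response's `landmarkAnnotations` then give the landmark name that we pass on to the flash card lookup.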
Accomplishments that We're proud of
We are proud of creating a novel app that actually works at production quality, of integrating the Google Cloud Vision, Google Places, and Wikipedia APIs, and of building a responsive, clean UI despite being relatively inexperienced in React Native.
What We learned
We learned how to use the Google Cloud Vision API, the Google Places API, the Wikipedia API, MongoDB Atlas, and React Native; this was the first time we had used React Native in a real project.
What's next for Landmark
We want to build a more sophisticated system for capturing monument images. We would like to migrate our back-end to Node.js for robustness and scalability, and host it on AWS or Azure instead of Heroku. We would also like to make the app friendlier to Android and older iOS versions. Lastly, we want to fix the remaining minor bugs and release the app on the App Store and Google Play Store.