Presentation + Award

See the presentation and awards ceremony here: https://www.youtube.com/watch?v=jd8-WVqPKKo&t=351s&ab_channel=JoshuaQin

Inspiration

Back when we first came to the Yale campus, we were stunned by the architecture and the public works of art. One monument in particular stood out to us for its oddity and its prominence: Lipstick (Ascending) on Caterpillar Tracks in the Morse College courtyard. We learned from fellow students about the background and history of the sculpture, as well as personal stories of how students have used and interacted with it over time.

One of the great joys of traveling to new places is learning about a community from locals, information that is often not recorded anywhere else. From monuments to parks to buildings, every community has interesting fixtures with stories behind them that would otherwise go untold. We wanted to create a platform where people can easily discover and share those stories with one another.

What it does

Our app allows anybody to point their phone camera at an interesting object, snap a picture of it, and learn the story behind it. Users can also browse interesting fixtures in the area around them, add new fixtures and stories of their own, or expand existing stories with their own information and experiences.

In addition to user-generated content, we wrote scripts that scraped Wikipedia for the geographic locations, names, and descriptions of interesting monuments around the New Haven community. The scraped data served both for testing and as seed data for the app, to encourage early adoption.
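A scraper along these lines can be built on the MediaWiki GeoSearch API. The endpoint and query parameters below are the real API; the helper names and the record shape are our own illustrative sketch, not the exact script we ran:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_nearby(lat, lon, radius_m=10000, limit=50):
    """Query the MediaWiki GeoSearch API for pages near a coordinate."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,
        "gslimit": limit,
        "format": "json",
    }
    return requests.get(API, params=params, timeout=10).json()

def parse_geosearch(payload):
    """Flatten a GeoSearch response into name/lat/lon fixture records."""
    return [
        {"name": p["title"], "lat": p["lat"], "lon": p["lon"]}
        for p in payload.get("query", {}).get("geosearch", [])
    ]
```

Each record can then be enriched with a page summary and stored as an initial fixture entry.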

How we built it

We used a combination of GPS location data and Google Cloud's image comparison tools to match any photo snapped of a fixture against our database. The app identifies a fixture by first narrowing the search to all known fixtures within a fixed radius of the user, then scoring the similarity between the user's photo and the known images of those fixtures. Once the object is identified, we show the user its description. The app also exposes endpoints that let members of the community contribute their knowledge by modifying descriptions.
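The two-stage lookup can be sketched as follows. The function and field names are our own, and `similarity` is a stand-in for the Google Cloud image comparison call, not its actual API:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def identify(user_lat, user_lon, photo, fixtures, similarity, radius_m=500):
    """Return the best-matching known fixture near the user, or None.

    `fixtures` is an iterable of dicts with "lat", "lon", and "image" keys;
    `similarity(photo, image)` is any scorer returning higher-is-better.
    """
    # Stage 1: restrict to fixtures within a fixed radius of the user.
    nearby = [f for f in fixtures
              if haversine_m(user_lat, user_lon, f["lat"], f["lon"]) <= radius_m]
    # Stage 2: rank the remaining candidates by image similarity.
    return max(nearby, key=lambda f: similarity(photo, f["image"]), default=None)
```

Filtering by radius first keeps the expensive image comparison limited to a handful of candidates.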

Our client application is a PWA written in React, which lets us quickly deploy a lightweight, mobile-friendly app to as many devices as possible. Our server is written in Python with Flask, and we use Redis as our data store.

We used GitHub for source control and collaboration, organizing the project into three layers and giving each its own repository in a GitHub organization. We used GitHub projects and issues to keep track of our to-dos and assign roles to different members of the team.

Challenges we ran into

The first challenge we ran into was that Google Cloud's image comparison tools are designed to recognize products rather than arbitrary images. They still worked well for our purposes, but required workarounds. Because products can only be tagged with product categories, not geographic data, we could not restrict image recognition to a specific geographic area, which could pose challenges at scale. One workaround we discussed was to create several regions with overlapping fixtures, so that image comparisons could be limited to any given user's immediate surroundings.
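The overlapping-regions workaround could look something like this sketch, which we did not build during the hackathon: partition the map into grid cells (one product set per cell, hypothetical naming), and register any fixture near a boundary in the neighboring cells too, so a user standing at an edge still matches it:

```python
from math import floor

def covering_cells(lat, lon, cell_deg=0.05, overlap_deg=0.005):
    """Return every grid-cell id a fixture should be registered under.

    Each cell is a cell_deg x cell_deg square; a fixture within
    overlap_deg of a boundary is also registered in the adjacent cells.
    """
    cells = set()
    for dlat in (-overlap_deg, 0.0, overlap_deg):
        for dlon in (-overlap_deg, 0.0, overlap_deg):
            # Nudge the point toward each neighbor and record that cell.
            cells.add((floor((lat + dlat) / cell_deg),
                       floor((lon + dlon) / cell_deg)))
    return cells
```

At query time the user's location maps to exactly one cell, and the comparison runs only against the fixtures registered there.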

This was also the first time many of us had used Flask, and we had a difficult time choosing an appropriate architecture and project structure. As a result, the integration between the frontend, middleware, and AI engine is not completely finished, although each component is fully functional on its own. Our team also faced various technical difficulties throughout the hackathon.

Accomplishments that we're proud of

We're proud of completing a fully functional PWA frontend, scraping 220+ locations from Wikipedia to populate our initial data set, and successfully adapting Google Cloud's image comparison tools to our requirements despite their limitations.

What we learned

Many of the tools we used in this hackathon were new to the team members working with them. We learned a lot about Google Cloud's image recognition tools, progressive web apps, and Python web development with Flask.

What's next for LOCA

We believe that our project is both unique and useful. Our next steps are to finish the integration between our three layers, add authentication and user roles, and implement a Wikipedia-style edit history to keep track of changes over time. We would also like to add features that reward members of the community for their contributions, to encourage active participation.
