When we travel to big cities, we often see local art that we're unfamiliar with, and most of the time the only information we have is a name. As college freshmen, we aren't familiar with many campus markers, so we've experienced this same feeling. The rise of cultural sentiment and controversy surrounding artwork made us curious to see how other students feel.

What it does

Artwork can spark many different sentiments, and connecting viewers of art would make art appreciation a more enjoyable experience. Thus, we set out to build a web application that allows users to scan artworks, post comments on artworks they have scanned, and view hotspots where other viewers have scanned other artworks.

What Makes Us Different

Campus artwork rarely gets the recognition it deserves within the community or in the virtual space. By collecting user sentiment about artworks around campus, we bring these pieces to life with community voices.

Technologies Used

APIs used: Google Maps JavaScript API, Google Vision API, Google Natural Language API, Firebase, YouTube Data API

Feature No.1: Artwork Recognition

With Transparent, you can use your mobile camera to scan art. After the scan, the page displays comments from users in the community. Using the Google Natural Language API's sentiment analysis, you can see the overall sentiment for all student comments pertaining to that artwork. You can also see web resources and YouTube videos (with hyperlinks and sentiment scores for those videos' comments) related to the artwork. Text extracted from the image provides the search query for the YouTube videos.

How we built it

Artwork recognition was achieved by integrating the Google Vision API, the Google Natural Language API, and the YouTube Data API. The frontend was written in HTML, CSS, and JavaScript. After the mobile camera captures a scan, we use the Vision API's OCR text detection to extract the text in the uploaded image. We trim the extracted text down to three keywords and perform a YouTube search with those keywords (e.g. William Marshal Rice). We first call the search list endpoint, then the comment thread list endpoint, unpacking the JSON responses along the way. We store the comment threads for each video in a Python dictionary, and then extract a sentiment score and magnitude for each comment thread.
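The keyword trimming and comment-thread bookkeeping described above can be sketched in plain Python. The helper names are ours; `fetch_threads` stands in for a real `commentThreads.list` call, and the dictionary shapes mirror the YouTube Data API v3 JSON responses:

```python
def build_search_query(ocr_text, max_keywords=3):
    """Trim OCR-extracted text down to a few keywords for the YouTube search."""
    return " ".join(ocr_text.split()[:max_keywords])


def video_ids(search_response):
    """Pull video IDs out of a search.list JSON response."""
    return [
        item["id"]["videoId"]
        for item in search_response.get("items", [])
        if "videoId" in item.get("id", {})
    ]


def comments_by_video(ids, fetch_threads):
    """Store each video's top-level comments in a dictionary keyed by video ID.

    fetch_threads(video_id) is assumed to wrap a commentThreads.list request
    and return the decoded JSON response.
    """
    threads = {}
    for vid in ids:
        response = fetch_threads(vid)
        threads[vid] = [
            item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
            for item in response.get("items", [])
        ]
    return threads
```

Each comment string in the resulting dictionary can then be passed to the Natural Language API for its sentiment score and magnitude.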

At the same time, we used the Vision API's web detection feature to find and display pages with matching or partially matching images, providing more information about the artwork.

Feature No.2: Map View

On the map, you can see your current location and use the map to find artworks nearby. If you see an artwork that hasn't been marked on the map before, you can scan it to create a new pin on the map.

How we built it

In the map view, we integrated the Google Maps JavaScript API and Firebase.
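One backend detail worth sketching is deciding whether a scan should create a new pin or attach to an existing one. A minimal sketch, assuming a distance threshold of our own choosing and pins stored as (lat, lng) pairs:

```python
import math


def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance between two coordinates, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def is_new_pin(scan_lat, scan_lng, existing_pins, threshold_m=25.0):
    """True if no existing pin lies within threshold_m of the scan location."""
    return all(
        haversine_m(scan_lat, scan_lng, lat, lng) > threshold_m
        for lat, lng in existing_pins
    )
```

If the check passes, the new pin's coordinates are written to Firebase so every other user's map picks it up.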

Feature No.3: Comment on Artworks

We also have a form where you can comment on multiple artworks. The unique part is that once you comment on an artwork, our Django backend automatically links the comment to the artwork you have scanned. This unifies everyone in the community, because the Django database serves all of the comments for each piece of artwork whenever someone comments on it. In addition, using the Google Natural Language API, the overall sentiment of all comments on a piece of artwork (across all users) is shown after you scan the image. You can also share your thoughts at the artwork's location simply by clicking the satellite view and entering your comments.
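The overall sentiment shown after a scan has to combine many per-comment results. The Natural Language API returns a score in [-1, 1] and a non-negative magnitude per comment; weighting by magnitude is our own aggregation choice (strongly emotional comments count more), not something the API prescribes:

```python
def overall_sentiment(comment_sentiments):
    """Aggregate per-comment (score, magnitude) pairs into one overall score.

    Scores are in [-1, 1]; magnitudes are >= 0. The result is a
    magnitude-weighted average, or 0.0 when there is nothing to weigh.
    """
    total_mag = sum(mag for _, mag in comment_sentiments)
    if total_mag == 0:
        return 0.0
    return sum(score * mag for score, mag in comment_sentiments) / total_mag
```

This single number is what the scan results page displays next to the artwork's comment feed.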

Challenges we ran into

Learning to use the Google Maps API and integrating multiple features. When employing the current-location feature, we found that the geolocation code was deprecated on insecure origins, so we had to use real-time collaborative mapping instead.

Accomplishments that we're proud of

We are proud to have successfully integrated the Google Cloud APIs in our web application, especially because it was our first time working with Google Maps.

What we learned

The usage of the Google Maps JavaScript API, as well as working with Firebase.

What's next for Transparent

First, we plan to improve the accuracy of the Google Vision search by teaching the model which portion of the extracted text matters most for the search query. We also want to build an algorithm that recommends nearby artworks to visit based on popularity, distance, and sentiment score.
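A first cut of that recommender could rank artworks by a score combining the three signals. Everything here is illustrative: the field names, the flat-Earth proximity term, and the weights are our own placeholders, not tuned values:

```python
def recommend(artworks, user_lat, user_lng, top_n=3):
    """Rank artworks by a toy score mixing popularity, distance, and sentiment.

    Each artwork is a dict with 'name', 'lat', 'lng', 'scans' (popularity),
    and 'sentiment' (overall score in [-1, 1]).
    """
    def score(art):
        # Crude proximity penalty in degrees; a real version would use
        # great-circle distance and calibrated weights.
        dist = abs(art["lat"] - user_lat) + abs(art["lng"] - user_lng)
        return art["scans"] * 1.0 + art["sentiment"] * 5.0 - dist * 100.0

    return sorted(artworks, key=score, reverse=True)[:top_n]
```

Nearby, frequently scanned, well-liked pieces float to the top of the suggestion list.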
