In the national wave of anti-racist activism that followed the police killing of George Floyd, many students at Rice University demanded the removal of the statue of its founder, William Marsh Rice, because he had been a slave owner. This reminded us that artworks can spark very different sentiments, and that connecting viewers of art to one another could make art appreciation a richer experience. So we set out to build a web application that lets users post comments on the artworks they see.

What it does

Feature No. 1: Artwork Recognition

With Transparent, you can use your mobile camera to scan the label of an artwork, which is then identified using Google Vision. After the image is uploaded, the page renders overall sentiment scores, other visitors' comments, related web resources, and related YouTube videos (with hyperlinks and sentiment scores). The text extracted from the image supplies the search query for the YouTube videos. There is also a sentiment analysis for each comment thread, which provides a helpful quick summary when a thread is very long.

How we built it

Artwork recognition integrates the Google Vision API, the Google Natural Language API, and the YouTube Data API. The frontend was written in HTML, CSS, and JavaScript. After the mobile camera takes a scan, we use the Vision API's OCR text detection to extract the text in the uploaded image. We reduce that text to three keywords (e.g., "William Marsh Rice") and perform a YouTube search with them. We first call the video list-search endpoint, then the comment-thread list endpoint, unpacking the JSON responses. We store the comment threads for each video in a Python dictionary, then extract the sentiment score and magnitude for each comment thread.
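The comment-thread unpacking step can be sketched roughly as follows. The nested dictionary shape mirrors the YouTube Data API's `commentThreads.list` response resource; the sample response here is a trimmed-down illustration, not real API output:

```python
def unpack_comment_threads(response):
    """Extract top-level comment texts from a parsed
    commentThreads.list JSON response."""
    comments = []
    for item in response.get("items", []):
        # Each item nests the comment text under
        # snippet -> topLevelComment -> snippet -> textDisplay.
        snippet = item["snippet"]["topLevelComment"]["snippet"]
        comments.append(snippet["textDisplay"])
    return comments


# Trimmed-down example of the response shape:
sample_response = {
    "items": [
        {"snippet": {"topLevelComment": {"snippet": {"textDisplay": "Beautiful statue"}}}},
        {"snippet": {"topLevelComment": {"snippet": {"textDisplay": "Should be removed"}}}},
    ]
}

print(unpack_comment_threads(sample_response))
# → ['Beautiful statue', 'Should be removed']
```

In the app, each list like this is stored per video in a dictionary before the sentiment call.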

At the same time, we use the Vision API's web detection feature to find and render pages with matching or partially matching images, providing more information about the artwork.

Feature No. 2: Map View

On the map, you can see your current location and find artworks nearby. If you spot an artwork that hasn't been marked on the map yet, you can scan it to create a new pin.
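Unpacking the web-detection result follows the same pattern as the comment threads. This is a minimal sketch over an already-parsed response; the field names follow the Vision API's `webDetection` result, and the sample values (including the URL) are made up for illustration:

```python
def extract_web_matches(web_detection):
    """Pull entity labels and matching-page URLs out of a parsed
    Vision API web-detection result."""
    entities = [
        e["description"]
        for e in web_detection.get("webEntities", [])
        if "description" in e  # some entities come back without a label
    ]
    pages = [p["url"] for p in web_detection.get("pagesWithMatchingImages", [])]
    return entities, pages


# Illustrative sample, not real API output:
sample = {
    "webEntities": [{"description": "William Marsh Rice", "score": 0.9}],
    "pagesWithMatchingImages": [{"url": "https://example.com/founders-memorial"}],
}
entities, pages = extract_web_matches(sample)
```

The page URLs are what we render as additional web resources for the artwork.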

How we built it

In the map view, we integrated the Google Maps API, Firebase, the Google Vision API, the Google Natural Language API, and the YouTube Data API.

Feature No. 3: Comment on Artworks

We also have a form where you can comment on multiple artworks. What makes it unique is that once you comment on an artwork, our Django backend automatically connects the comment to the artwork you scanned. This unifies everyone in the community: the Django backend renders all of the comments for each piece of artwork every time someone comments on it. In addition, using the Google Natural Language API, the overall sentiment of all comments on a piece of artwork (across all users) is shown after you scan the image. You can also share your own thoughts at the artwork's location simply by clicking the satellite view and entering your comments.
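One simple way to combine per-comment sentiment into an overall score is a magnitude-weighted average (the Natural Language API returns a score in [-1, 1] and a non-negative magnitude per comment). This is a sketch of the idea, not necessarily the exact aggregation the app uses:

```python
def overall_sentiment(comment_scores):
    """Combine (score, magnitude) pairs from per-comment sentiment
    analysis into one overall score, weighting strongly-felt
    comments (high magnitude) more heavily."""
    total_weight = sum(magnitude for _, magnitude in comment_scores)
    if total_weight == 0:
        return 0.0  # no comments, or all perfectly neutral
    return sum(score * magnitude for score, magnitude in comment_scores) / total_weight


# Two positive comments and one mildly negative one:
scores = [(0.8, 2.0), (-0.4, 1.0), (0.1, 1.0)]
print(overall_sentiment(scores))
# → 0.325
```

A plain unweighted mean also works; weighting by magnitude just keeps a long tail of near-neutral comments from washing out the strongly opinionated ones.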

Technologies Used

APIs used: Google Maps JavaScript API, Google Vision API, Google Natural Language API, Firebase, YouTube Data API

Challenges we ran into

Learning to use the Google Maps API and integrating its many features was a challenge. When implementing the current-location feature, we found that geolocation is deprecated on insecure origins, so we used real-time collaborative mapping instead. We also had some difficulty working with the YouTube Data API because we were unfamiliar with processing JSON responses. Furthermore, since the API's quota is limited to 10,000 units per day, we had to keep a close eye on our usage. Unpacking the sentiment analysis data from Google Natural Language took a long time, but we finally completed the comment thread analysis.
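Watching the quota can be as simple as a counter that charges each call its documented unit cost before making it. The limit and per-endpoint costs below are the YouTube Data API's published defaults (`search.list` costs 100 units, `commentThreads.list` costs 1); the tracker itself is a minimal sketch:

```python
DAILY_QUOTA = 10_000  # default daily limit for the YouTube Data API
COSTS = {"search.list": 100, "commentThreads.list": 1}


class QuotaTracker:
    """Tracks estimated quota usage so we can stop before the API does."""

    def __init__(self, limit=DAILY_QUOTA):
        self.limit = limit
        self.used = 0

    def can_call(self, endpoint):
        return self.used + COSTS[endpoint] <= self.limit

    def record(self, endpoint):
        # Charge the call's cost, refusing if it would bust the quota.
        if not self.can_call(endpoint):
            raise RuntimeError(f"{endpoint} would exceed the daily quota")
        self.used += COSTS[endpoint]


tracker = QuotaTracker()
tracker.record("search.list")          # 100 units
tracker.record("commentThreads.list")  # 1 unit
print(tracker.used)
# → 101
```

This only estimates usage client-side; the authoritative count still lives in the Google Cloud console.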

Accomplishments that we're proud of

We are proud to have successfully integrated the Google Cloud APIs in our web application, especially because it was our first time working with Google Maps.

What we learned

- The use of various Google APIs
- Integrating JavaScript with a backend database
- Using HTML, CSS, and JS for the frontend

What's next for Transparent

First, we plan to improve the accuracy of the Google Vision search by teaching the system which portions of the extracted text matter most for the search. We also want to build an algorithm that recommends nearby artworks to visit based on popularity, distance, and sentiment score.
