In the national wave of anti-racist activism that followed the police killing of George Floyd, many students at Rice University demanded the removal of the statue of its founder, William Marsh Rice, because he was a slave owner. This reminded us that artworks can spark very different sentiments, and that connecting viewers of art with one another could make art appreciation a more enjoyable experience. So we set out to build a web application that lets users post comments on the artworks they see.
What it does
At the same time, we used the Vision API's web detection functionality to find and render pages with matching or partially matching images, providing more information about the artwork.

Feature No.2: Map View
On the map, you can see your current location and find artworks nearby. If you see an artwork that hasn't been marked on the map yet, you can scan it to create a new pin.
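The pages-with-matching-images step above boils down to pulling titles and URLs out of the Vision API's web-detection annotation. Below is a minimal sketch of that parsing, assuming the REST response shape (`pagesWithMatchingImages`, `pageTitle`, `url`); the function name `extract_matching_pages` and the sample data are our own illustration, not the project's actual code.

```python
def extract_matching_pages(annotation: dict) -> list[dict]:
    """Pull page titles and URLs out of a Vision web-detection annotation.

    `annotation` is the `webDetection` object from the API response;
    entries without a URL are skipped since there is nothing to link to.
    """
    pages = annotation.get("pagesWithMatchingImages", [])
    return [
        {"title": p.get("pageTitle", ""), "url": p["url"]}
        for p in pages
        if "url" in p
    ]


# Example with a hand-made annotation in the documented shape:
sample = {
    "pagesWithMatchingImages": [
        {"url": "https://example.com/art", "pageTitle": "Matching artwork page"},
        {"pageTitle": "entry with no URL, skipped"},
    ]
}
pages = extract_matching_pages(sample)
```

The front end can then render `pages` as a simple list of links next to the scanned artwork.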
How we built it
We integrated the Google Maps API, Firebase, the Google Vision API, the Google Natural Language API, and the YouTube Data API.
Feature No.3: Comment on Artworks
We also provide a form where you can comment on artworks. Its distinctive feature is that once you comment, our Django backend automatically links the comment to the artwork you scanned. This unifies the community: the Django database renders all of the comments for each artwork every time someone adds a new one. In addition, using the Google Natural Language API, the overall sentiment of all comments on a piece (across all users) is shown after you scan the image. You can also share your own thoughts at the artwork's location simply by clicking the satellite view and entering a comment.
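The "overall sentiment" shown after a scan is an aggregate of per-comment scores. A minimal sketch of that aggregation is below: the Natural Language API reports a document sentiment score in [-1.0, 1.0], and we assume the app averages the scores and buckets the result. The function name and the ±0.25 thresholds are illustrative assumptions, not values from the project.

```python
def overall_sentiment(scores: list[float]) -> str:
    """Summarize per-comment sentiment scores (each in [-1.0, 1.0],
    as returned by the Natural Language API) into a single label.

    Thresholds here are illustrative, not tuned.
    """
    if not scores:
        return "no comments yet"
    mean = sum(scores) / len(scores)
    if mean > 0.25:
        return "positive"
    if mean < -0.25:
        return "negative"
    return "mixed"


# Example: two enthusiastic comments and one lukewarm one.
label = overall_sentiment([0.8, 0.6, 0.1])
```

Averaging keeps the display stable as new comments arrive, since one outlier comment shifts the mean only slightly once a piece has many comments.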
Challenges we ran into
Learning to use the Google Maps API and integrating its multiple features took time. When implementing the current-location feature, we found that the geolocation code is deprecated on insecure origins, so we had to use real-time collaborative mapping instead. We also had difficulties with the YouTube Data API, as we were unfamiliar with processing JSON responses; and since the limit was 10,000 quota units per day, we had to watch our usage closely. Using the Google Natural Language API to unpack the sentiment analysis data took a long time, but we finally completed the comment-thread analysis.
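Staying under the daily quota can be handled with a small guard in front of each API call. The sketch below is our own illustration, not code from the project: a counter that resets each day and refuses calls that would exceed the limit. (Note that YouTube Data API operations cost different amounts, e.g. a search request costs far more than one unit, which is why `spend` takes a unit cost.)

```python
import datetime


class DailyQuota:
    """Track API usage against a per-day unit limit.

    The YouTube Data API's default quota is 10,000 units/day; individual
    operations cost different numbers of units.
    """

    def __init__(self, limit: int = 10_000):
        self.limit = limit
        self.used = 0
        self.day = datetime.date.today()

    def spend(self, units: int = 1) -> bool:
        """Record `units` of cost and return True if the call fits today."""
        today = datetime.date.today()
        if today != self.day:  # a new day: the quota resets
            self.day, self.used = today, 0
        if self.used + units > self.limit:
            return False  # caller should skip or defer the API request
        self.used += units
        return True


# Example: with a small limit, the third expensive call is refused.
quota = DailyQuota(limit=250)
```

Checking `quota.spend(cost)` before each request keeps the app from silently burning through the allowance mid-day.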
Accomplishments that we're proud of
We are proud to have successfully integrated the Google Cloud APIs into our web application, especially because it was our first time working with Google Maps.
What we learned
What's next for Transparent
First, we plan to improve the accuracy of the Google Vision search by teaching the model which portions of the detected text matter most for the search. We also want to build an algorithm that recommends nearby artworks to visit based on popularity, distance, and sentiment score.
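The planned recommender could be sketched as a weighted score: reward popularity (comment count) and sentiment, and penalize distance from the user. Everything below is a hypothetical sketch with untuned weights, not the project's design; `haversine_km` computes great-circle distance from latitude/longitude.

```python
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius


def rank_artworks(user_lat, user_lon, artworks, w_pop=1.0, w_dist=1.0, w_sent=1.0):
    """Rank artworks by popularity and sentiment, penalized by distance.

    Each artwork is a dict with "lat", "lon", "comments" (count) and
    "sentiment" (mean score in [-1, 1]). Weights are illustrative.
    """
    def score(a):
        d = haversine_km(user_lat, user_lon, a["lat"], a["lon"])
        return w_pop * a["comments"] + w_sent * a["sentiment"] - w_dist * d

    return sorted(artworks, key=score, reverse=True)


# Example: a nearby piece should outrank an equally popular one far away.
artworks = [
    {"name": "far", "lat": 29.90, "lon": -95.00, "comments": 5, "sentiment": 0.2},
    {"name": "near", "lat": 29.72, "lon": -95.40, "comments": 5, "sentiment": 0.2},
]
ranked = rank_artworks(29.717, -95.402, artworks)
```

A linear score like this is easy to tune later (e.g. learning the weights from which recommendations users actually follow).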