The user opens the app and has two options: choose a picture from the gallery or take a new picture with the camera. Once the user is satisfied with the chosen image, they press the GO button and the image is processed using the Computer Vision API from Microsoft Cognitive Services, which returns tags describing the picture. The software then passes these tags to the Spotify Web API to retrieve songs related to the image. The user can scroll through the returned songs and play them inside the app.
Computer Vision - Microsoft Cognitive Services
The team started by creating keys for access to their Microsoft services. Because the software pulls the image from the system gallery, it requests storage permission in the Android Manifest. The picture is obtained through an Intent, so the task of capturing or selecting a picture is delegated to other applications on the phone. Once the picture is selected, the image is converted to a byte array and sent through the Microsoft service for processing. We configured the request to return tags, which come back as a list; using a for-loop we iterated over the tags, displayed them, and fed them to the Spotify portion of the code.
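The byte-array conversion and the tag loop described above can be sketched in plain Java. This is a minimal illustration, not the project's actual code: the class and method names are assumptions, and the Android-specific pieces (the Intent, the Bitmap, the Cognitive Services client) are left out so the two helpers stand alone.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.List;

public class TagPipeline {
    // Read the selected image's stream into a byte array, the form in
    // which the picture is sent to the Computer Vision service.
    static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    // Walk the returned tag list with a for-loop, building the string
    // shown to the user; the same names are handed to the Spotify step.
    static String joinTags(List<String> tags) {
        StringBuilder sb = new StringBuilder();
        for (String tag : tags) {
            if (sb.length() > 0) sb.append(", ");
            sb.append(tag);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = toByteArray(new ByteArrayInputStream(new byte[]{1, 2, 3}));
        System.out.println(data.length); // 3
        System.out.println(joinTags(Arrays.asList("beach", "sunset", "ocean")));
    }
}
```

In the real app the `InputStream` would come from the content resolver for the Intent's returned URI rather than an in-memory array.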
Using the Spotify Web API, we retrieved relevant tracks based on the tags from the computer vision step. We used an open-source Spotify Android wrapper to access the Web API from our native Android environment and receive the track IDs. We then embedded the Spotify tracks into the app using a WebView and an HTML iframe.
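The two Spotify-facing steps can be sketched as plain string-building helpers. This is a hedged illustration, not the project's code: the class and method names are assumptions, the search call is shown as the public Web API search endpoint that the Android wrapper calls under the hood, and the iframe dimensions are arbitrary.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SpotifySteps {
    // Build the Web API search URL for one tag. The wrapper performs
    // the equivalent authenticated request and returns track objects,
    // from which the track IDs are taken.
    static String searchUrl(String tag) {
        String q = URLEncoder.encode(tag, StandardCharsets.UTF_8);
        return "https://api.spotify.com/v1/search?q=" + q + "&type=track";
    }

    // Build the iframe snippet loaded into the WebView for one track ID,
    // using Spotify's public track-embed URL format.
    static String embedHtml(String trackId) {
        return "<iframe src=\"https://open.spotify.com/embed/track/" + trackId
             + "\" width=\"300\" height=\"80\" frameborder=\"0\""
             + " allow=\"encrypted-media\"></iframe>";
    }

    public static void main(String[] args) {
        System.out.println(searchUrl("sunset beach"));
        System.out.println(embedHtml("4uLU6hMCjMI75M1A2tKUQC"));
    }
}
```

Loading the `embedHtml` string with `WebView.loadData` (or `loadDataWithBaseURL`) renders Spotify's hosted player for that track inside the app.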