It is currently time-consuming for visual learners to figure out what to explore in a given environment. Our bot provides navigational inspiration through images.

What it does

Magoo Bot first asks the user to send it a location, then responds with geo-tagged images from Flickr. Magoo Bot filters out photos containing faces so that the user receives relevant photos.

How we built it

Magoo Bot is a Facebook Messenger bot (with Skype and other channels planned) that sends the user's location coordinates to the Microsoft Bot Framework. The Bot Framework packages the coordinates into a JSON request and sends it to our Heroku-hosted Ruby on Rails server. The server uses the Flickr API to retrieve geo-tagged images from the surrounding area. It then runs each image through the Microsoft Face API for face detection and returns to the Bot Framework only those images that contain no faces (i.e., no selfies). The Bot Framework presents the results to the user in a clean carousel format.
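The Flickr step above can be sketched in plain Ruby. This is a minimal illustration, not the actual server code: the method names (`nearby_photos`, `photo_url`) and the `FLICKR_KEY` placeholder are assumptions; the endpoint, `flickr.photos.search` parameters, and static-image URL pattern are from Flickr's public API.

```ruby
require "net/http"
require "json"
require "uri"

FLICKR_KEY = "YOUR_FLICKR_API_KEY" # placeholder, not a real credential

# Query Flickr's photo search API for geo-tagged photos near (lat, lon).
def nearby_photos(lat, lon, radius_km: 5)
  uri = URI("https://api.flickr.com/services/rest/")
  uri.query = URI.encode_www_form(
    method: "flickr.photos.search",
    api_key: FLICKR_KEY,
    lat: lat, lon: lon, radius: radius_km,
    format: "json", nojsoncallback: 1
  )
  # The response wraps the photo records in {"photos" => {"photo" => [...]}}.
  JSON.parse(Net::HTTP.get(uri)).dig("photos", "photo") || []
end

# Each photo record carries the fields needed to build a direct image URL.
def photo_url(p)
  "https://live.staticflickr.com/#{p["server"]}/#{p["id"]}_#{p["secret"]}.jpg"
end
```

The resulting URLs are what the server would pass to the Face API filter and, ultimately, to the Bot Framework carousel.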

Challenges we ran into

Some of the challenges we ran into included the Instagram API and a plethora of images containing people.

Accomplishments that we're proud of

First of all, team cohesiveness: we have an excellent team. Secondly, integrating the facial recognition API.

What we learned

Some of the most important things we learned were the importance of team cohesiveness, honoring everyone's contributed ideas, and honest discussion and feedback.

Additionally, we learned how to make an API call, interpret the response, and effectively use the data we received to improve the user experience.
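The call-and-interpret step can be sketched for the face-filtering case. This is a hedged illustration rather than our production code: the method names and the `FACE_KEY` placeholder are assumptions, and the region in the endpoint URL would depend on the Azure subscription; the `detect` route and `Ocp-Apim-Subscription-Key` header are from the Face API's documented interface.

```ruby
require "net/http"
require "json"
require "uri"

FACE_KEY = "YOUR_FACE_API_KEY" # placeholder, not a real credential

# POST an image URL to the Face API's detect endpoint. The response body
# is a JSON array with one entry per detected face.
def detect_faces(image_url)
  uri = URI("https://westus.api.cognitive.microsoft.com/face/v1.0/detect")
  req = Net::HTTP::Post.new(uri)
  req["Ocp-Apim-Subscription-Key"] = FACE_KEY
  req["Content-Type"] = "application/json"
  req.body = JSON.generate(url: image_url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }.body
end

# Interpreting the response: an empty array means no faces were detected,
# so the photo is not a selfie and can be shown to the user.
def face_free?(detect_response_body)
  parsed = JSON.parse(detect_response_body)
  parsed.is_a?(Array) && parsed.empty?
end
```

Filtering a photo list then reduces to keeping only the URLs for which `face_free?(detect_faces(url))` is true.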

What's next for Magoo

There are two tiers of next steps for the Magoo Bot. The more immediate goals include adding the Twitter and Instagram APIs as image sources, and expanding the sensory experience options to include audio tagging. Further out, we envision physical scent and touch tagging as part of the mobile bot experience.

Built With

Facebook Messenger, Microsoft Bot Framework, Microsoft Face API, Flickr API, Ruby on Rails, Heroku
