We believe a picture is worth a thousand words. Using images from social media, we analyze how people are feeling across different regions.

What it does

Vibe mines images from Twitter and analyzes them to determine the emotions they convey. By aggregating the images posted in a region, Vibe produces a visualization of the mood of specific areas.
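
The per-region aggregation step could be sketched roughly as follows. This is a minimal illustration, not the project's actual code; `region_moods` and the `(region, emotion)` input format are hypothetical.

```python
from collections import Counter

def region_moods(predictions):
    """Aggregate per-image emotion predictions into a dominant mood per region.

    `predictions` is a hypothetical list of (region, emotion) pairs, one per
    classified image.
    """
    by_region = {}
    for region, emotion in predictions:
        by_region.setdefault(region, Counter())[emotion] += 1
    # Dominant mood = the most frequently predicted emotion in that region
    return {region: counts.most_common(1)[0][0]
            for region, counts in by_region.items()}

# Example: three images from NYC, one from SF
print(region_moods([("NYC", "happy"), ("NYC", "happy"),
                    ("NYC", "sad"), ("SF", "calm")]))
# → {'NYC': 'happy', 'SF': 'calm'}
```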

How we built it

  • Used the Microsoft Project Oxford Computer Vision APIs and OpenCV's ORB algorithm to process images into feature vectors
  • Used Microsoft Azure Machine Learning to train a classifier that predicts an emotion from processed image data
  • Used TwitterSearch to mine daily images per region, then ran our classifier to find the emotions in each region
  • Used mapbox.js and Bootstrap to build the front-end and visualize our data

Challenges we ran into

  • Not enough time to improve accuracy, either by gathering a much larger training dataset or by testing out different machine learning models
  • Figuring out how to extract features from an image, since none of us had extensive computer vision experience
  • Building a visually appealing and interactive map
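
Comparing different models, had time allowed, might have looked like the sketch below. It uses scikit-learn locally as a stand-in for the Azure ML experiments; the toy data and model choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in data: 32-dim image feature vectors with 3 emotion classes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 3, size=200)  # e.g. 0=happy, 1=sad, 2=neutral

# Cross-validate each candidate model on the same features
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=50, random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```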

Accomplishments that we're proud of

  • Figuring out how to extract information from images
  • Creating a machine learning algorithm to make predictions on images
  • Our front-end design

What we learned

  • Vastly improved our front-end development skills
  • How to perform machine learning experiments in Microsoft Azure
  • How to use various Project Oxford APIs

What's next for Vibe

  • Build a more robust and scalable framework for data processing and classification
  • Improve prediction accuracy of classifier
  • Add more features to the map and website to allow users to get a better sense of the vibes around them
