Inspiration

Journalists are often unable to report critical information immediately in crisis situations, leading to a pseudo information blackout. With our project, we hope to minimize the time first responders have to wait for reliable, actionable information by utilizing sources on the ground (e.g., Twitter users). By saving valuable seconds, we increase the likelihood of the population getting help sooner and, in some situations, of saving lives.

What it does

Using the Twitter API, we stream real-time Tweets from crisis zones and run them through a deep learning model that assigns each Tweet a validity score, helping prevent misinformation in mission-critical situations. This information is disseminated to first responders so that they can effectively deploy their resources and handle the situation.

Example of an API request: https://crisis-response-codechella.herokuapp.com/request/location=-6.38,%2049.87,%201.77,%2055.81&keywords=united&languages=en

Format of a GET request for developer integration and frontend querying based on user parameters:

crisis-response-codechella.herokuapp.com/request/location=string:location&keywords=string:keyword&languages=string:languages
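For developer integration, a client can assemble the request path like this. This is a minimal Python sketch: the helper name `build_request_url` is ours, only the base URL and parameter format come from the API above.

```python
# Base URL of our deployed API (from the example request above).
BASE_URL = "https://crisis-response-codechella.herokuapp.com/request/"

def build_request_url(location, keywords, languages):
    """Assemble the GET request path from a bounding box, keywords, and languages.

    Hypothetical helper for illustration; it mirrors the documented format
    location=...&keywords=...&languages=...
    """
    return f"{BASE_URL}location={location}&keywords={keywords}&languages={languages}"

url = build_request_url("-6.38, 49.87, 1.77, 55.81", "united", "en")
```

A client would then issue a plain GET request against the returned URL and receive Tweet IDs with validity scores.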

How we built it

We developed a machine learning model in Python and the front-end in React and JavaScript. Using a Flask backend to serve an API based on requests specified by a first-responder front-end user, we fetched Tweets from the real-time Tweet stream, calculated our validity score (confidence in the validity of the Tweet), and returned the Tweet ID and score to the frontend, which embeds the Tweets in the page using the Twitter web API. We deployed using Heroku.
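The core backend flow can be sketched as follows. This is an illustration only: `score_validity` is a stand-in for our TensorFlow model (here a trivial heuristic), and the Tweet fields are simplified.

```python
def score_validity(text):
    """Placeholder for the TensorFlow model: return a confidence in [0, 1].

    The real model is a trained deep learning classifier; this heuristic
    exists purely so the flow below is runnable.
    """
    credible_markers = ("confirmed", "official", "reported")
    hits = sum(marker in text.lower() for marker in credible_markers)
    return min(1.0, 0.2 + 0.3 * hits)

def handle_request(tweets):
    """Given Tweets fetched from the stream, return only the Tweet ID and
    validity score -- all the frontend needs to embed Tweets via the
    Twitter web API."""
    return [{"id": t["id"], "score": score_validity(t["text"])} for t in tweets]

sample = [{"id": "123", "text": "Official sources confirmed flooding downtown"}]
results = handle_request(sample)
```

In the deployed version this logic sits behind a Flask route, with the Tweets fetched from the Twitter streaming API rather than passed in directly.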

Challenges we ran into

All team members are located in different time zones, so coordinating was difficult, and two of our team members had to drop out midway. Thankfully, the rest of us were able to collaborate and come up with a solution.

In terms of technical challenges, deploying an ML model turned out to be trickier than expected due to size and memory constraints on production servers. Heroku dynos weren't able to handle the memory load of a full TensorFlow model, which made it hard to evaluate Tweets as an online model. We addressed this interesting problem by using a CPU-only TensorFlow build instead of the GPU build.
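In practice the fix was a one-line dependency swap; a sketch of the relevant requirements.txt change (the exact version pin here is illustrative, not necessarily the one we shipped):

```
# requirements.txt
# tensorflow==2.3.0       # full build: too large for a Heroku dyno
tensorflow-cpu==2.3.0     # CPU-only build: much smaller install footprint
```

The CPU-only package skips the GPU/CUDA dependencies, which is exactly what a GPU-less Heroku dyno needs anyway.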

Accomplishments that we are proud of

Firstly, the machine learning model built in Python was returning validity scores that fairly reflected each Tweet's validity; we tested this using deliberately inaccurate Tweets. We were also glad we were able to create our own API endpoint to serve this information concisely to clients sending requests, and in doing so provide a service for first responders to better deploy their resources in crises.

What we learned

We learned a lot of collaboration and technical skills. Some of us had never used this tech stack before, and we still managed to complete the project successfully.

What's next for Crisis Aversion

If we had more time, we would have implemented a text notification system that collects the most important information and sends it to users' phones. For this feature, users would first opt in and select locations of interest. We would also display Tweet clustering on a heat map so first responders could identify the hot spots that are most critical to reach. By using the internal geotags on a Tweet and a map representation algorithm, we can display the information in an intuitive way to first responders.
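One simple way to turn geotagged Tweets into heat-map hot spots is to bin coordinates into grid cells and count Tweets per cell. This is a hypothetical sketch of that idea: the function name, binning approach, and cell size are ours, not part of the project as built.

```python
from collections import Counter

def hotspots(geotagged_tweets, cell_size=0.1):
    """Bin (lat, lon) pairs into grid cells of `cell_size` degrees and count
    Tweets per cell; the densest cells become heat-map hot spots."""
    cells = Counter()
    for lat, lon in geotagged_tweets:
        cell = (round(lat / cell_size) * cell_size,
                round(lon / cell_size) * cell_size)
        cells[cell] += 1
    # Most common first, so first responders see the hottest spots on top.
    return cells.most_common()

# Two Tweets near central London and one near New York:
coords = [(51.50, -0.12), (51.51, -0.13), (40.71, -74.00)]
ranked = hotspots(coords)
```

A frontend could then feed the ranked cells straight into a heat-map layer, weighting each cell by its Tweet count.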