iResQ Video Scanner
iResQ Dashboard Client
The occurrence of natural disasters over the years inspired us to create an application to assist in future disasters. After much thought, we realized that one of the biggest problems after a natural disaster hits is identifying the people who are stuck and alive somewhere, with nowhere to go and no option but to reach out for help. This is most common in the case of flooding, when there is water all around. Those needs inspired us to build iResQ, which acts as a “friend in need” in someone's hardest times.
What it does
The idea is to help people who survived but are stuck during natural disasters by identifying affected areas through aerial imagery. We capture real-time video with the help of a drone and search for anything important, whether written text or a help sign people have posted to get attention. The video is captured in real time and processed through a machine learning model to identify characters or special signs. The information is then displayed over a map with its GPS location so that relief parties can reach the spot as soon as possible.
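The last step of that pipeline, turning a detection from the model into a point that can be plotted on the map, can be sketched as a small helper. The field names and shapes here are our own illustrative assumptions, not the actual iResQ data model:

```javascript
// Sketch: convert a raw detection from the ML step into a marker record
// that the map layer can plot. Field names are illustrative assumptions.
function toMapMarker(detection) {
  return {
    label: detection.text || detection.sign, // e.g. "HELP" or "SOS"
    lat: detection.gps.lat,
    lng: detection.gps.lng,
    seenAt: detection.timestamp,             // when the frame was captured
  };
}

const marker = toMapMarker({
  text: "HELP",
  gps: { lat: 29.76, lng: -95.36 },
  timestamp: "2017-09-01T10:15:00Z",
});
console.log(marker.label, marker.lat, marker.lng);
```

A record in this shape carries everything the map layer needs: what was seen, where, and when.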
How we built it
The web application was built using NodeJS, Webpack, HTML5, and CSS. We used Google's machine learning APIs to train on and extract the relevant information from a series of pictures, and used Heroku for deployment. We used mobile phones in place of a drone to capture the real-time video; our algorithm parses the video into a series of images before passing them to the machine learning model, extracts relevant information such as text and help signs, and shows it on HERE Maps with its GPS location.
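The video-to-frames step can be sketched as follows: sample frame timestamps at a fixed interval, then filter the recognized text for distress keywords. The function names, keyword list, and detection shape are all our own illustration; in the real flow the sampled frames would be handed to Google's machine learning API for text detection:

```javascript
// Illustrative keywords a survivor might write on a roof or hold up on a sign.
const KEYWORDS = ["HELP", "SOS", "RESCUE"];

// Sample one frame timestamp every `intervalSec` seconds across the video.
function frameTimestamps(durationSec, intervalSec) {
  const stamps = [];
  for (let t = 0; t < durationSec; t += intervalSec) stamps.push(t);
  return stamps;
}

// Keep only detections whose recognized text mentions a distress keyword.
function distressDetections(detections) {
  return detections.filter(d =>
    KEYWORDS.some(k => d.text.toUpperCase().includes(k))
  );
}

console.log(frameTimestamps(10, 2)); // [0, 2, 4, 6, 8]
console.log(distressDetections([
  { text: "help us", frame: 3 },
  { text: "billboard", frame: 4 },
])); // keeps only the first entry
```

Sampling at an interval rather than processing every frame keeps the number of machine learning calls manageable for real-time use.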
Challenges we ran into
It was a daunting task to turn a mobile phone into a real-time webcam. Keeping the application simple and easy to use was another challenge.
Accomplishments that we're proud of
It works, and it looks reasonably aesthetic.
What we learned
Deployment on Heroku. Google's machine learning APIs. Integrating different components together. Displaying information on a HERE map.
What's next for iResQ
We are planning to extend this to detect other signs along with text. For example, if a huge fire breaks out somewhere, iResQ would be able to detect it at an early stage so that it can be controlled sooner.