Inspiration

In 2017, there were 651,135 reported missing person cases. We wanted to find a unique and efficient way to help reduce this number and prevent kidnapping, assault, human trafficking, and other crimes by creating an app that allows someone to quickly alert authorities using a safe word.

What it does

When triggered, Sayfe records an audio clip, takes a picture from the front-facing camera, and logs the user's location. After processing and analyzing this data, Sayfe decides whether the victim is in no danger, possible danger, or immediate danger. This information is sent to a web app used by authorities, which displays geolocation pins on a map for law enforcement to view. When a pin is clicked, the victim's profile is shown along with the camera image and the danger-level assessment, and the audio recording of their safe word is played.

Web app display
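The three-level decision could be sketched as a simple rule that combines a sentiment score with keyword spotting. This is only a minimal illustration of the idea, not Sayfe's actual model; the function name, keyword list, and thresholds are our own assumptions.

```python
def assess_danger(transcript: str, sentiment_score: float) -> str:
    """Classify a transcript into one of three danger levels.

    sentiment_score is assumed to lie in [-1.0, 1.0], as returned by
    Google Cloud Natural Language sentiment analysis (negative = distress).
    The keyword set and threshold below are illustrative only.
    """
    urgent_keywords = {"help", "stop", "gun", "knife", "hurt"}
    words = {w.strip(".,!?").lower() for w in transcript.split()}

    # Both signals firing at once suggests the highest severity.
    if words & urgent_keywords and sentiment_score < -0.25:
        return "immediate danger"
    # Either signal alone is treated as a warning sign.
    if words & urgent_keywords or sentiment_score < -0.25:
        return "possible danger"
    return "no danger"


print(assess_danger("Please help me, he has a knife!", -0.8))
```

In practice the app would feed this function the Speech-to-Text transcript and the Natural Language sentiment score; a real classifier would also fold in the Vision API's result from the photo.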

How we built it

Sayfe works by constantly listening for an audio trigger, in our case the user's safe word. Once the user says their safe word, the app automatically records an audio clip and takes a picture from the front-facing camera, then pushes this data to Firebase Cloud Storage. The upload triggers a Firebase function, which sends a Firebase Cloud Messaging notification to the client. This prompts the client to update its map with new markers representing the location of the recording. When a marker, stored as Leaflet GeoJSON, is clicked, we look up the profile information in the Firebase Firestore database. We then download the audio data and convert it to text using Google's Speech-to-Text API, and run the transcript through Google Cloud Natural Language to detect whether the victim is in danger. We analyze the picture the same way with the Google Vision API, classifying the victim as in no danger, possible danger, or immediate danger. Finally, the client is updated to display the victim's profile, the audio and image files, the danger-level assessment, and their geolocation.
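The map markers described above are plain GeoJSON, which Leaflet can render directly with its `L.geoJSON` layer. Here is a minimal sketch of building one such Feature; the property names are our own illustration, not Sayfe's actual schema. Note that GeoJSON puts longitude before latitude in coordinates.

```python
import json


def make_marker(lat: float, lng: float, profile_id: str,
                danger_level: str, audio_url: str, image_url: str) -> dict:
    """Build a GeoJSON Feature for one recording.

    Property names are illustrative; the web app reads them back when a
    marker is clicked to fetch the profile, audio, and image.
    """
    return {
        "type": "Feature",
        # GeoJSON coordinate order is [longitude, latitude].
        "geometry": {"type": "Point", "coordinates": [lng, lat]},
        "properties": {
            "profileId": profile_id,
            "dangerLevel": danger_level,
            "audioUrl": audio_url,
            "imageUrl": image_url,
        },
    }


marker = make_marker(40.4237, -86.9212, "user-123", "possible danger",
                     "gs://sayfe-bucket/audio.m4a",
                     "gs://sayfe-bucket/photo.jpg")
print(json.dumps(marker, indent=2))
```

On the web side, a Feature like this can be passed straight to `L.geoJSON(marker).addTo(map)`, with a click handler reading the `properties` to populate the victim's profile panel.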

Audio analyzing section of our machine learning code.

Challenges we ran into

Few of us had experience with machine learning or Google's machine learning APIs, so a big challenge proved to be analyzing our data accurately enough to avoid misreporting the user's danger level. Another challenge was transforming our data into formats usable by the Google Cloud APIs. A complication that arose with Firebase was sending audio to the database from iOS; we eventually learned that we had to attach the proper metadata to the upload.

One of the issues we ran into while trying to retrieve data from our Google Cloud bucket.

Accomplishments that we're proud of

We are extremely proud to have realized our vision for the app and built a working system that can help prevent abduction and other similar crimes. We are enthused by the dedication our members put into learning new skills, which enabled us to create an app unlike anything we've built before.

What we learned

We come from a wide range of skill levels and each learned a great deal from this project. Our biggest learning objective was utilizing Google Cloud Platform: we figured out how to effectively convert data and analyze it accurately using Google's machine learning platforms and models.

Some of the Google Cloud APIs we utilized.

What's next for Sayfe

We hope Sayfe could be implemented into law enforcement web systems and users' phone applications to help reduce the number of missing persons. Looking to the future of Sayfe, we want to improve the accuracy of our machine learning. With a more reliable ability to assess danger, Sayfe could alert authorities more efficiently and help prevent kidnapping, assault, human trafficking, and other crimes.

Building on this idea of processing many danger assessments, we would also work on more precise evaluation to help law enforcement filter situations and deploy teams in an orderly way, provided the assessments are valid. We also think that expanding this idea to more audio- and web-connected devices, and streamlining the machine learning to handle them, would further our mission of protecting people. This would help people feel more at ease in their environments at any time, for example, even when out running with only a smartwatch.

We are confident our project can address safety concerns and ultimately reduce the number of missing persons.
