We all know how it feels to be in a potentially dangerous situation. Whether it's walking around late at night, dealing with a confrontational individual, or riding with a sketchy Uber driver, many situations in life are not immediately dangerous but may suddenly escalate at any moment.

So, in today's day and age, what options do you have? You can call someone, but that takes time to explain what's happening, and they would have to stay on the line. Texting is also difficult: if the situation suddenly turns dangerous, your top priority is getting to safety, not typing. In general, although many partial solutions exist, none accomplishes all three of the things that matter:

1.) Getting the word out as a dangerous situation escalates, so others can call law enforcement if necessary, without disrupting the person's ability to get to safety

2.) Sending location information in order to get help to the right place quickly

3.) Leaving a paper trail behind (archived video, GPS information, etc.)

What it does

Our app is an iOS mobile app that livestreams video and notifies the people of your choosing. A user adds the individuals they want notified whenever they go live: the people who should know their whereabouts and who can look after them through the livestream and GPS location.

Once the stream button has been tapped, a text message is sent to all listed individuals containing your current GPS location and a link to watch the livestream.
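A minimal sketch of how that alert could be assembled in Swift, using CoreLocation for the fix and MessageUI to prefill the text. The message wording, the maps-link format, and both function names are illustrative assumptions, not the app's actual implementation; note also that on iOS an app can only prefill an SMS for the user to send (a server-side SMS service would be needed to send it automatically).

```swift
import CoreLocation
import MessageUI
import UIKit

// Builds the alert text sent when the stream button is tapped.
// The wording and maps-link format are illustrative.
func alertBody(for location: CLLocation, streamURL: URL) -> String {
    let lat = location.coordinate.latitude
    let lon = location.coordinate.longitude
    return "I'm going live. Watch: \(streamURL)\n" +
           "My location: https://maps.google.com/?q=\(lat),\(lon)"
}

// Presents a prefilled message addressed to every saved contact.
// `host` is the presenting view controller, which also acts as the compose delegate.
func sendAlert(to recipients: [String], body: String,
               from host: UIViewController & MFMessageComposeViewControllerDelegate) {
    guard MFMessageComposeViewController.canSendText() else { return }
    let composer = MFMessageComposeViewController()
    composer.messageComposeDelegate = host
    composer.recipients = recipients
    composer.body = body
    host.present(composer, animated: true)
}
```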

This is what a typical text message would look like.

Here is where a user can add their family and friends to receive a notification upon starting a livestream.

Here is where the individuals in the contact list can view the livestream and archives.

How we built it

To achieve live streaming on iOS, we used LFLiveKit to output live video to RTMP-compatible endpoints. For our backend, we used AWS, specifically the AWS Elemental MediaLive service, which streamlined ingesting the stream in an RTMP-compatible format. All other functionality was built in Xcode using Swift and supporting libraries for accessing miscellaneous data such as location information.
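The LFLiveKit side of that pipeline can be sketched roughly as follows. The configuration presets and the ingest URL are placeholders, and exact LFLiveKit signatures may vary by version; treat this as a sketch, not the app's exact code.

```swift
import LFLiveKit
import UIKit

// Create a session with default audio and a medium-quality video preset.
let session: LFLiveSession = {
    let audio = LFLiveAudioConfiguration.defaultConfiguration(for: .default)
    let video = LFLiveVideoConfiguration.defaultConfiguration(for: .medium3)
    return LFLiveSession(audioConfiguration: audio, videoConfiguration: video)!
}()

// Point the session at the MediaLive RTMP ingest endpoint and go live.
// The URL below is a placeholder, not a real endpoint.
func startStreaming(previewIn view: UIView) {
    session.preView = view   // live camera preview
    let info = LFLiveStreamInfo()
    info.url = "rtmp://<medialive-input-endpoint>/<app>/<stream-key>"
    session.startLive(info)
}

func stopStreaming() {
    session.stopLive()
}
```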

This is the view from the AWS MediaLive dashboard. This is where the RTMP-compatible video is fed in.

This is a more in-depth view of the AWS MediaLive dashboard, showing how our app interfaces with AWS.

Challenges we ran into

This was the first time anyone on our team had worked on mobile app development, let alone on such an ambitious project, so we consulted many individuals and mentors about what they would recommend we use. We originally started with Swift, but after struggling for a long time to get anything to come together, we were drawn to React Native based on a mentor's input and its great live-debugging capabilities. However, React Native is a young framework: there was very little documentation to help us get our feet wet, and existing examples tended not to work because of versioning issues (with CocoaPods, for example). In the end, we switched back to Swift. Even as our final choice, Swift had its kinks: simple tasks such as changing background colors and linking pages together were difficult because Swift's frequent updates keep changing method signatures, rendering documentation and StackOverflow answers outdated.

In terms of front-end development, most of the challenges were getting Swift and Xcode to play together nicely. Many of the packages and libraries we used were built on older versions of Swift, so much of that code had to be edited and overhauled for newer Swift versions. Furthermore, since our app relied on live streaming from the device's camera, we were limited in which machines we could test those features on, as testing required connecting physical devices (and we had only two working USB-C to Lightning cables).

In terms of back-end development, it was very difficult to discern what our ambitious project needed in order to stream video live from an iOS device directly, without relying on platforms such as YouTube Live or Twitch. In the end, we settled on AWS Elemental MediaLive, which let us both livestream video and access the recordings after the fact. We spent a great deal of time reading about RTMP (Real-Time Messaging Protocol), how to interface with it, and how to use stream keys. Much of the available information was conflicting, and on top of that, a majority of our live-streaming debugging turned out to be chasing failures caused by blocked ports on the school Wi-Fi. Moving exclusively to mobile hotspots solved many of the firewall issues.

Accomplishments that we're proud of

We set out with a rather ambitious goal and were able to (mostly) meet our expectations while pushing ourselves out of our comfort zone. We would be lying if we said we never felt like quitting, but we stuck with it thanks to the help of several mentors (special thanks to Oscar Pan!) and the great environment that is a hackathon.

Getting video live-streamed from an iPhone directly to an RTMP-compatible outlet was an absolutely monumental milestone for us, and was the morale boost that we needed after spending several hours trying to work out network issues, language issues, versioning issues, the whole nine yards. Once we started to see results, we were even more motivated to achieve our goals.

Working with the AWS suite and RTMP streaming was another giant milestone to overcome. Considering that we had to wade through 10-20 articles and piles of documentation to find the relevant information, it was an amazing feeling to see it all come together. For example, when live-streaming to AWS there is usually an RTMP address and a stream key, but those names are not universal: other services use terms such as stream name, stream ID, and more. It was also unclear how AWS expected this information to be formatted (as address:key? the address in one field and the key in another?), but we eventually learned that the key is simply appended to the address after a slash.
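Concretely, the lesson was that the two console fields are just halves of one RTMP URL, joined by a slash. The values below are placeholders, not real endpoints or keys:

```swift
// MediaLive shows the ingest address and the stream name/key as separate fields,
// but the RTMP client wants a single URL with the key appended after a slash.
let ingestAddress = "rtmp://<input-endpoint>:1935/live"   // placeholder
let streamKey = "<stream-key>"                            // placeholder
let rtmpURL = "\(ingestAddress)/\(streamKey)"             // rtmp://.../live/<stream-key>
```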

What we learned

As a team, we learned about a completely different side of coding that we had never dabbled in before: mobile development. We learned about streaming to RTMP outlets, the difficulty of getting several libraries to play nicely with each other, and how to interface with AWS. Beyond the technical skills, we also picked up important soft skills: how to plan out a project and execute it from start to finish, how to struggle and persevere together, and how to ask the right questions and get the right help when it is available.

What's next

If given more time, there are many areas that we would have loved to improve.

Additional Functionality:

Pattern / passcode lockout (to prevent false livestreams, e.g. children handling phones)

One feature we could add in the future is a passcode prompt before a livestream starts, to prevent friends, children, or pets from accidentally going live. Because these livestreams signal danger, adding a layer of confirmation would reduce the number of times the user's close friends and family are falsely notified. This could live in a settings page, with a toggle so each user can turn it on or off according to their preference.

Live GPS updates in a consistent manner

Another enhancement is the ability to track the user's location in real time while the video is streaming, via a mini map in the corner of the stream. This would help in cases where the user's location in the middle or at the end of the stream is far from where they started, so that viewers always know accurately where the user is throughout the stream.
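A sketch of the tracking half of that feature, assuming CoreLocation's continuous updates feed a callback that the streaming layer would forward to viewers (the forwarding and mini-map rendering are not shown, and the class name is hypothetical):

```swift
import CoreLocation

final class LiveLocationTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var onFix: ((CLLocationCoordinate2D) -> Void)?

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        // Requires the "location updates" background mode and Always authorization.
        manager.allowsBackgroundLocationUpdates = true
        manager.requestAlwaysAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }
        onFix?(latest.coordinate)   // e.g. push to the mini map alongside the stream
    }
}
```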

Delay the cloud upload if there is no Wi-Fi or data signal

This could come in handy in remote locations where internet service is inconsistent or nonexistent and the user cannot immediately stream their video. The idea is that the full video is stored locally on the user's phone until the phone detects an internet signal, at which point the device automatically begins the cloud upload.
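One way to detect the moment connectivity returns is Apple's Network framework. This sketch assumes a hypothetical uploadPendingRecordings() routine that pushes locally saved video to cloud storage:

```swift
import Network
import Foundation

// Hypothetical: upload any locally buffered recordings to cloud storage.
func uploadPendingRecordings() { /* push saved files to the backend */ }

// Wait for connectivity, then kick off the deferred upload once.
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    if path.status == .satisfied {
        uploadPendingRecordings()
        monitor.cancel()
    }
}
monitor.start(queue: DispatchQueue(label: "connectivity-monitor"))
```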

Speech recognition for key words

Speech recognition could be incorporated for greater safety. The software could analyze the user's audio stream and listen for words commonly associated with a dangerous situation or emergency. For example, if the user says "Help!" during the stream, the app could send an additional, urgent notification to the list of contacts. This would be very useful in emergency scenarios where the user needs immediate assistance.

Integration into major operating systems

We envision this functionality one day being integrated into operating systems as a standardized safety feature. Many new features could follow, one of the most important being a quick gesture on the lock screen or sleep screen that automatically launches the app and begins recording. That level of integration would let the user start streaming even faster, saving a few extra seconds that could potentially save a life.
