Shootings occur all too often in today's world. In many cases the shooter is later found to have been suffering from an anxiety disorder, and by then it is too late to help. Worse, parents and family friends often have no idea that their child or friend was going through such mental health issues. This applies not only to shootings but also to murder, suicide, and much more.
Our goal is to decrease the number of people suffering, primarily from depression. Realizing the broader impact this could have, we also extended the app to help disabled people.
What it does
The user grants the app permission to continuously record audio. At each interval, the recording is analyzed for signs of distress. If distress is detected, an alert pops up on the user's screen and a second alert is sent to their designated contact. The contact uses the same app; when a distress alert arrives, two buttons are shown: one that uses the SafeTrek API to call emergency officials with the user's location, and one that cancels the alert.
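The flow above could be sketched roughly as follows. This is a minimal illustration, not the app's actual code: names like `analyzeInterval`, `buildAlert`, and `resolveAlert` are hypothetical, and the SafeTrek call is shown only as a comment.

```javascript
// Classify one recording interval. The real app would run an audio
// analyzer here; this stub just flags intervals whose distress score
// crosses a threshold.
function analyzeInterval(distressScore, threshold = 0.7) {
  return distressScore >= threshold ? "distressed" : "ok";
}

// Build the alert sent to the designated contact, carrying the
// user's location so emergency services can be directed to it.
function buildAlert(userId, location) {
  return { userId, location, status: "pending" };
}

// The contact's two choices: confirm (call emergency services, e.g.
// via the SafeTrek API, using the user's location) or cancel.
function resolveAlert(alert, action) {
  if (alert.status !== "pending") return alert;
  if (action === "confirm") {
    // hypothetical call: safetrek.createAlarm({ location: alert.location })
    return { ...alert, status: "emergency_contacted" };
  }
  return { ...alert, status: "cancelled" };
}
```

For example, `resolveAlert(buildAlert("user1", { lat: 40.7, lng: -74.0 }), "confirm")` would return an alert whose status is `"emergency_contacted"`, while passing `"cancel"` would return `"cancelled"`.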
How I built it
Challenges I ran into
We were unable to fully implement the facial-recognition analyzer and the audio analyzer.
Accomplishments that I'm proud of
As a beginner hacking team (this was our first hackathon), we were able to develop a working demo app that runs on iPhone, Android, or computer.
What I learned
We learned a variety of new skills and improved existing ones. We expanded our knowledge of the cloud, machine learning, and React Native for app development. We also improved as debuggers and as team members.
What's next for CAPS
The Zignal signal-processing library will allow us to further analyze the user's audio: frequency and pitch can help us better understand the user's tone and emotion. We also plan to transcribe recordings with the Cloud Speech API and run sentiment analysis on them, integrate the SafeTrek API into the contact interface within the mobile app, and add a facial-expression analyzer to better detect distress.
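To illustrate the frequency/pitch idea, here is a toy pitch estimator using the zero-crossing rate of a mono sample buffer. A real implementation (for instance with a signal-processing library like the one mentioned above) would use autocorrelation or an FFT; this sketch and the function name `estimatePitchHz` are our own illustration, not part of any library.

```javascript
// Estimate the dominant pitch of a mono signal from its zero-crossing
// rate: a pure tone crosses zero twice per cycle, so
// pitch ≈ crossings / 2 / duration.
function estimatePitchHz(samples, sampleRate) {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    // Count sign changes between consecutive samples.
    if ((samples[i - 1] < 0) !== (samples[i] < 0)) crossings++;
  }
  const seconds = samples.length / sampleRate;
  return crossings / 2 / seconds;
}

// Example: one second of a 440 Hz sine sampled at 8 kHz should
// produce an estimate close to 440 Hz.
const sampleRate = 8000;
const samples = Array.from({ length: sampleRate }, (_, i) =>
  Math.sin(2 * Math.PI * 440 * (i / sampleRate))
);
const pitch = estimatePitchHz(samples, sampleRate);
```

Tracking how this estimate (and the signal's energy) shifts between intervals is one simple way tone changes could feed into distress detection; zero-crossing counting breaks down on noisy speech, which is why a proper signal-processing library is the planned next step.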