[Image: an example search query that would be flagged for potential mental health concerns.]
[Image: the text message sent to notify an emergency contact of a potentially worrisome search query. The user's response is used to refine the machine learning algorithm.]
Inspiration
The signs that someone may be suffering from a mental illness often go unrecognized, whether because their loved ones lack experience identifying warning signs or because of self-denial. By providing this service, we hope to foster open discussion between parents and children about how to detect mental health issues and how to get help.
What it does
Torch is a Chrome extension that runs a machine learning algorithm on search queries in real time to detect signs that a person may be considering harmful behavior or exhibiting other symptoms of mental illness. When a query is flagged, Torch sends a text message about it to a designated emergency contact.
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for Torch
We plan to expand this idea to other forms of media, such as group messages. We also hope to build a multi-user web platform that displays the data in a clearer format, and to move the server to the cloud so the service is accessible to more people.
We'd like to thank all the organizers and mentors of LA Hacks and the sponsoring companies. Without them, we would probably still be stuck debugging our code! We'd also like to give a special shoutout to Matt Reyes, one of our mentors, for dedicating so much of his time to helping us develop our machine learning algorithm.