Here is the Java code

Copy and paste it from this link: https://docs.google.com/document/d/1m0rVzaiMqlzD5ewZFhqMt2OEQZsVWAwG9p4GVzMnnlM/edit

Inspiration

This app is important to us because we believe it can help save lives, especially in an emergency. If someone is currently experiencing an overdose, a bystander could use the detect symptoms feature to identify the overdose, then use the chatBot to call for help and ask for more information.

What it does

Our goals for reducing drug use were to address loneliness among drug users, the stigma around drug use, and the problem of repeated overdoses. Our other goal was to use the Google Vision API to detect drug overdoses.

People are often alone when they overdose, so notifications go off every few hours to make sure the user is safe and healthy. The app uses a three-strike system: 1st strike calls a help line, 2nd strike automatically calls a friend to check up on you, and 3rd strike automatically calls 911. The chatBot also talks to users to keep them company or to call for help.
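To illustrate the idea, here is a minimal sketch of the three-strike escalation (not the code from the link above; the helper methods callHelpLine(), callFriend(), and call911() are hypothetical placeholders):

```java
// Minimal sketch of the three-strike escalation (helper methods are placeholders).
public class SafetyCheck {
    private int missedChecks = 0; // strikes accumulated when check-ins go unanswered

    // Called when a scheduled check-in notification goes unanswered.
    public void onCheckInMissed() {
        missedChecks++;
        switch (missedChecks) {
            case 1:  callHelpLine(); break; // 1st strike: call a help line
            case 2:  callFriend();   break; // 2nd strike: call a friend to check up
            default: call911();      break; // 3rd strike: call 911
        }
    }

    // Called when the user responds that they are safe.
    public void onCheckInConfirmed() {
        missedChecks = 0; // reset strikes
    }

    private void callHelpLine() { /* place a call to a help line */ }
    private void callFriend()   { /* place a call to an emergency contact */ }
    private void call911()      { /* place a call to 911 */ }
}
```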

We address social factors like stigma by having the chatBot properly inform people about the symptoms of drug use. By normalizing this topic, people may feel more open to talking about their personal struggles.

People who overdose are likely to overdose again, but the distinct physical symptoms of different types of overdoses can be detected using the Google Vision API. If images are taken with a camera, the app can determine whether someone is currently experiencing a particular overdose, and the user can then ask the chatBot for help.
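As a rough illustration of how this could work with the Google Cloud Vision Java client (not our actual submission code; the symptom keyword list here is a made-up placeholder):

```java
import com.google.cloud.vision.v1.*;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class SymptomDetector {
    // Placeholder keywords; a real app would use a vetted medical symptom list.
    private static final List<String> SYMPTOM_LABELS = Arrays.asList("sleep", "skin", "lip");

    public static void detect(String imagePath) throws Exception {
        // Build a label-detection request from an image file.
        ByteString imgBytes = ByteString.copyFrom(Files.readAllBytes(Paths.get(imagePath)));
        Image img = Image.newBuilder().setContent(imgBytes).build();
        Feature feature = Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION).build();
        AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
                .addFeatures(feature).setImage(img).build();

        try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
            BatchAnnotateImagesResponse response = client.batchAnnotateImages(Arrays.asList(request));
            // Compare the returned labels against the symptom keywords.
            for (EntityAnnotation label : response.getResponses(0).getLabelAnnotationsList()) {
                for (String symptom : SYMPTOM_LABELS) {
                    if (label.getDescription().toLowerCase().contains(symptom)) {
                        System.out.println("Possible symptom: " + label.getDescription()
                                + " (score " + label.getScore() + ")");
                    }
                }
            }
        }
    }
}
```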

How we built it

We used the Google Vision API, Google Dialogflow, and Java.

Here are the different menus:
- Welcome: chatBot or detectSymptoms
- chatBot: how are you, help, or notif
- Help: information, friend, 911, or helpline
- Notif: get hours passed or add an hour
- detectSymptoms: n/a

*note: notifications are sent 3 hours apart. To speed up time and see the notification functionality in action, just add an hour.
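Below is a minimal sketch of how a menu selection from the list above could be sent to a Dialogflow agent and answered, using the standard Dialogflow v2 Java client; the project ID and session ID are assumed to come from the app's own configuration:

```java
import com.google.cloud.dialogflow.v2.DetectIntentResponse;
import com.google.cloud.dialogflow.v2.QueryInput;
import com.google.cloud.dialogflow.v2.SessionName;
import com.google.cloud.dialogflow.v2.SessionsClient;
import com.google.cloud.dialogflow.v2.TextInput;

public class ChatBot {
    // Sends a user message (e.g. "help" or "notif") to a Dialogflow agent
    // and returns the agent's reply for the matched intent.
    public static String reply(String projectId, String sessionId, String userMessage) throws Exception {
        try (SessionsClient sessions = SessionsClient.create()) {
            SessionName session = SessionName.of(projectId, sessionId);
            TextInput.Builder text = TextInput.newBuilder()
                    .setText(userMessage).setLanguageCode("en-US");
            QueryInput query = QueryInput.newBuilder().setText(text).build();
            DetectIntentResponse response = sessions.detectIntent(session, query);
            return response.getQueryResult().getFulfillmentText();
        }
    }
}
```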

Challenges we ran into

On the front end, we moved back and forth between Android and iOS applications. Since both were difficult to work with, combining the UI with the code took longer than expected. We also had challenges with the Google APIs: since this was our team's first time using the technology, we each had to learn the proper format and slowly work through the appropriate steps to reach our objective. The next issue was integrating the APIs with the front end of our project, which required a deeper understanding of the Android and iOS IDEs. On the back end, we struggled to put everything together.

Accomplishments that we’re proud of

Trying something new and getting a lot done in the time allotted to us.

What we learned

From this experience, our team primarily gained a better understanding of communication. Because this hackathon was hosted virtually through Discord, we each had to do our part to keep the interactions around the project continuous and to discuss the problems we encountered. The workshops also taught us a lot.

What's next for Envision

We plan to improve the app by using the Google Maps API to find the nearest help centers when calling 911. We could also use the Text-to-Speech API for the chatBot. For notifications, we could ask what time(s) of day to send a notification and use the real time.
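As a sketch of the Text-to-Speech idea (a possible future addition, not something already in the app), the Google Cloud Text-to-Speech Java client could turn a chatBot reply into audio:

```java
import com.google.cloud.texttospeech.v1.AudioConfig;
import com.google.cloud.texttospeech.v1.AudioEncoding;
import com.google.cloud.texttospeech.v1.SsmlVoiceGender;
import com.google.cloud.texttospeech.v1.SynthesisInput;
import com.google.cloud.texttospeech.v1.SynthesizeSpeechResponse;
import com.google.cloud.texttospeech.v1.TextToSpeechClient;
import com.google.cloud.texttospeech.v1.VoiceSelectionParams;
import com.google.protobuf.ByteString;
import java.io.FileOutputStream;
import java.io.OutputStream;

public class SpeakReply {
    // Converts a chatBot reply to an MP3 file using Google Cloud Text-to-Speech.
    public static void speak(String replyText, String outputPath) throws Exception {
        try (TextToSpeechClient client = TextToSpeechClient.create()) {
            SynthesisInput input = SynthesisInput.newBuilder().setText(replyText).build();
            VoiceSelectionParams voice = VoiceSelectionParams.newBuilder()
                    .setLanguageCode("en-US")
                    .setSsmlGender(SsmlVoiceGender.NEUTRAL)
                    .build();
            AudioConfig audioConfig = AudioConfig.newBuilder()
                    .setAudioEncoding(AudioEncoding.MP3)
                    .build();
            SynthesizeSpeechResponse response = client.synthesizeSpeech(input, voice, audioConfig);
            ByteString audio = response.getAudioContent();
            try (OutputStream out = new FileOutputStream(outputPath)) {
                out.write(audio.toByteArray());
            }
        }
    }
}
```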

Built With

Java, Google Vision API, Google Dialogflow
