Inspiration

Often when we get injured we get "stuck," unsure of what to do. This is especially true in emergencies such as natural disasters, when it's hard to know how to handle even mundane things like minor injuries. Our hope is to reduce the resources spent addressing these smaller issues by providing an AI solution to help those in need.

What it does

Pocket Doc runs a rough diagnosis on a photo of an injury taken from within the app, tells the user what it thinks the injury is, and suggests several resources for dealing with it.

How we built it

We first wrote a mobile client in React Native and built the visual recognition with IBM Watson. We then connected the two with a Flask app deployed on Google App Engine that processed and forwarded requests from the client to IBM Watson and back.
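A minimal sketch of that middle layer might look like the following. This is not our actual deployed code; the `/diagnose` route name and the response shape are illustrative, and the Watson call is injected as a plain callable so the forwarding logic can be shown (and tested) without IBM credentials:

```python
from flask import Flask, request, jsonify


def create_app(classify):
    """Build the Flask app that sits between the mobile client and Watson.

    `classify` is a callable taking raw image bytes and returning a
    diagnosis dict. In production it would wrap a call to IBM Watson
    Visual Recognition; here it is injected as an assumption so the
    request pipeline can run stand-alone.
    """
    app = Flask(__name__)

    @app.route("/diagnose", methods=["POST"])
    def diagnose():
        # The React Native client uploads the injury photo as multipart
        # form data under the "image" field (an illustrative field name).
        image = request.files.get("image")
        if image is None:
            return jsonify({"error": "no image uploaded"}), 400
        # Forward the image bytes to the classifier and relay the result.
        result = classify(image.read())
        return jsonify(result), 200

    return app
```

Keeping the classifier injectable also made it easier to swap in stub responses while debugging the client-to-server POST requests.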

Challenges we ran into

We were new to both App Engine and IBM Visual Recognition, so we faced a lot of difficulties trying to understand and use them. We also had a lot of trouble fine-tuning the diagnosis model to make it more accurate.

Accomplishments that we're proud of

We managed to build the entire diagnosis query pipeline with new services that we were excited to use.

What we learned

We learned how to deploy to App Engine, use IBM Visual Recognition, and debug POST requests.

What's next for Pocket Doc

We hope to make the ML model more reliable and to add more features, such as contacting EMS for serious injuries and community sharing of medicine.
