Update: this project was chosen as a finalist in the HackUPC Fall 17 hackathon!

Presentation 1

Presentation 2


New technology is being applied to improve people's quality of life, and e-health is one of the promising fields that will make an impact on everyday living. Healthcare innovation could make diagnosis and medical recommendations more accessible to everyone.
Connected health-assistance services powered by AI and machine learning could therefore become commonplace in the future. Our chatbot aims to put all these innovations at the service of society.

What it does

FirstAId can identify several diseases from a picture provided by the user. It also gives the user useful information, such as possible treatments and charts showing recognition statistics.
It can take user-specified symptoms into account to improve disease classification.
It comes with an assistant that suggests nearby healthcare centers, with their locations, where the detected disease can be treated.
Users can subscribe to different city alert channels and will be warned whenever a new viral infection is reported in that city; the subscription can be revoked at any time. In addition, users can report those viral infections so that other users are notified immediately.
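As an illustration of how the alert channels could work, here is a minimal sketch assuming an in-memory subscription registry; all names and the notify callback are hypothetical stand-ins for the bot's real persistence and messaging layer.

```python
from collections import defaultdict

# city name -> set of subscribed chat ids
subscriptions = defaultdict(set)

def subscribe(chat_id, city):
    """Add a user to a city's alert channel."""
    subscriptions[city.lower()].add(chat_id)

def unsubscribe(chat_id, city):
    """Revoke a subscription at any time."""
    subscriptions[city.lower()].discard(chat_id)

def report_infection(city, disease, notify):
    """Fan out a user-reported viral infection to every subscriber of that city."""
    for chat_id in subscriptions[city.lower()]:
        notify(chat_id, f"Alert: {disease} reported in {city}")

# Example: report_infection("Barcelona", "influenza", lambda cid, msg: print(cid, msg))
```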

How we built it

The image disease detection/classification system runs on MobileNet, a class of convolutional neural networks (CNNs) that is lightweight, resource-friendly, and aimed at providing reliable results even with relatively small datasets.
Inception_v3 (another CNN) was also tested, but the resource/accuracy trade-off is better with MobileNet.
Various datasets were built from medical scientific publications, plus some web scraping.
The CNN was then retrained to classify several diseases, reaching 84% accuracy on average.
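For reference, a hedged sketch of this kind of MobileNet retraining using tf.keras transfer learning is shown below; the directory layout, image size, class count, and training schedule are illustrative assumptions, not the exact setup used at the hackathon.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 6  # hypothetical number of disease categories

# Small labelled dataset organised as data/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

# Pretrained MobileNet as a frozen feature extractor
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```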
The chatbot is built with the Telegram Bot API. We chose a microservice-based architecture, so the services are containerized with Docker and deployed on an AWS EC2 instance.
Python was the main programming language.
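Below is a minimal sketch of the photo-handling flow, assuming the python-telegram-bot library (v13-style synchronous API); classify_image() is a hypothetical stand-in for the MobileNet inference service, and the token is a placeholder.

```python
from telegram.ext import Updater, MessageHandler, Filters

def classify_image(path):
    """Placeholder for running the retrained MobileNet on the downloaded photo."""
    return "sample_disease", 0.84

def handle_photo(update, context):
    # Download the highest-resolution version of the photo the user sent
    photo_file = update.message.photo[-1].get_file()
    path = photo_file.download("user_photo.jpg")
    disease, confidence = classify_image(path)
    update.message.reply_text(
        f"Detected {disease} ({confidence:.0%} confidence). "
        "Reply with your symptoms to refine the diagnosis.")

updater = Updater("TELEGRAM_BOT_TOKEN")
updater.dispatcher.add_handler(MessageHandler(Filters.photo, handle_photo))
updater.start_polling()
updater.idle()
```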

Challenges we ran into

Since only a few, limited image datasets are available, we had to build our own datasets and experiment with different CNN retraining parameters.
In addition, the datasets were quite small, so heavy data augmentation had to be applied to improve accuracy and prevent overfitting.
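As an illustration, this augmentation could be done along these lines with Keras' ImageDataGenerator; the specific transforms and ranges are assumptions, since the exact settings we used are not documented here.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,        # random rotations
    width_shift_range=0.1,    # small horizontal shifts
    height_shift_range=0.1,   # small vertical shifts
    zoom_range=0.2,           # random zoom in/out
    horizontal_flip=True,     # mirrored copies
    fill_mode="nearest")

# Stream augmented batches straight from the small image folders
train_gen = augmenter.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32)
```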
Some diseases look similar, and the CNN had trouble telling them apart. This was (almost) solved by merging visually similar diseases into a single class and using user-provided symptoms to decide the final diagnosis. Merging also improved overall CNN accuracy, since fewer categories means more images per category.
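Here is a rough sketch of how user-provided symptoms could disambiguate a merged class after the CNN prediction; the symptom table and class names are hypothetical examples, not the project's real data.

```python
# Merged CNN class -> candidate diseases and their characteristic symptoms
SYMPTOM_MAP = {
    "merged_rash_class": {
        "disease_a": {"fever", "itching"},
        "disease_b": {"swelling"},
    },
}

def refine_diagnosis(cnn_label, user_symptoms):
    """Pick the candidate disease whose symptoms best match the user's report."""
    candidates = SYMPTOM_MAP.get(cnn_label)
    if not candidates:
        return cnn_label  # class was not merged, keep the CNN result
    return max(candidates,
               key=lambda d: len(candidates[d] & set(user_symptoms)))

# refine_diagnosis("merged_rash_class", ["fever"]) -> "disease_a"
```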

Accomplishments that we're proud of

First of all, being awarded a finalist position in the HackUPC Fall 17 hackathon is very gratifying.
We were able to submit a solid project with several features, and we are proud of the overall accuracy and reliability of the image-based diagnosis.
We generated TensorBoard plots, which were very useful for tuning experiment parameters and comparing performance across datasets, and we compared CNNs to choose the one that best fits our needs.

What we learned

We are now aware of the importance of data augmentation techniques for maintaining reliable CNN predictions on modest-sized datasets. We had never worked with image recognition before, and this was an exciting challenge to tackle.

What's next for FirstAId

We would like to add more diseases to the project in order to expand the user base, along with new features such as a color-blindness test.
