People often have medical concerns that they choose not to investigate. Whether due to fear, lack of time, or lack of awareness, such unexplored medical concerns can lead to devastating consequences. For example, melanoma (the most dangerous form of skin cancer) is preventable almost 90% of the time, and almost all cases are curable if identified at an early stage. Our goal was to create an app that helps people quickly and easily test for visible symptoms of diseases they suspect they might have, while being extensible enough to support any visible disease with an appropriate data set.
What it does
The app takes a photo of the specific body area and sends the image to a web server hosted on Azure. The server forwards the image to the appropriate trained ML model, also hosted on Azure, which returns whether or not the user is likely to have the disease. If they are, the app provides information on the disease, other symptoms to look out for that may indicate they have it, and the address of their closest medical practitioners, strongly recommending a visit.
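The server-side flow above can be sketched roughly as follows. This is a minimal illustration, not our actual code: the Azure-hosted model is stubbed out, and the names (`stub_model_score`, `DISEASE_INFO`) and the 0.5 decision threshold are all assumptions for the sake of the example.

```python
# Illustrative disease info the app shows when a result comes back positive.
DISEASE_INFO = {
    "skin_mole": {
        "name": "Melanoma",
        "other_symptoms": ["asymmetric moles", "changing colour or size"],
    },
}

def stub_model_score(disease: str, image_bytes: bytes) -> float:
    """Stand-in for posting the image to the trained model's scoring
    endpoint on Azure; returns a likelihood score."""
    return 0.9 if image_bytes else 0.0

def classify_image(disease: str, image_bytes: bytes) -> dict:
    """Route an uploaded image to the model for `disease` and build the
    response the app displays."""
    score = stub_model_score(disease, image_bytes)
    likely = score >= 0.5  # assumed decision threshold
    reply = {"disease": disease, "likely": likely}
    if likely:
        # The real app would also attach nearby medical practitioners here.
        reply["info"] = DISEASE_INFO.get(disease, {})
    return reply
```

The app then renders `reply["info"]` (symptoms, nearby practitioners) whenever `likely` is true.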
Challenges we ran into
The hardest part was finding good enough data sets for the diseases we wanted to support, such as scoliosis. We decided to focus on one disease we did have data for, cancerous skin moles, and build the whole pipeline for it first, adding support for more diseases later. Due to time constraints, we were only able to complete the full pipeline for this one disease. However, the app is extensible enough that nothing fundamental needs to change to support more diseases: just train an additional model (using the same regression and classification algorithms as the first) and add the new options to the frontend.
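The extensibility idea can be sketched as a simple registry: adding a new disease only means registering its trained model, and the routing code never changes. All names here are illustrative, and the lambdas stand in for real trained models.

```python
# Registry mapping disease name -> scoring function (a trained model in
# the real app). The frontend would list the registry's keys as options.
MODEL_REGISTRY = {}

def register_disease(name, model_fn):
    """Add support for a new disease by registering its trained model."""
    MODEL_REGISTRY[name] = model_fn

def predict(name, image_bytes):
    """Dispatch an image to the registered model; unchanged as diseases grow."""
    return MODEL_REGISTRY[name](image_bytes)

# Adding skin-mole support now, and scoliosis later, touches no pipeline code:
register_disease("skin_mole", lambda img: 0.9)   # stand-in trained model
register_disease("scoliosis", lambda img: 0.1)   # stand-in trained model
```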
Accomplishments that we're proud of
We are very proud to have completed a full working proof of concept for one specific disease, showing that our idea works and can be applied to any disease we can find data for.
What we learned
We learned a lot more about how ML and computer vision work, the requirements for good data to feed to your models, and how to do effective frontend development for Android apps using JS.
What's next for Medi-i
We certainly want to add support for as many diseases as we can, to help as many people as possible. Beyond that, we want to fine-tune the model's predictions with further image pre-processing and more training data, and polish the UI to make it more user-friendly and appealing.
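One kind of pre-processing we have in mind can be sketched in pure Python: scale 8-bit pixel values into [0, 1] and centre-crop every image to a fixed square, so each model sees input in a consistent form. The function names and crop size are illustrative, not our actual pipeline.

```python
def normalize(pixels):
    """Map 8-bit pixel values (0-255) to floats in [0, 1]."""
    return [[v / 255.0 for v in row] for row in pixels]

def center_crop(pixels, size):
    """Crop a size x size window from the middle of a 2D pixel grid."""
    h, w = len(pixels), len(pixels[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in pixels[top:top + size]]
```

In practice a library like Pillow or OpenCV would do this resizing, but the idea is the same: consistent, normalized input tends to improve model predictions.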