Inspiration

Both members of the team have connections to and experience in the health care field and wanted to work with machine learning technology. Creating a tool to identify household medicines seemed like an achievable foundation for a project that could become much more.

What it does

Medentifier compares a photographed medicine against the medicines included in its trained model.

How I built it

We used Google Cloud's AutoML to generate the model, and React Native to build an Android/iOS interface that runs each photo against the model using TensorFlow.js.
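
At a high level, the app loads the exported model, preprocesses the photo into a tensor, and ranks the prediction scores. Below is a minimal TypeScript sketch of that classification step, assuming the AutoML model was exported in TensorFlow.js graph-model format; the model URL, label list, and 224x224 input size are placeholders rather than our exact configuration, and on a device the photo would first be decoded (for example with decodeJpeg from @tensorflow/tfjs-react-native).

```typescript
import * as tf from '@tensorflow/tfjs';

// Placeholder label list; the real model covers 20 medicines.
const LABELS = ['ibuprofen', 'acetaminophen', 'aspirin', 'loratadine', 'diphenhydramine'];

// Placeholder URL for the exported AutoML Vision Edge model.
const MODEL_URL = 'https://example.com/medentifier/model.json';

let model: tf.GraphModel | null = null;

// Load the exported graph model once and reuse it across classifications.
async function loadModel(): Promise<tf.GraphModel> {
  if (!model) {
    model = await tf.loadGraphModel(MODEL_URL);
  }
  return model;
}

// Classify a photo that has already been decoded into an RGB tensor.
export async function identifyMedicine(photo: tf.Tensor3D) {
  const m = await loadModel();

  // Resize, normalize to [0, 1], and add a batch dimension.
  const input = tf.tidy(() =>
    tf.image.resizeBilinear(photo, [224, 224]).div(255).expandDims(0)
  );

  const scores = m.predict(input) as tf.Tensor;
  const probs = await scores.data();
  input.dispose();
  scores.dispose();

  // Pair each probability with its label and return the best match.
  const ranked = Array.from(probs)
    .map((prob, i) => ({ label: LABELS[i], prob }))
    .sort((a, b) => b.prob - a.prob);

  return ranked[0]; // e.g. { label: 'ibuprofen', prob: 0.91 }
}
```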

Challenges I ran into

Understanding the data structures associated with machine learning proved challenging, as both team members were novices in the field. We also needed to retrain the model after early testing, which consumed an immense amount of time.

Accomplishments that I'm proud of

The labeled datasets were partially collected with a data scraper for the initial training of the model. Thousands of additional photos, taken in various environments around one team member's house, were added later. This resulted in significantly higher accuracy for the medicines that were photographed, although unfortunately that covers only 5 of the 20 medicines in our data set.
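
The scraping step amounted to downloading candidate images into one folder per medicine label, ready for upload to AutoML. The following is a minimal sketch of that idea, with placeholder URLs rather than the sources we actually used.

```typescript
// Minimal sketch: download image URLs into one folder per medicine label.
// Requires Node 18+ for the global fetch; SOURCES entries are placeholders.
import { mkdir, writeFile } from 'node:fs/promises';
import path from 'node:path';

const SOURCES: Record<string, string[]> = {
  ibuprofen: ['https://example.com/ibuprofen-1.jpg'],
  acetaminophen: ['https://example.com/acetaminophen-1.jpg'],
};

async function scrape(outDir: string): Promise<void> {
  for (const [label, urls] of Object.entries(SOURCES)) {
    const dir = path.join(outDir, label);
    await mkdir(dir, { recursive: true });

    for (const [i, url] of urls.entries()) {
      const res = await fetch(url);
      if (!res.ok) continue; // skip broken links rather than failing the run
      const bytes = Buffer.from(await res.arrayBuffer());
      await writeFile(path.join(dir, `${label}-${i}.jpg`), bytes);
    }
  }
}

scrape('dataset').catch(console.error);
```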

What I learned

Effective machine learning takes a lot of time: training data should be collected as early as possible in a hackathon or other time-limited event, and the majority of the coding should be done while the model trains, rather than before.

What's next for Medentifier

We'd like to add user-photographed images to our ML database and to provide better guidance when it is not clear which medicine has been photographed. In these edge cases, we'd like to prompt for additional information to aid identification, such as asking the user to place a quarter in the frame for scale or to enter the numbers imprinted on the pill.
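
A rough sketch of how that fallback could work, with assumed confidence thresholds and prompt wording, is below: if the top prediction is not clearly ahead of the runner-up, the app asks for more information instead of guessing.

```typescript
// Hypothetical fallback flow for ambiguous classifications.
interface Prediction { label: string; prob: number; }

type Result =
  | { kind: 'match'; label: string; confidence: number }
  | { kind: 'needs-more-info'; prompt: string };

export function resolvePrediction(ranked: Prediction[]): Result {
  const [best, second] = ranked;

  // Accept only when the best score is high and well separated (assumed thresholds).
  if (best.prob >= 0.85 && (!second || best.prob - second.prob >= 0.25)) {
    return { kind: 'match', label: best.label, confidence: best.prob };
  }

  return {
    kind: 'needs-more-info',
    prompt:
      'We are not sure which medicine this is. Please place a quarter next to ' +
      'the pill for scale, or enter the numbers printed on it.',
  };
}
```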

Built With

React Native, TensorFlow.js, Google Cloud AutoML