Lab results are almost always sent to us by mail, long after we have left our primary care physician's office. Deciphering the acronyms and medical terms can be intimidating, and often we don't know how to approach these results. We built an interface in which a user can upload an image of their lab results and view them in a more digestible form.

What it does

Using optical character recognition, Lyvr delivers your lab results in layman's terms. Using OCR from the PyTesseract library as well as Google Cloud, we implemented a real-time image-to-text pipeline. Our ideal implementation runs the extracted text through our NLP algorithm to make sense of the report and present it in a digestible package.
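As a rough sketch of that image-to-text step (the helper names and the tiny acronym glossary below are illustrative placeholders, not our exact code), PyTesseract extracts raw text, and a simple glossary pass then swaps lab acronyms for plain language:

```python
import re

# Illustrative glossary mapping common lab acronyms to plain language.
LAB_TERMS = {
    "WBC": "white blood cell count",
    "HDL": "HDL ('good') cholesterol",
    "LDL": "LDL ('bad') cholesterol",
}

def ocr_image(path):
    """Run PyTesseract OCR on a lab-report image."""
    from PIL import Image   # third-party: pip install pillow
    import pytesseract      # third-party: pip install pytesseract
    return pytesseract.image_to_string(Image.open(path))

def translate_terms(text):
    """Replace known lab acronyms with layman's-terms descriptions."""
    def swap(match):
        return LAB_TERMS.get(match.group(0), match.group(0))
    # Match runs of 2+ capitals/digits, the usual shape of lab acronyms.
    return re.sub(r"\b[A-Z0-9]{2,}\b", swap, text)
```

For example, `translate_terms("WBC: 6.1")` would yield `"white blood cell count: 6.1"`, while unrecognized acronyms pass through unchanged.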

How I built it

The back-end was built with optical character recognition and a Python script. The front-end was built with HTML/CSS and JavaScript. Python Flask ties the two together: it serves the pages and runs the machine-learning back-end of the project.
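A minimal sketch of how these pieces can fit together (the route name and the placeholder processing function are assumptions, not our exact code): Flask receives the uploaded image and returns the processed results as JSON for the front-end tiles to render.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def process_report(image_bytes):
    """Placeholder for the OCR + translation back-end.

    In the real pipeline this would run PyTesseract on image_bytes
    and map the acronyms it finds to plain language.
    """
    return [{"term": "WBC", "plain": "white blood cell count"}]

@app.route("/upload", methods=["POST"])
def upload():
    # For simplicity this sketch reads the raw request body;
    # a multipart file field would work the same way.
    tiles = process_report(request.get_data())
    return jsonify(tiles=tiles)
```

The front-end JavaScript can then fetch `/upload` and fill each informational tile from the returned JSON.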

Challenges I ran into

We had a very difficult time working with OCR and linking the responsive web page to the data we were processing from the image. We also ran into problems when we attempted to dynamically update the informational tiles on the web page.

Accomplishments that I'm proud of

The interface is simple to understand and the OCR works! People who are more involved in their health are bound to live healthier lives, and we're hoping this app helps stimulate that involvement with a low-barrier entry into understanding your own body and self.

What's next for Lyvr

Hopefully, with more time and a better understanding of JavaScript, we will be able to add more helpful elements to our lab synthesizer. For example, patients would be able to enter family history and be alerted when certain levels (e.g., glucose or cholesterol) became too high.
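The planned alerting feature could start as something like this sketch (the threshold values are illustrative placeholders, not medical guidance, and would eventually be adjusted by the family history a patient enters):

```python
# Illustrative reference limits; real values would come from clinical
# sources and be personalized using the patient's family history.
THRESHOLDS = {
    "glucose": 100,      # mg/dL, fasting (placeholder value)
    "cholesterol": 200,  # mg/dL, total (placeholder value)
}

def check_alerts(results):
    """Return the names of any measurements above their threshold."""
    return [name for name, value in results.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]]
```

For example, `check_alerts({"glucose": 120, "cholesterol": 180})` flags only `"glucose"`.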

Built With

python, flask, html/css, javascript, pytesseract, google-cloud
