Inspiration
After a health problem, I had to change my lifestyle, including my diet. Finding a simple way to adopt a diet adapted to my specific condition was not straightforward: between scattered Google results, expensive applications, and miracle programs, nothing was both suited to me and easy to put in place. So I saw a great opportunity to create a bot / agent able to propose a solution based on my health records and lab results.
What it does
It is a simple chat that generates a diet plan based on information contained in PDFs or pictures of documents from my healthcare provider.
How we built it
A Gradio interface handles the user's questions and the documents they upload. Each document is passed through an OCR model (for images and/or PDF files), and the extracted text is provided as context to the LLM, which uses it to produce a diet plan.
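The document-to-plan pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual code: `extract_text` stands in for the real OCR model, and the `llm` callable stands in for whatever LLM client is used; all names are hypothetical.

```python
def extract_text(file_bytes: bytes) -> str:
    """Placeholder for the OCR step. A real implementation would run an
    OCR model over the image or PDF bytes; here we simply decode plain
    text so the sketch is self-contained."""
    return file_bytes.decode("utf-8", errors="ignore")


def build_prompt(ocr_text: str, question: str) -> str:
    """Combine the extracted health-record text with the user's question
    into a single prompt for the LLM."""
    return (
        "You are a dietary assistant. Based on the lab results below, "
        "propose a diet plan adapted to the user's condition.\n\n"
        f"Lab results:\n{ocr_text}\n\n"
        f"User question: {question}"
    )


def generate_plan(file_bytes: bytes, question: str, llm) -> str:
    """Full pipeline: OCR the document, build the prompt, query the LLM."""
    context = extract_text(file_bytes)
    return llm(build_prompt(context, question))
```

In the real app, `generate_plan` would be wired to a Gradio chat interface (e.g. via `gr.ChatInterface` with a file-upload component) so that uploaded documents and chat messages flow through the same function.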
Challenges we ran into
Getting the OCR analysis right and passing the right context to the LLM. Initially, the goal was to fine-tune a small model, but this was discarded because of the timeline and because it brought unnecessary complexity to the product.
Accomplishments that we're proud of
Integrating OCR analysis in the pipeline.
What we learned
Gradio, OCR models and HP AI Studio
What's next for Meal adviser
Add authentication and account management. Secure user documents in the cloud. Enhance the conversational mode.