Inspiration

The idea came from one of our friends, whose family member nearly died after taking two different medications that interacted badly.

What it does

Meet CareMate: your personal health assistant, powered by Google AI. We understand that life moves fast, and keeping track of safe medication practices can easily slip through the cracks. That's why we created CareMate, an AI-driven app that brings peace of mind right to your fingertips.

With CareMate, checking for safe medication use is as simple as taking a picture. Just point your phone at the medication you're about to take, and the app instantly provides clear, easy-to-follow advice on possible interactions with other meds, foods, or drinks. Whether it's grandma's daily pills or something for the kids, CareMate has you covered.

In addition to using Google's Gemini model, CareMate has been fed curated pharmaceutical reference material, delivering trustworthy and personalized insights for every combination. Say goodbye to guessing and hello to confident, informed decisions, for you and your loved ones. With CareMate, staying safe just got a whole lot easier.

How we built it

CareMate is designed to make medication safety easier by helping users check for potential interactions between medications, foods, and drinks. Behind the scenes, CareMate is powered by a blend of technologies that bring everything together seamlessly.

Core Technologies and Architecture

CareMate's backend is built with Python, which handles the app's processing tasks and ties everything together. For reading text from images, we use Tesseract OCR, which helps the app pull medication names out of the photos users upload, even handwritten or complex lists. To handle PDF documents, we rely on PyMuPDF, which extracts content from files without too much hassle.

How Google AI Powers CareMate

At its core, CareMate uses Google's Gemini AI models to give users the most accurate advice. There are two main AI functions at work:

  1. Identifying medications: When text is extracted from an image or PDF, we send it to the first Gemini model, which specializes in spotting medication names. This sorts through the text and picks out the relevant details, so users don't have to enter each medication manually.
  2. Checking for interactions: Once the medications are identified, the app sends them to a second Gemini model, which provides insights into possible interactions with other drugs, foods, or drinks. It's like having a virtual pharmacist in your pocket, giving tailored advice based on recognized pharmaceutical knowledge.

How It All Works Together

Here's the process in simple terms:

  1. Input: Users provide medication info through text, an image, or a PDF.
  2. Text extraction: If it's an image or PDF, we use Tesseract OCR or PyMuPDF to grab the text.
  3. Medication identification: The first Gemini model identifies any medication names in the text.
  4. Interaction check: A second model reviews the identified medications for interactions.
  5. User report: CareMate then shows a summary to help users make safe choices.

Why Gemini AI?

Google's Gemini models bring top-notch language processing to CareMate, ensuring each interaction check is accurate and thorough. This way, users can feel confident about the advice they receive without needing extensive medical knowledge.
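The steps above can be sketched in Python. This is a simplified illustration, not CareMate's actual source: the function names and prompt templates are our own assumptions, the Gemini calls are left out, and the PyMuPDF/pytesseract imports are deferred so each input type only needs its own library.

```python
from pathlib import Path

# Hypothetical prompt templates for the two Gemini stages.
IDENTIFY_PROMPT = (
    "You are a pharmacist's assistant. From the text below, list only the "
    "medication names, one per line.\n\n{text}"
)
INTERACTION_PROMPT = (
    "Describe known interactions between these medications, and with common "
    "foods or drinks, including severity.\n\n{meds}"
)

def extract_text(path: str) -> str:
    """Route an upload to the right extractor based on its file extension."""
    suffix = Path(path).suffix.lower()
    if suffix == ".pdf":
        import fitz  # PyMuPDF
        with fitz.open(path) as doc:
            return "\n".join(page.get_text() for page in doc)
    if suffix in {".png", ".jpg", ".jpeg"}:
        import pytesseract
        from PIL import Image
        return pytesseract.image_to_string(Image.open(path))
    return Path(path).read_text()  # plain text input

def build_identify_prompt(raw_text: str) -> str:
    """Prompt for stage 1: pull medication names out of extracted text."""
    return IDENTIFY_PROMPT.format(text=raw_text.strip())

def build_interaction_prompt(med_names: list[str]) -> str:
    """Prompt for stage 2: check the identified medications for interactions."""
    return INTERACTION_PROMPT.format(meds="\n".join(med_names))
```

In the real app, each prompt would be sent to a Gemini model and the second response rendered as the user-facing report.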

Challenges we ran into

This was our first hackathon and our first experience working with the Gemini API, and we were genuinely impressed by its capabilities. The journey wasn't without its hurdles, though. One of the main challenges we encountered was Gemini rejecting some of our prompts, labeling them as "underqualified." Initially this was frustrating, but with guidance from Ms. Bailey we found creative solutions: by refining our approach and writing more detailed, step-by-step instructions, we could guide the model through complex tasks. We also discovered that feeding the Gemini API PDFs of pharmaceutical material improved its accuracy, allowing it to make well-informed, professional assessments.

Handling text extraction from images and PDFs was also tricky at first, requiring us to integrate Tesseract OCR and PyMuPDF effectively, and making sure the extracted medication names were accurately recognized by Gemini was another key challenge. Each round of fine-tuning the prompts and integrating these tools taught us valuable lessons about managing AI-powered pipelines. Finally, we ended up using a few different Gemini models, which led to some very silly errors, like exhausting our API quota, but we were able to work around this to keep our output smooth.
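Our quota workaround amounted to retrying the call after a pause. A minimal sketch, assuming the helper below rather than our exact code (the real SDK signals quota exhaustion with `google.api_core.exceptions.ResourceExhausted`, which we match loosely here by message):

```python
import time

def call_with_backoff(make_request, retries=4, base_delay=2.0):
    """Call make_request(), retrying with exponential backoff on quota errors."""
    for attempt in range(retries):
        try:
            return make_request()
        except Exception as exc:
            # Retry only quota-style failures; re-raise anything else,
            # and give up after the final attempt.
            if "quota" not in str(exc).lower() or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In the app, `make_request` would be a closure around the Gemini call, e.g. `lambda: model.generate_content(prompt)`.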

Accomplishments that we're proud of

We taught ourselves how to incorporate APIs into our code, and we were amazed at how quickly we were able to tackle a real-world problem.

What we learned

We learned the fundamentals of using APIs, and we saw firsthand how AI can cut the cost, manpower, and time needed to build a working product.

What's next for CareMate

We want CareMate on every handheld device in the world. We want to see the numbers change so that no one has to die due to lethal medication interactions.

Built With

Python, Tesseract OCR, PyMuPDF, Google Gemini
