About the Project

In this section, we'll share what inspired MediSwift, what we learned, how we built the project, and the challenges we encountered.

Inspiration

A few days ago, I (Nirman Khadka, one of the team members) developed a bacterial infection on my face and scheduled a doctor's appointment at the health center. I had to wait two days, and by the time I saw the doctor, the infection had improved on its own, making the visit feel unnecessary. The appointment still cost me $30 (around 4,000 in my country's currency). The experience highlighted a gap in timely, affordable healthcare: even with insurance, accessing care was slow and costly, often resulting in appointments that were no longer needed. I began to wonder, "What if healthcare services could be made more accessible and affordable by automating parts of the treatment process?" That question inspired MediSwift: a way to make healthcare efficient and cost-effective, especially for students and people living abroad, by drawing on doctors in countries with lower consultation costs and automating much of the patient journey.

What It Does

MediSwift is an AI chatbot assistant, built on a large language model (LLM), that acts as a personal health expert. Patients enter their symptoms into the chatbot, which asks follow-up questions to gather details. The AI then generates a preliminary diagnosis report, which is sent to a doctor on the platform. The doctor can review, approve, or adjust the diagnosis, recommend additional tests, or suggest an in-person follow-up if needed. The app also handles digital prescriptions and follow-ups, with reminders for medications and routine checks based on patient preferences.
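To make the flow concrete, here is a minimal sketch of the intake loop described above: collect symptoms, generate follow-up questions, and assemble a draft report that stays pending until a doctor signs off. This is illustrative only; `query_llm`, the field names, and the status values are assumptions standing in for the real MediSwift backend and model API.

```python
def query_llm(prompt: str) -> str:
    # Placeholder for the real LLM call (e.g. a hosted LLaMA endpoint).
    # Here it just echoes a canned follow-up question.
    return "How long have the symptoms lasted, and do you have a fever?"

def run_intake(symptoms: str, answers: list[str]) -> dict:
    """Gather symptoms plus follow-up answers into a draft diagnosis report."""
    follow_up = query_llm(
        f"Patient reports: {symptoms}. Ask follow-up questions to clarify."
    )
    return {
        "symptoms": symptoms,
        "follow_up_question": follow_up,
        "answers": answers,
        # The report is never final on its own: a doctor must approve,
        # adjust, or order additional tests.
        "status": "pending_doctor_review",
    }

report = run_intake(
    "facial rash, mild swelling",
    ["started two days ago", "no known allergies"],
)
```

The key design point is the last field: the LLM only drafts the report, and every draft is gated behind human review before anything reaches the patient.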

How We Built It

  • Frontend: Built with Next.js.
  • Backend: Built with Django.
  • AI Model: We set up fine-tuning of an LLM on disease-specific datasets from Kaggle, logging the model to MLflow for deployment. With only 24 hours, we designed the fine-tuning pipeline using Hugging Face Transformers, but since a full training run would take around 57 hours, we relied on prompt engineering with existing models such as LLaMA-3 in the interim.
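The interim prompt-engineering approach can be sketched as a template that packs the structured intake data into a single instruction for a general-purpose model. The exact wording, field names, and requested output format below are assumptions, not the prompts we shipped.

```python
def build_diagnosis_prompt(symptoms: str, history: str, duration: str) -> str:
    """Assemble a triage prompt from structured intake fields.

    Illustrative template: a real deployment would also constrain the
    output format and add safety instructions.
    """
    return (
        "You are a clinical triage assistant. Based on the details below, "
        "draft a preliminary diagnosis report for a doctor to review.\n"
        f"Symptoms: {symptoms}\n"
        f"Relevant history: {history}\n"
        f"Duration: {duration}\n"
        "Respond with: likely conditions, recommended tests, and urgency level."
    )

prompt = build_diagnosis_prompt(
    symptoms="facial bacterial infection",
    history="none reported",
    duration="2 days",
)
```

Prompt engineering like this trades some accuracy for zero training time, which is why it served as the stopgap while the fine-tuned model was still out of reach.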

Challenges

Our first major challenge was hardware limitations. My laptop couldn't handle the model fine-tuning, and Google Colab's free tier wasn't sufficient. We optimized the fine-tuning setup using PEFT and LoRA, but training would still have taken around 57 hours. We attempted to use Intel's AI resources, but restricted internet access prevented us from connecting. Inconsistent internet speeds (1 Mbps or slower) further hampered progress, adding frustration but also testing our resilience.
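For readers unfamiliar with why PEFT and LoRA help here: instead of updating a full d x d weight matrix, LoRA trains two small low-rank factors of shapes d x r and r x d. A quick back-of-the-envelope calculation shows the savings (the hidden size and rank below are typical illustrative values, not our actual configuration):

```python
def full_params(d: int) -> int:
    # Dense fine-tuning: every entry of the d x d weight update trains.
    return d * d

def lora_params(d: int, r: int) -> int:
    # LoRA: only the two low-rank adapters A (d x r) and B (r x d) train.
    return 2 * d * r

d, r = 4096, 8                     # illustrative hidden size and LoRA rank
ratio = lora_params(d, r) / full_params(d)
# For these values the adapters are under 0.4% of the dense parameter count.
```

Even with that reduction, our hardware budget kept the full run out of reach within the hackathon window, since the savings apply to trainable parameters and optimizer state, while every forward and backward pass still runs through the full base model.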

Accomplishments

Despite these challenges, we're proud of creating a functional MVP within 24 hours. From idea formation to building core features, we made solid progress across backend, frontend, and AI, and even with limited resources we delivered a working prototype with our main functionality.

What We Learned

The power of teamwork was invaluable. With four team members (one UI/UX designer, two frontend developers, and one backend/AI developer), we tackled problems collectively. Every challenge, from ideation to execution, was resolved faster and more creatively through collaboration.

What’s Next for MediSwift

Our next steps are to refine the MVP and integrate a highly accurate LLM fine-tuned with extensive health data on as many diseases as possible. Within three months, we plan to release a polished prototype and conduct real-world testing. Given the progress we made in just 24 hours, we’re confident we can make substantial strides toward transforming healthcare accessibility.

Built With

  • Next.js
  • Django
  • Hugging Face Transformers
  • MLflow
  • Kaggle datasets
