🧠 IMitate: AI-Powered Medical Education Assistant

Inspiration

This project was inspired by the challenges medical students face when transitioning from textbook learning to real clinical environments.
The lack of accessible, realistic patient simulations motivated us to build a tool that mimics human-like patient responses using AI,
making early medical education more interactive and immersive.

What it does

Frontend: Built with React and Tailwind CSS, with UI components from shadcn/ui.
Voice interactions are supported using React Speech Recognition.

Backend: Powered by FastAPI, handling all API requests, patient record retrieval, and LLM prompt formatting.

Database: Uses SQLite to store sample patient data including symptoms, personalities, and medical histories.

LLM Integration: Sends reformatted clinical prompts to Google Gemini API to simulate patient behavior in real-time.

How we built it

  • Created a FastAPI backend with endpoints for loading patients and communicating with the Gemini API.
  • Designed a patient database schema and seeded it with realistic sample data in SQLite.
  • Engineered context-aware prompts that help the LLM act as a believable virtual patient.
  • Built a responsive chat interface in React using Tailwind CSS and shadcn/ui components.
  • Added voice recognition using React Speech Recognition for immersive interaction.
  • Handled state syncing between user input, loading indicators, voice toggle, and bot responses.

Challenges we ran into

  • Patient Context Formatting: Ensuring the LLM stayed in character required fine-tuned prompt engineering, especially when users asked unpredictable questions.
  • Frontend-Backend Sync: Dealing with async requests and mapping database rows to readable formats (objects vs arrays) was tricky.
  • Voice Input Handling: Integrating live voice-to-text recognition with chat message flow presented timing and UX challenges.

Accomplishments that we're proud of

  • Built an end-to-end system that mimics real clinical interviews using a modern AI stack.
  • Created a flexible chat interface that supports both text and voice input.
  • Successfully formatted prompt templates that keep the LLM in role as a patient.
  • Delivered a seamless, real-time interaction between the frontend, backend, and Gemini API.

What we learned

  • How to structure prompts for large language models to maintain character consistency (e.g. acting like a specific patient).
  • How to manage and format context-sensitive data to guide AI interactions.
  • Building stateful chat experiences in React and syncing them with asynchronous API responses.
  • Managing cross-origin communication between frontend and backend (CORS), and working with SQLite in Python.

What's next for IMitate

  • Add support for case scoring and feedback to help users assess their diagnostic reasoning.
  • Create a library of patient templates categorized by system (e.g., cardio, neuro, GI).
  • Allow educators to author cases and assign them to students via a dashboard.
  • Improve the LLM prompt system to support follow-up questions and branching paths.
  • Implement user login and session tracking for long-term skill progression.

Built With

  • fastapi
  • gemini
  • nextjs
  • supabase
  • tailwind
Share this project:

Updates