Inspiration
Our inspiration came from observing how confusing medical information can be for everyday users, from unreadable prescription labels to uncertainty about what symptoms mean. We realized that a simple AI assistant could make a meaningful impact.
We wanted to build something that doesn’t replace doctors, but empowers people with preliminary knowledge and awareness about their health.
What it does
MediScan AI is an all-in-one AI-powered health assistant that helps users analyze medicines, check symptoms, and access first-aid guidance instantly. It leverages Gemini AI’s multimodal capabilities to interpret both text and images, making healthcare more accessible, accurate, and user-friendly.
Users can:
• Scan medicines to identify their name, dosage, uses, and side effects.
• Check symptoms to understand possible causes and recommended next steps.
• Access step-by-step first-aid instructions for common emergencies.
MediScan AI serves as a 24/7 digital health companion, bridging the gap between medical awareness and accessibility.
How we built it
We built MediScan AI using:
• Frontend: Vite, React, and Tailwind CSS for a fast, responsive interface.
• AI Backend: Gemini API for both text and image analysis.
• OCR Module: extracts and interprets medicine text from uploaded images.
• Data Handling: structured JSON output for consistent and clear responses.
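Since Gemini often wraps structured answers in a markdown code fence, the structured-JSON handling can be sketched roughly like this (the field names are illustrative, not MediScan AI's exact schema):

```javascript
// Strip an optional markdown code fence from the model's reply, then
// parse and validate it. Field names are illustrative, not the app's
// exact schema.
function parseMedicineResponse(rawText) {
  const cleaned = rawText
    .replace(/^`{3}(?:json)?\s*/i, "")
    .replace(/\s*`{3}\s*$/, "")
    .trim();
  const data = JSON.parse(cleaned);
  for (const field of ["name", "dosage", "uses", "sideEffects"]) {
    if (!(field in data)) {
      throw new Error(`Missing expected field: ${field}`);
    }
  }
  return data;
}
```

Validating up front keeps the UI code simple: every screen can assume a complete object rather than re-checking each field.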
The workflow:
1. The user uploads a medicine image or describes symptoms.
2. Gemini processes the input through its text and vision models.
3. The system returns structured insights, including medicine details, symptom analysis, or first-aid steps.
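Step 2 of the workflow can be sketched as a request to Gemini's `generateContent` REST endpoint. The model name, prompt wording, and payload layout below are illustrative assumptions, not the project's exact code:

```javascript
// Build the JSON body for Gemini's generateContent REST endpoint:
// one text part (the prompt) plus one inline base64-encoded image part.
function buildGeminiRequest(prompt, imageBuffer, mimeType) {
  return {
    contents: [
      {
        parts: [
          { text: prompt },
          {
            inline_data: {
              mime_type: mimeType,
              data: imageBuffer.toString("base64"),
            },
          },
        ],
      },
    ],
  };
}

// Usage sketch (needs a real API key, so it is not executed here):
// fetch(
//   "https://generativelanguage.googleapis.com/v1beta/models/" +
//     "gemini-1.5-flash:generateContent?key=" + API_KEY,
//   {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(
//       buildGeminiRequest("Identify this medicine.", imageBuffer, "image/jpeg")
//     ),
//   }
// );
```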
Challenges we ran into
• Ensuring OCR accuracy with low-quality or blurry images.
• Managing response latency during image analysis.
• Handling limited access to verified medical databases.
• Balancing helpful AI responses with ethical boundaries in healthcare.
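For the OCR-accuracy challenge, one lightweight mitigation is a client-side quality gate that rejects obviously unusable uploads before they reach the model. The thresholds below are illustrative assumptions, not values taken from MediScan AI:

```javascript
// Reject uploads that are too small or too compressed to OCR reliably.
// The dimension and size thresholds are illustrative, not tuned values.
function checkUploadQuality({ width, height, sizeBytes }) {
  if (width < 480 || height < 480) {
    return {
      ok: false,
      reason: "Resolution too low; retake the photo closer to the label.",
    };
  }
  if (sizeBytes < 10_000) {
    return {
      ok: false,
      reason: "File looks over-compressed; upload a clearer photo.",
    };
  }
  return { ok: true };
}
```

A check like this also helps with the latency challenge, since hopeless images never incur a round trip to the API.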
Accomplishments that we're proud of
• Developed a fully functional multimodal AI healthcare assistant within hackathon constraints.
• Integrated Gemini’s text and vision capabilities effectively.
• Designed a clean, accessible, and user-friendly interface.
• Built a scalable architecture ready for future healthcare integrations.
What we learned
• How to integrate text and image AI models into a single application.
• How to handle OCR and structured data parsing efficiently.
• The importance of designing intuitive and clear health-related user interfaces.
• The ethical considerations and limitations of using AI in healthcare contexts.
What's next for MediScan AI
We plan to expand MediScan AI into a complete AI healthcare ecosystem with:
• Voice assistant capabilities for hands-free use.
• Health reminders and medication tracking.
• User health history for personalized insights.
• Integration with wearable devices for real-time data analysis.
• Multilingual support for broader accessibility.
• A verified “Doctor Connect” feature for professional consultations.
MediScan AI aims to make smart, safe, and accessible healthcare available to everyone.