Inspiration
Millions of people lack immediate access to medical professionals for quick diagnosis or health guidance. In critical or remote situations, even a basic understanding of symptoms can save lives or prevent complications. We wanted to create a solution that bridges this gap using AI.
What it does
Med-AI is an intelligent health assistant that allows users to:
- Describe symptoms using voice input.
- Upload medical images (like scans or reports).
- Receive AI-powered suggestions for possible diagnoses.
It empowers users to take the first step toward understanding their health condition and knowing when to seek professional help.
How we built it
We built Med-AI with a React frontend and a Node.js backend, integrating an AI image-analysis module for diagnosis support. Voice input is handled with the Web Speech API, and image recognition leverages a lightweight machine-learning model trained on medical datasets. A sketch of the browser-side voice capture is shown below.
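Here is a minimal sketch of what capturing spoken symptoms with the Web Speech API might look like. The `listenForSymptoms` helper and the `/api/analyze` endpoint are illustrative names for this example, not the project's actual code; the Web Speech API is still experimental in browsers, so the sketch falls back to the `webkitSpeechRecognition` prefix where needed.

```typescript
// Minimal voice-capture sketch using the Web Speech API.
// SpeechRecognition is experimental and browser-prefixed, hence the fallback.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

function listenForSymptoms(onTranscript: (text: string) => void): void {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";         // single language for now; multilingual support is on the roadmap
  recognition.interimResults = false; // deliver only finalized phrases
  recognition.maxAlternatives = 1;

  recognition.onresult = (event: any) => {
    // Join the recognized phrases into one symptom description.
    const transcript = Array.from(event.results)
      .map((result: any) => result[0].transcript)
      .join(" ");
    onTranscript(transcript);
  };

  recognition.onerror = (event: any) => {
    console.error("Speech recognition error:", event.error);
  };

  recognition.start();
}

// Usage: forward the transcript to the backend for analysis,
// e.g. via a POST to a (hypothetical) /api/analyze endpoint.
listenForSymptoms((text) => {
  console.log("Captured symptoms:", text);
});
```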
Challenges we ran into
- Interpreting natural-language symptom descriptions that arrive in widely varying formats.
- Handling diverse image inputs and ensuring accuracy.
- Ensuring a user-friendly experience while keeping the app responsive and privacy-conscious.
What we learned
- Designing for accessibility in healthcare solutions.
- Integrating AI models responsibly into public-facing apps.
- Optimizing performance while maintaining security for sensitive inputs.
What's next
- Improving diagnostic accuracy with larger datasets.
- Adding multilingual support.
- Partnering with certified professionals to verify medical advice.
Built With
- ai
- css
- elevenlabs
- framer-motion
- gemini
- git
- groq
- llms
- machine-learning
- node.js
- router
- tailwind
- typescript