💡 Inspiration
In many rural and underserved regions, access to timely, affordable, and continuous healthcare remains a serious challenge. We envisioned a platform that integrates AI, IoT, and immersive tech to deliver personalized, predictive, and human-centered care: from symptom checking and remote diagnosis to continuous health monitoring, mental health support, and physical recovery.
Inspired by UN Sustainable Development Goal 3 (Good Health and Well-Being), SmartCare+ aims to bridge the healthcare divide using a full-stack, modular approach built on privacy-first, scalable technology.
⚙️ What it does
SmartCare+ is an end-to-end virtual health ecosystem designed to provide intelligent, personalized care through five core modules:
AI Symptom Checker (Mixtral-8x7B LLM): Users enter symptoms by voice or text (with multilingual support); Mixtral analyzes the input and returns possible diagnoses, severity ratings, and care suggestions.
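A minimal sketch of how the symptom checker could call a locally hosted Mixtral model through Ollama's HTTP API (assumed to run on the default port 11434). The prompt wording and model tag here are illustrative, not the project's exact ones:

```javascript
// Build a triage-style prompt for the local LLM. The template below is an
// illustrative assumption, not SmartCare+'s actual prompt.
function buildSymptomPrompt(symptoms, language = "en") {
  return [
    "You are a cautious medical triage assistant.",
    `The user reports (language: ${language}): ${symptoms}`,
    "List possible conditions, a severity rating (low/medium/high),",
    "and self-care or escalation advice. Remind the user to consult a doctor.",
  ].join("\n");
}

// Query Ollama's /api/generate endpoint (non-streaming) and return the text.
async function checkSymptoms(symptoms) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mixtral:8x7b",
      prompt: buildSymptomPrompt(symptoms),
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response; // Ollama puts the completion in `response`
}
```

Keeping the prompt builder separate from the network call makes the medically sensitive part (the wording of the instructions) easy to review and test in isolation.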
Telemedicine Portal: Secure video consultations, smart appointment scheduling, encrypted chat, auto-generated visit summaries, and doctor matching by location via the Maps API.
IoT-Based Health Monitoring Dashboard: Real-time vitals tracking (heart rate, SpO₂, temperature) using an ESP32 + MAX30100, streamed via the ThingSpeak API and visualized on a unified dashboard with alert thresholds and trend predictions.
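The dashboard side of this pipeline could look like the sketch below: poll the most recent ThingSpeak feed entry and flag readings outside safe ranges. The channel-to-field mapping and the threshold values are assumptions for illustration, not clinical limits:

```javascript
// Placeholder thresholds; real limits would be set with clinical guidance.
const THRESHOLDS = {
  heartRate: { min: 50, max: 120 },  // bpm
  spo2:      { min: 92, max: 100 },  // %
  tempC:     { min: 35, max: 38.5 }, // °C
};

// Return a list of human-readable alerts for any out-of-range vital.
function checkVitals({ heartRate, spo2, tempC }) {
  const alerts = [];
  for (const [name, value] of Object.entries({ heartRate, spo2, tempC })) {
    const { min, max } = THRESHOLDS[name];
    if (value < min || value > max) alerts.push(`${name} out of range: ${value}`);
  }
  return alerts;
}

// Fetch the latest entry from a ThingSpeak channel and run the checks.
// Assumes field1 = heart rate, field2 = SpO2, field3 = temperature.
async function pollLatest(channelId, readKey) {
  const url = `https://api.thingspeak.com/channels/${channelId}/feeds/last.json?api_key=${readKey}`;
  const feed = await (await fetch(url)).json();
  return checkVitals({
    heartRate: Number(feed.field1),
    spo2: Number(feed.field2),
    tempC: Number(feed.field3),
  });
}
```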
Mental Health Mood Tracker: Users complete a short questionnaire or opt into webcam-based facial expression analysis (via MediaPipe + Mixtral). The system generates a dynamic mood profile with key indicators (e.g., anxiety, sadness) and offers therapy resources or the option to connect with a mental health expert.
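One way the two mood signals could be fused into a single profile is a weighted score with escalation tiers. The weights and cutoffs below are placeholders for illustration, not clinically validated values:

```javascript
// Combine a questionnaire score and a facial-expression score, both assumed
// normalized to 0 (calm) .. 1 (distressed). Weights and cutoffs are
// illustrative placeholders, not clinical values.
function moodProfile(questionnaireScore, facialScore) {
  const combined = 0.6 * questionnaireScore + 0.4 * facialScore;
  let risk;
  if (combined >= 0.7) risk = "flag-for-expert";     // escalate to a professional
  else if (combined >= 0.4) risk = "offer-resources"; // suggest self-help material
  else risk = "ok";
  return { score: combined, risk };
}
```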
VR-Based Physical Therapy Module: AR/VR-guided recovery exercises with in-browser pose estimation and real-time motion correction, gamified to increase user engagement.
All modules are accessible via a single dashboard with built-in user authentication, offline-first design, and privacy-aware local processing. Email notifications are triggered for critical alerts (e.g., irregular vitals or flagged mental health risk) using SendGrid API, and emergency services or family members can be notified with current location via Maps API integration.
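The critical-alert email described above could be assembled with SendGrid's v3 Mail Send API along the lines of the sketch below. The payload shape follows SendGrid's documented format; the sender address and alert wording are placeholders:

```javascript
// Build a SendGrid v3 Mail Send payload for a critical health alert.
// Addresses and wording are illustrative placeholders.
function buildAlertEmail(to, patientName, alert) {
  return {
    personalizations: [{ to: [{ email: to }] }],
    from: { email: "alerts@smartcare.example" },
    subject: `SmartCare+ alert for ${patientName}`,
    content: [{ type: "text/plain", value: `Critical reading detected: ${alert}` }],
  };
}

// Send the alert via SendGrid's REST endpoint using an API key.
async function sendAlert(apiKey, msg) {
  await fetch("https://api.sendgrid.com/v3/mail/send", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(msg),
  });
}
```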
🛠️ How we built it
- Frontend: React.js + Tailwind CSS for responsive, mobile-friendly design
- Backend: Node.js with Express.js for modular REST API architecture
- LLM Integration: Used Mixtral-8x7B hosted locally via Ollama for symptom understanding, mood interpretation, and care recommendations
- Voice/Language Support: Google Cloud Speech-to-Text + Translate API for multilingual symptom entry
- IoT Data: ESP32 with MAX30100 sensor pushing data to ThingSpeak API every 5 seconds; data pulled into the dashboard with Axios + Firebase sync
- Mental Health: Mood tracker built using Mixtral-based scoring with sentiment + facial cues; recommendations generated in real time
- Telehealth Features: WebRTC for video calls, Socket.io for secure chat, MongoDB for logs and appointment management
- VR Rehab: WebXR and MediaPipe pose detection with gamified feedback for upper-limb therapy
- Security: JWT authentication, AES encryption for health logs
- Email & Notification Services: SendGrid for critical health alerts, password recovery, appointment reminders
- Maps API: Google Maps API to visualize nearest hospitals or therapists, share location during SOS trigger
- Hosting: Vercel (frontend), Render (backend APIs), Firebase (auth & real-time sync)
🧱 Challenges we ran into
- Achieving smooth and fast bi-directional IoT communication with ThingSpeak under low-bandwidth conditions.
- Tuning Mixtral prompts to ensure high-quality, medically coherent responses.
- Implementing real-time sentiment analysis and facial emotion detection efficiently in-browser.
- Ensuring cross-device compatibility for WebXR-based physical therapy sessions.
- Handling sensitive health data securely while preserving performance and offline usability.
🏆 Accomplishments that we're proud of
- Delivered a multi-modal healthcare assistant integrating LLM, IoT, XR, and real-time video tech into a cohesive platform.
- Enabled mental health risk detection and escalation, integrated with therapy resources and mood history logs.
- Achieved <3s latency from sensor-to-dashboard using ThingSpeak + Firebase hybrid push-pull architecture.
- Successfully ran Mixtral-8x7B inference on-device using Ollama + prompt templating for medical NLP.
- Built a VR rehab demo that's playful yet therapeutic, with real-time motion accuracy scoring.
📚 What we learned
- Leveraging LLMs like Mixtral requires thoughtful prompt engineering, especially in medical applications.
- Orchestrating a privacy-first healthcare pipeline involves not only encryption, but mindful data flow and offline-ready logic.
- Building for low-connectivity users means prioritizing sync logic, caching, and fail-safe UI states.
- A well-integrated system (vs isolated features) offers real-world value and enhances user experience.
🚀 What's next for SmartCare+
- Expand the AI engine to support chronic care prediction models for hypertension, diabetes, and cardiovascular conditions.
- Add FHIR/HL7 integration to connect with real-world hospitals and EHR systems.
- Improve the mental health tracker with daily journaling prompts and resilience-building recommendations.
- Deploy via Raspberry Pi-based smart health kits in remote clinics with NGO partnerships.
- Launch a WhatsApp-based AI bot for users without smartphones, backed by the same Mixtral logic.
- Containerize backend APIs using Docker + Kubernetes for scalable deployment across regions.
- Integrate a voice-enabled avatar assistant for elderly and low-literacy patients in regional languages.
Built With
- api
- artificial-intelligence
- cloud
- database
- deep-learning
- gemini
- llm
- machine-learning
- mongodb
- react