Inspiration

Our project was inspired during a hardware-focused hackathon where we explored real-time sensor-based healthcare systems. While discussing neonatal care, we recognized that many newborn complications go unnoticed due to the lack of continuous monitoring. One of our teammates shared her experience with neonatal jaundice, which highlighted how early detection and continuous observation could significantly improve outcomes. This motivated us to build an AI-assisted neonatal monitoring system that supports early risk detection and timely intervention.
What it does

The Smart Neonatal Health Monitoring System continuously monitors a baby’s motion, breathing patterns, and cry behavior using camera and audio inputs combined with AI models. It analyzes patterns in real time and provides alerts and risk indicators through a dashboard. The system assists caregivers and medical staff by offering early warning signals for possible health concerns instead of relying only on periodic manual checks.
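The dashboard's alert logic can be sketched as a simple rule-based risk score that combines the three monitored signals. The function name, field names, and thresholds below are illustrative assumptions for this sketch, not the actual system's values:

```python
# Hypothetical sketch of the dashboard's risk-indicator logic: combine
# per-signal readings into one coarse alert level, so caregivers get a
# continuous early-warning signal rather than periodic manual checks.
# Thresholds here are illustrative assumptions, not clinical values.

def risk_level(breaths_per_min: float, motion_score: float, cry_distress: float) -> str:
    """Map the monitored signals to a coarse alert level for the dashboard."""
    alerts = 0
    # Typical neonatal breathing is roughly 30-60 breaths/min; flag readings outside it.
    if not 30 <= breaths_per_min <= 60:
        alerts += 1
    # Very low motion over an observation window may warrant a check.
    if motion_score < 0.1:
        alerts += 1
    # High-distress probability from the cry-analysis audio model.
    if cry_distress > 0.8:
        alerts += 1
    return ["normal", "watch", "alert", "critical"][alerts]
```

In practice each input would come from its own detection model; the real system would also need hysteresis and debouncing so transient readings do not trigger spurious alerts.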
How we built it

We built the system using an AI-powered approach combining computer vision, audio analysis, and pattern recognition. The frontend dashboard is developed in React for real-time visualization. The backend uses Python-based AI models for motion and breathing pattern detection. We integrated video and audio capture modules and designed a modular architecture so multiple detection models can run together and send results to a unified monitoring dashboard.
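The modular architecture described above can be sketched as a small detector registry: each detection model is a callable that returns its latest reading, and one aggregator builds the payload the dashboard polls. The names (`register_detector`, `snapshot`) and the stub detectors are assumptions for illustration, not the actual codebase:

```python
# Minimal sketch of a plug-in detector architecture: multiple AI models
# run independently and a single aggregator merges their latest outputs
# into one payload for the monitoring dashboard.
from typing import Callable, Dict

_detectors: Dict[str, Callable[[], dict]] = {}

def register_detector(name: str, fn: Callable[[], dict]) -> None:
    """Plug in a detection model (motion, breathing, cry, ...)."""
    _detectors[name] = fn

def snapshot() -> dict:
    """Collect every registered detector's latest output into one payload."""
    return {name: fn() for name, fn in _detectors.items()}

# Stub detectors standing in for the real vision and audio models.
register_detector("breathing", lambda: {"breaths_per_min": 42.0})
register_detector("motion", lambda: {"motion_score": 0.37})
```

In the real system a FastAPI endpoint could serve `snapshot()` over REST so the React dashboard can poll it; new indicators are added by registering another detector rather than changing the aggregator.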
Challenges we ran into

We faced challenges in extracting stable breathing and motion signals from video data, handling noise in audio-based cry detection, and synchronizing multiple AI outputs into a single dashboard view. Another challenge was designing a system that is accurate yet lightweight enough to run in near real time for a hackathon prototype.
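One common way to tackle the first challenge (a stable breathing/motion signal from video) is mean absolute frame differencing followed by smoothing. This is a simplified NumPy sketch under that assumption; the actual pipeline may use OpenCV and a chest-region crop instead of whole frames:

```python
# Sketch: derive a per-frame motion magnitude from a grayscale video stack,
# then smooth it with a moving average to suppress pixel noise before any
# breathing-rate estimation (e.g. peak counting).
import numpy as np

def motion_signal(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale stack -> per-frame motion magnitude, shape (T-1,)."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))

def smooth(signal: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving average to stabilise the raw signal (window must be <= len(signal))."""
    kernel = np.ones(window, dtype=np.float32) / window
    return np.convolve(signal, kernel, mode="same")
```

Periodic chest motion then shows up as oscillation in the smoothed signal, which a peak counter can turn into a breaths-per-minute estimate given the camera's frame rate.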
Accomplishments that we’re proud of

We successfully developed a working UI dashboard and implemented a functional breathing and motion pattern detection module. We created a multi-parameter monitoring concept instead of a single-signal system, which makes the solution more practical. We also designed a scalable architecture that can support additional neonatal health indicators in the future.
What we learned

We learned how to combine AI models, sensor-style inputs, and dashboards into a healthcare-focused monitoring solution. The project strengthened our skills in computer vision, audio processing, real-time data handling, and system design for medical use cases. We also learned how to translate a real-world healthcare problem into a technical AI solution.
What’s next for Smart Neonatal Health Monitoring System

Next, we plan to improve model accuracy, add infection risk prediction using AI pattern analysis, integrate cloud-based reporting, and include multilingual alert summaries for caregivers. We also aim to connect wearable sensors and hospital devices to make the system more reliable and deployment-ready for real neonatal care environments.
Built With
We used the following technologies in our project:

- Languages: Python, JavaScript, TypeScript
- Frontend: React.js, modern UI component libraries
- Backend: FastAPI (Python), REST APIs
- AI/ML & CV: OpenCV, TensorFlow / PyTorch (for pattern detection), audio signal processing libraries
- APIs & AI services: Google Gemini API (for intelligent analysis and report generation), Web Audio API (for audio capture & processing)
- Data processing: NumPy, pandas
- Visualization: Recharts
- Cloud & platform (planned / integrated): AWS cloud services for model hosting and data processing
- Streaming & communication: HTTP/REST, real-time sensor/video/audio