MAIA — Maternal AI Assistant
Inspiration
Pregnancy and postpartum recovery can be overwhelming, especially for women who do not always have immediate access to medical guidance or emotional support. Many mothers experience anxiety, confusion about symptoms, and uncertainty about when to seek professional help.
Our team wanted to create an AI assistant that acts as a supportive first layer of guidance for maternal health. The goal was not to replace doctors, but to help mothers quickly get helpful information, reassurance, and guidance when they need it.
This led us to build MAIA (Maternal AI Assistant) — an AI-powered voice and chat assistant designed to support pregnant and postpartum women with safe, empathetic responses and guidance.
What it does
MAIA is an AI Doula assistant that provides:
- Conversational maternal health support
- Voice interaction using browser speech recognition
- AI-generated responses for pregnancy and postpartum questions
- Safety-first responses that encourage professional medical help when needed
- A clean and simple interface designed for accessibility and ease of use
Users can either type their questions or speak directly to MAIA, making the interaction feel natural and supportive.
How we built it
The application is built as a modern web application with an AI-powered conversational layer.
Core components include:
- React + Vite for the frontend interface
- TailwindCSS for UI styling
- Groq LLM API for fast AI responses
- Web Speech API for voice input and speech synthesis
- Firebase Hosting for deployment
The AI assistant uses a structured system prompt designed specifically for maternal care scenarios. This prompt ensures the AI:
- avoids diagnosing medical conditions
- provides calm and supportive guidance
- recommends seeking professional help in emergencies
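The system prompt and the request body for Groq's OpenAI-compatible chat completions endpoint can be sketched roughly like this; the prompt wording, model id, and temperature below are illustrative, not our exact production values:

```javascript
// A safety-focused system prompt for maternal care scenarios (illustrative).
const MAIA_SYSTEM_PROMPT = [
  "You are MAIA, a supportive maternal health assistant.",
  "Never diagnose medical conditions or prescribe treatment.",
  "Respond in a calm, empathetic, reassuring tone.",
  "If the user describes a possible emergency (heavy bleeding, severe pain,",
  "reduced fetal movement), urge them to contact a doctor or emergency",
  "services immediately.",
].join(" ");

// Builds the JSON body for Groq's chat completions endpoint
// (https://api.groq.com/openai/v1/chat/completions).
function buildChatRequest(userMessage, history = []) {
  return {
    model: "llama-3.1-8b-instant", // illustrative model id
    messages: [
      { role: "system", content: MAIA_SYSTEM_PROMPT },
      ...history, // earlier user/assistant turns, if any
      { role: "user", content: userMessage },
    ],
    temperature: 0.4, // keep answers measured rather than creative
  };
}
```

The system message is always placed first so the safety constraints apply to every turn, regardless of conversation history.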
The app supports both text chat and voice interaction, making it easier for users to communicate naturally.
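For the voice side of the conversation, the browser's speech synthesis API reads the assistant's reply aloud. A minimal sketch, with the synthesizer and utterance constructor injected as parameters so the logic can be exercised outside a browser (in the app these would be `window.speechSynthesis` and `SpeechSynthesisUtterance`):

```javascript
// Speak an assistant reply aloud via the Web Speech synthesis API.
// `synth` and `UtteranceCtor` are injected for testability.
function speakReply(synth, UtteranceCtor, text) {
  const utterance = new UtteranceCtor(text);
  utterance.rate = 0.95; // slightly slower for a calm, clear delivery
  synth.cancel();        // stop any reply that is still being spoken
  synth.speak(utterance);
  return utterance;
}

// In the browser: speakReply(window.speechSynthesis, SpeechSynthesisUtterance, reply);
```

Cancelling before speaking prevents overlapping audio when the user sends a new message mid-reply.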
Challenges we ran into
We encountered several interesting technical challenges during development:
Voice recognition issues
Browser speech recognition APIs behave differently across environments. We had to handle microphone permissions, browser compatibility, and recognition errors.
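The compatibility and permission handling can be sketched like this. The global object is passed in as a parameter (in the app, `window`) so the branching logic is testable outside a browser; Chrome exposes the API under the `webkit` prefix, and a denied microphone permission surfaces as a `"not-allowed"` recognition error:

```javascript
// Cross-browser SpeechRecognition setup with error handling (sketch).
function createRecognizer(globalObj, { onResult, onError }) {
  const Recognition =
    globalObj.SpeechRecognition || globalObj.webkitSpeechRecognition;
  if (!Recognition) {
    onError("Speech recognition is not supported in this browser.");
    return null;
  }
  const recognizer = new Recognition();
  recognizer.continuous = false;   // one utterance per session
  recognizer.interimResults = true;
  recognizer.lang = "en-US";
  recognizer.onerror = (event) => {
    // "not-allowed" fires when the user denies microphone access.
    if (event.error === "not-allowed") {
      onError("Microphone access was denied. Please allow it and retry.");
    } else {
      onError(`Speech recognition error: ${event.error}`);
    }
  };
  recognizer.onresult = onResult;
  return recognizer;
}
```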
Preventing multiple AI requests
Speech recognition events can fire several interim results while a user is still speaking. We added logic to forward only finalized transcripts to the AI model, so a single utterance triggers a single request.
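The filtering amounts to checking each result's `isFinal` flag before forwarding anything to the model, roughly like this:

```javascript
// Extract only final transcripts from a SpeechRecognition result event,
// ignoring interim results so the AI is called once per utterance.
function finalTranscript(event) {
  let text = "";
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) {
      text += result[0].transcript;
    }
  }
  return text.trim();
}

// In the onresult handler, the model is only called for non-empty text:
//   const text = finalTranscript(event);
//   if (text) sendToModel(text);
```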
API limits and reliability
We initially explored several AI providers and had to handle rate limits, adding fallback error handling so the UI never froze while waiting for an API response.
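The pattern is a retry loop with exponential backoff plus a guaranteed fallback reply, so the chat UI always receives something to render. A sketch, where `fetchReply` stands in for the real API call:

```javascript
// Fallback shown when the model cannot be reached after retries.
const FALLBACK_REPLY =
  "I'm having trouble responding right now. If this is urgent, please " +
  "contact your doctor or local emergency services.";

// Retry a failing API call with exponential backoff (500ms, 1000ms, ...),
// returning the fallback reply instead of throwing.
async function replyWithRetry(fetchReply, message, retries = 2) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fetchReply(message);
    } catch (err) {
      if (attempt === retries) return FALLBACK_REPLY;
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
    }
  }
}
```

Because the function resolves rather than rejects on failure, the calling component never needs to handle a thrown error in the render path.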
Deployment configuration
Setting up hosting with Firebase required configuring the correct build output directory and hosting rules before deploys worked as expected.
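For reference, a typical `firebase.json` for a Vite single-page app looks like the sketch below (not necessarily our exact file): `dist` is Vite's default build output, and the rewrite rule routes every path to `index.html` so client-side routing works on refresh.

```json
{
  "hosting": {
    "public": "dist",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [{ "source": "**", "destination": "/index.html" }]
  }
}
```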
What we learned
This project helped us learn a lot about:
- Designing AI assistants with safety-focused prompts
- Integrating LLM APIs into real-time applications
- Handling voice interfaces in web applications
- Deploying modern web apps using Firebase Hosting
- Collaborating as a team in a fast-paced hackathon environment
We also learned how important responsible AI design is when building systems that interact with sensitive health topics.
What's next for MAIA
Future improvements we would like to implement include:
- Multilingual maternal support
- AI-powered symptom triage guidance
- Image-based assistance for maternal health questions
- Integration with healthcare providers
- Personalized pregnancy timeline support
Our long-term vision is to make MAIA a reliable AI companion for maternal care that can support mothers around the world.
Built With
- api
- css
- firebase
- groq
- hosting
- javascript
- llama
- llm
- react
- tailwind
- vite
- webspeech