Inspiration

The inspiration for AURA WELLNESS stems from the critical, often overlooked crisis of burnout among healthcare professionals. Doctors, nurses, and paramedics operate in high-pressure environments where chronic stress is the norm, yet they often lack personalized, accessible mental health support. We wanted to move beyond generic "meditation apps" to create a system that truly understands the neuro-behavioral patterns of medical workers—a "Cognitive Digital Twin" that doesn't just track data, but predicts emotional fluctuations and intervenes before burnout occurs.

What it does

AURA WELLNESS is a comprehensive, AI-driven mental health platform.

  • Cognitive Digital Twin: analyzes user logs (stress, sleep, events) to build a psychological profile, predicting future stress levels and identifying personality traits.
  • Burnout Prediction: uses a simulated Explainable AI (XAI) model to calculate a burnout risk score from work hours, sleep, and heart rate, breaking down exactly why the user is at risk.
  • Multimodal AI Assistance: a medical symptom analyzer (vision), a document scanner/translator, and a "Calm Space" art generator that creates soothing images on demand.
  • Active Interventions: an AI-powered, multilingual "Aura Buddy" chatbot, generated bedtime stories, and cognitive reframing tools that actively help users manage their state of mind.
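The burnout risk score described above can be sketched as a weighted scoring function with an explainable factor breakdown. The weights, normalization ranges, and field names here are illustrative assumptions, not the app's actual model:

```typescript
// Illustrative burnout risk sketch; weights and thresholds are assumptions.
interface WellnessInputs {
  workHoursPerWeek: number; // e.g. 60
  avgSleepHours: number;    // e.g. 5.5
  restingHeartRate: number; // bpm, e.g. 78
}

interface RiskFactor {
  name: string;
  contribution: number; // 0–100 share of the total score
}

interface BurnoutAssessment {
  score: number;         // 0 (low risk) – 100 (high risk)
  factors: RiskFactor[]; // explainable breakdown of "why"
}

function assessBurnout(input: WellnessInputs): BurnoutAssessment {
  // Normalize each signal to 0–1, where 1 means "worst case".
  const clamp = (x: number) => Math.min(Math.max(x, 0), 1);
  const overwork = clamp((input.workHoursPerWeek - 40) / 40);
  const sleepDebt = clamp((8 - input.avgSleepHours) / 4);
  const hrStrain = clamp((input.restingHeartRate - 60) / 40);

  // Assumed weights; a trained model would learn these from data.
  const weighted = [
    { name: "Work hours", value: overwork * 0.4 },
    { name: "Sleep deficit", value: sleepDebt * 0.4 },
    { name: "Resting heart rate", value: hrStrain * 0.2 },
  ];

  const score = Math.round(weighted.reduce((s, f) => s + f.value, 0) * 100);
  return {
    score,
    factors: weighted.map((f) => ({
      name: f.name,
      contribution: Math.round(f.value * 100),
    })),
  };
}
```

Keeping the per-factor contributions alongside the score is what makes the assessment explainable: the UI can show "Sleep deficit: 40" instead of a bare number.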

How we built it

We built AURA WELLNESS using React and TypeScript for a responsive, glassmorphic frontend, powered by the Google Gemini API (@google/genai) as the core intelligence engine.

  • Gemini 2.5 Flash handles the heavy lifting for real-time chat, logic simulation, and multilingual support.
  • Gemini 3 Pro Preview handles high-reasoning tasks like analyzing medical images for symptom assessment.
  • Gemini 2.5 Flash Image powers the generative art features for the "Calm Space."
  • Gemini TTS brings stories and chat responses to life with natural-sounding speech.

We used Recharts for visualizing complex wellness data and Tailwind CSS for the soothing, immersive UI.
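Routing each feature to the right model can be centralized in one map, which keeps model choices out of the feature code. The exact model ID strings below are assumptions inferred from the model names above, not verified identifiers; `buildRequest` mirrors the `{ model, contents }` request shape used by the `@google/genai` SDK:

```typescript
// Sketch of per-feature model routing; model ID strings are assumptions.
type AuraTask = "chat" | "symptomAnalysis" | "calmSpaceArt" | "tts";

const MODEL_FOR_TASK: Record<AuraTask, string> = {
  chat: "gemini-2.5-flash",               // real-time chat, logic, translation
  symptomAnalysis: "gemini-3-pro-preview", // high-reasoning image analysis
  calmSpaceArt: "gemini-2.5-flash-image",  // generative art for "Calm Space"
  tts: "gemini-2.5-flash-preview-tts",     // speech synthesis (assumed ID)
};

// Build the request payload a Gemini call would use for a given task.
function buildRequest(task: AuraTask, prompt: string) {
  return { model: MODEL_FOR_TASK[task], contents: prompt };
}
```

Swapping a model for a whole feature then becomes a one-line change in `MODEL_FOR_TASK`.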

Challenges we ran into

  • Prompt engineering for empathy and safety: ensuring the AI acts as a supportive "Cognitive Twin" without sounding robotic, while strictly adhering to safety protocols during crisis situations, required extensive tuning of system instructions.
  • Simulating predictive models: since we didn't have a backend with years of historical data, we had to creatively use Gemini to simulate a Random Forest classifier for the burnout predictor, feeding it logic to generate realistic, explainable risk assessments based on user inputs.
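The classifier simulation described above boils down to a carefully structured prompt: the model is told which classifier to emulate and which JSON shape to return. The wording and schema below are illustrative assumptions, not the prompt we shipped:

```typescript
// Sketch of prompting Gemini to emulate an explainable classifier.
// The prompt text and JSON schema are illustrative assumptions.
interface DailyLog {
  workHours: number;
  sleepHours: number;
  heartRate: number;
}

function buildClassifierPrompt(log: DailyLog): string {
  return [
    "You are simulating a Random Forest burnout classifier for a healthcare worker.",
    "Given the inputs below, respond with ONLY a JSON object of the form:",
    '{ "riskScore": <0-100>, "topFactors": [{ "feature": string, "impact": string }] }',
    "Explain each factor's impact in one short sentence.",
    "",
    `Work hours today: ${log.workHours}`,
    `Sleep last night: ${log.sleepHours} hours`,
    `Average heart rate: ${log.heartRate} bpm`,
  ].join("\n");
}
```

Constraining the reply to a fixed JSON schema is what makes the simulated model's output parseable and "explainable" on the frontend.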

Accomplishments that we're proud of

We are incredibly proud of the "Cognitive Twin" architecture. The app feels alive; it doesn't just store data, it interprets it to give users insights into their own personality and emotional forecasts. We're also proud of the deep integration of Multimodal AI. Features like the "Symptom Analyzer" and "Scan & Translate" seamlessly blend computer vision with healthcare utility, making the app useful for both the user's mental state and their professional workflow.

What we learned

We learned that Multimodal LLMs are reasoning engines, not just text generators. We were able to offload complex logic—like calculating burnout risk factors or interpreting dream symbolism—directly to the model, drastically simplifying our code. We also learned the importance of Contextual Grounding; by feeding the AI the user's recent stress logs and sleep data, the chat experience transformed from a generic conversation into a highly personalized therapy-like session.
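Contextual grounding as described above amounts to folding the user's recent logs into the system instruction before each chat turn. The field names and wording here are illustrative assumptions:

```typescript
// Sketch of contextual grounding: recent logs become part of the
// system instruction. Field names and phrasing are assumptions.
interface StressLog {
  date: string;
  stressLevel: number; // 1–10 self-report
  sleepHours: number;
  note?: string;
}

function buildGroundedInstruction(logs: StressLog[]): string {
  const recent = logs
    .slice(-3) // last three entries keep the prompt compact
    .map((l) =>
      `- ${l.date}: stress ${l.stressLevel}/10, slept ${l.sleepHours}h` +
      (l.note ? ` ("${l.note}")` : "")
    )
    .join("\n");
  return [
    "You are Aura Buddy, a supportive companion for healthcare workers.",
    "Ground every reply in the user's recent logs:",
    recent,
    "Acknowledge trends gently; never diagnose.",
  ].join("\n");
}
```

Because the instruction is rebuilt from current logs on every session, the same chat code yields a conversation that tracks the user's actual week.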

What's next for Aura Wellness

The next step is hardware integration. We plan to connect AURA to wearable devices (smartwatches) to pull real-time heart rate and sleep data, replacing manual entry with live biofeedback. We also aim to expand the "Live Meter" into a fully voice-interactive therapy session using the Gemini Live API, allowing users to talk through their stress while driving home from a shift. Finally, we want to build out the Federated Learning aspect, allowing the AI to learn from a user's specific patterns directly on their device without compromising privacy.

Built With

  • geolocation-api
  • google-gemini-api-(@google/genai)
  • html5
  • local-storage
  • lucide-react
  • mediadevices-api
  • notification-api
  • react
  • recharts
  • tailwind-css
  • typescript
  • web-audio-api