Inspiration

What it does

ClariWeaveAI combines the empathy of a counselor with the organization of an executive assistant. It uses a Live Video/Audio Uplink to detect stressors (like a messy desk or an anxious tone) and coordinates a mesh of specialized agents (The Weaver, The Analyst, and others) to guide you toward focus through a reactive holographic interface.

How we built it

  • Core: Gemini 2.0 Flash (Native Audio) via Google ADK.
  • Backend: FastAPI (Python) hosted on Google Cloud Run.
  • Frontend: React/TypeScript with Framer Motion for the reactive hologram.
  • Special Feature: A custom Mind Mesh visualizer showing agentive collaboration in real-time.
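To illustrate the Mind Mesh idea, here is a minimal sketch of the kind of data model the visualizer could render: agents as nodes, inter-agent messages as edges. The agent names and types are assumptions for illustration, not the project's actual code.

```typescript
// Hypothetical data model behind a Mind Mesh visualizer: each message
// exchanged between agents becomes traffic on an edge the UI can animate.
type AgentId = "weaver" | "analyst" | "coach";

interface MeshMessage {
  from: AgentId;
  to: AgentId;
  summary: string;
  timestamp: number;
}

class MindMesh {
  private messages: MeshMessage[] = [];

  record(msg: MeshMessage): void {
    this.messages.push(msg);
  }

  // Edges keyed as "from->to" with message counts, e.g. for link thickness.
  edgeCounts(): Map<string, number> {
    const counts = new Map<string, number>();
    for (const m of this.messages) {
      const key = `${m.from}->${m.to}`;
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
    return counts;
  }
}
```

A render loop would poll `edgeCounts()` each frame and animate the busiest links.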

Challenges we ran into

Aligning the browser's Float32 PCM audio pipeline with the Live API's strict 16 kHz Int16 requirement, and synchronizing REST-based media analysis with real-time WebSocket streams.
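A minimal sketch of that alignment step, under assumed parameters: Web Audio hands the app Float32 samples in [-1, 1] at the capture rate (often 48 kHz), which must become 16-bit PCM at 16 kHz before streaming.

```typescript
// Convert Float32 audio samples to 16-bit PCM, naively decimating from the
// capture rate down to the Live API's 16 kHz target.
function toInt16Pcm(
  input: Float32Array,
  inputRate: number,
  targetRate = 16000,
): Int16Array {
  const ratio = inputRate / targetRate;
  const outLength = Math.floor(input.length / ratio);
  const out = new Int16Array(outLength);
  for (let i = 0; i < outLength; i++) {
    // Nearest-sample decimation; a production pipeline would low-pass first
    // to avoid aliasing.
    const sample = input[Math.floor(i * ratio)];
    const clamped = Math.max(-1, Math.min(1, sample));
    // Scale asymmetrically: Int16 spans -32768..32767.
    out[i] = clamped < 0 ? clamped * 0x8000 : clamped * 0x7fff;
  }
  return out;
}
```

The resulting `Int16Array` buffer can then be chunked onto the WebSocket to the backend.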

Accomplishments that we're proud of

Implementing a non-verbal emotional feedback loop via the hologram and the transparent "Mind Mesh" neural visualizer.
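The feedback loop can be pictured as a pure mapping from a detected stress score to the hologram's visual parameters. The score range, hue endpoints, and pulse rates below are illustrative assumptions, not the project's tuned values.

```typescript
// Hypothetical mapping for the non-verbal feedback loop: a stress score in
// [0, 1] (derived from tone/scene analysis) drives hologram color and pulse.
interface HologramState {
  hue: number; // degrees: calm blue (200) toward alert red (0)
  pulseHz: number; // breathing-light pulse rate
}

function hologramFor(stress: number): HologramState {
  const s = Math.max(0, Math.min(1, stress)); // clamp defensively
  return {
    hue: 200 * (1 - s),
    pulseHz: 0.2 + 1.8 * s, // slow at rest, faster under stress
  };
}
```

A Framer Motion component could animate toward each new `HologramState` as scores stream in.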

What we learned

Gemini's multimodal power enables "Proactive Empathy": identifying hidden environmental stressors before the user even articulates them.

What's next for ClariWeaveAI

IoT integration for physical room grounding, biometric wearable connectivity, and predictive analytics of user stress levels with actionable recommendations.
