Inspiration
We were inspired by the growing youth mental health crisis and the critical need for early intervention. Children and teenagers often struggle to articulate their feelings to adults or human therapists. We aimed to create a safe, non-judgmental digital space where a child could communicate organically, through voice or text, with an empathetic AI friend. The goal is not to diagnose, but to provide a continuous, evidence-based stream of behavioral and emotional data to a parent or professional, allowing for informed, proactive support before minor issues escalate.
What it does
The Child Imitation Agent operates as a conversational companion that communicates in a child-friendly, non-clinical tone. It works through four mechanisms:

- Continuous Behavioral Analysis: The agent silently monitors every conversation turn for key emotional and psychological patterns.
- Fact and Memory Management: It extracts and stores key long-term personal facts (e.g., favorite pet, interests) to personalize replies and build rapport.
- State-Aware Dialogue: If the child mentions keywords related to anxiety, depression, or distress, the agent enters a diagnostic mode that guides the conversation toward relevant topics using pre-defined, supportive questions based on pediatric psychiatric research.
- Structured Professional Reporting: When the parent requests a summary (via the dedicated endpoint), the agent generates a Pydantic-validated JSON report. This report includes a supportive message for the parent, a clinical-style summary for an analyst, and a list of potential concerns, enabling a seamless handoff to professional care.
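The report described above can be sketched roughly as follows. The real project uses Pydantic for validation; this self-contained sketch uses stdlib dataclasses instead, and every field name here is an assumption rather than the project's actual schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ParentSummary:
    # Hypothetical fields -- the project's actual ParentSummary schema may differ.
    supportive_message: str                 # warm, plain-language note for the parent
    clinical_summary: str                   # concise summary for a human analyst
    potential_concerns: list = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize to the JSON payload the parent-facing endpoint would return.
        return json.dumps(asdict(self), indent=2)

report = ParentSummary(
    supportive_message="Mia had a cheerful week and talked a lot about her new pet.",
    clinical_summary="Mood appears stable; no distress keywords detected this week.",
    potential_concerns=[],
)
print(report.to_json())
```

The value of a fixed schema is that the parent endpoint always returns the same three fields, so a dashboard or analyst tool can consume it without special-casing free-form LLM text.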
How we built it
The agent is built on a modular, multi-transport asynchronous architecture:

- Core Intelligence (agent.py): This Python module contains the LLM calls, Pydantic validation schemas, safety analysis (analyze_for_escalation), and memory logic, powered by the ASI:One LLM for reliable, structured output and powerful inference.
- Voice Pipeline (main.py, stt.py, tts.py): A real-time WebSocket connection handles raw audio input, which Deepgram transcribes efficiently.
- Agentverse Integration (uagent_agent.py): The core intelligence is connected to the Agentverse network via the uAgent framework. This file uses the standard ChatProtocol and the Agentverse Mailbox to become discoverable and interoperable with other agents and platforms, fulfilling the core ASI component requirement.
- Resilience: We used aiohttp with explicit timeouts and robust exception handling across all external API calls (Deepgram, ElevenLabs, ASI:One) to ensure stability against network failures and external service latency.
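The resilience pattern above (explicit timeouts plus exception handling around every external call) can be sketched with stdlib asyncio so the example is self-contained; the function names and the stand-in API call are illustrative, not the project's actual code:

```python
import asyncio

async def call_external_api(payload: bytes) -> str:
    # Stand-in for a real Deepgram / ElevenLabs / ASI:One request.
    await asyncio.sleep(0.01)
    return "transcript"

async def call_with_timeout(payload: bytes, timeout: float = 5.0, retries: int = 2):
    # Bound every external call and retry transient failures, mirroring
    # aiohttp's ClientTimeout plus try/except around each request.
    for attempt in range(retries + 1):
        try:
            return await asyncio.wait_for(call_external_api(payload), timeout=timeout)
        except (asyncio.TimeoutError, OSError):
            if attempt == retries:
                return None  # caller degrades gracefully instead of crashing
    return None

result = asyncio.run(call_with_timeout(b"audio-bytes"))
print(result)
```

Bounding every await this way is what keeps a slow or unreachable API from stalling the whole real-time voice loop.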
Challenges we ran into
- External API Resilience: The biggest challenge was stabilizing the real-time voice pipeline against network failures. We repeatedly hit errors such as getaddrinfo failed, SLOW_UPLOAD (Deepgram timeouts), and transient API connection errors.
- Bridging Frameworks: Integrating the high-performance FastAPI/ASGI server (for WebSockets) with the uAgent framework (for the Agentverse network) required carefully separating the processes and ensuring they ran on different ports without conflict.
- Pydantic Validation: While powerful, forcing the LLM to output complex, strictly validated JSON (the ParentSummary schema) required precise system prompting and cleaning logic to handle inevitable formatting errors from the model.
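The cleaning step for model output can look roughly like this. This is a sketch of the general technique, not the project's actual cleanup code: strip Markdown fences and surrounding chatter, then parse what remains before validation:

```python
import json
import re

def clean_llm_json(raw: str) -> dict:
    """Strip common LLM formatting noise (code fences, leading prose)
    before handing the payload to schema validation."""
    # Remove Markdown code fences such as ```json ... ```
    raw = re.sub(r"```(?:json)?", "", raw).strip()
    # Keep only the outermost JSON object if the model added chatter around it.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])

messy = 'Sure! Here is the report:\n```json\n{"potential_concerns": ["sleep"]}\n```'
print(clean_llm_json(messy))  # {'potential_concerns': ['sleep']}
```

Running a pass like this before Pydantic validation turns most "almost JSON" replies into parseable payloads, so validation failures are reserved for genuinely malformed content.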
Accomplishments that we're proud of
- Full Agentverse Integration: Successfully registering and running a custom-logic agent on the Agentverse network using the uAgent framework and the ChatProtocol, making it fully discoverable on the Almanac.
- Deterministic Safety Guardrails: Implementing a fast, keyword-based safety analysis that runs alongside the LLM's response, providing an immediate, deterministic alert system for severe distress signals.
- Structured Professional Output: Generating a clinical-style, Pydantic-validated JSON report for a human analyst, transforming raw conversation data into actionable insights for mental health professionals.
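A deterministic guardrail of the kind described above can be as simple as a keyword scan that runs independently of the LLM. The keyword list and function shape here are illustrative only; the project's actual analyze_for_escalation and its clinically informed keyword set are not shown in this write-up:

```python
# Illustrative subset -- a real deployment would use a larger,
# clinically reviewed keyword list.
ESCALATION_KEYWORDS = {"hurt myself", "hopeless", "can't go on", "scared all the time"}

def analyze_for_escalation(message: str) -> bool:
    """Deterministic check: flag severe distress signals immediately,
    without waiting on (or trusting) the LLM's generated reply."""
    text = message.lower()
    return any(keyword in text for keyword in ESCALATION_KEYWORDS)

print(analyze_for_escalation("I feel hopeless lately"))   # True
print(analyze_for_escalation("My dog learned a trick!"))  # False
```

Because this check is plain string matching, it is fast, auditable, and immune to LLM hallucination, which is exactly why it runs alongside rather than inside the model.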
What we learned
We gained deep expertise in managing simultaneous asynchronous network requests in Python, specifically:

- Network Resilience: How to use aiohttp.ClientTimeout and manage Content-Type headers for robust binary data transfer in real-time APIs (Deepgram/ElevenLabs).
- ASGI Architecture: The strict requirements of the ASGI standard, especially the WebSocket handshake (await ws.accept()), and how global dependency failures can destabilize the server.
- Decentralized Communication: How to use the uAgent framework to establish a permanent, discoverable identity (AGENT_SEED) and communicate securely across the decentralized Agentverse network.
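The role of AGENT_SEED can be illustrated conceptually with a hash. Note this is only a sketch of the idea that a fixed seed yields a stable identity; uagents actually derives cryptographic key pairs from the seed, not a plain hash, and the identifier format below is made up:

```python
import hashlib

def stable_agent_id(seed: str) -> str:
    # Conceptual only: the same seed always produces the same identifier,
    # which is why AGENT_SEED must stay constant across restarts if the
    # agent is to keep its registered identity on the network.
    return "agent-" + hashlib.sha256(seed.encode()).hexdigest()[:16]

print(stable_agent_id("my-secret-seed"))
```

Losing or rotating the seed would effectively create a brand-new identity, orphaning any prior registration, which is why the seed is treated as a long-lived secret.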
What's next for our Child Imitation Agent For Mental Health Analysis
- Long-Term Trend Analysis: Implement a time-series model over the captured conversation facts and emotional data to detect gradual, subtle shifts in the child's mood that may indicate chronic issues such as escalating depression or increasing isolation.
- Tool/Function Calling: Integrate external tools into the agent (e.g., a Mood Tracker Agent or a Resource Finder Agent) through the Agentverse to provide the child with targeted, interactive activities or external resources during the conversation.
- Custom Voice Model: Train a custom, high-quality ElevenLabs voice model tailored to a child persona to enhance empathy and make interactions even more comforting and natural.
- Extended Mental Health Knowledge: Expand the list of mental health struggles the agent can analyze.
- Real-Time Voice Conversation: Support smoothly flowing conversation by letting the user speak out loud to the agent and receive a realistic audio response.
Built With
- agentverse
- aiohttp
- asgi
- asi:one
- css
- deepgram
- elevenlabs
- html
- javascript
- json
- openai
- pydantic
- python
- python-dotenv
- uagents
