About the Project
Inspiration
The idea for CryFlow came from a very personal experience.
While helping take care of my nephew, I realized that no two babies cry for the same reasons in the same way.
Even with the same baby, what works one night may not work the next. Caregivers often rely on memory, intuition, and trial and error to figure out whether a baby is hungry, uncomfortable, or simply in need of emotional comfort.
This made us question:
Why do most AI systems respond the same way every time, when caregiving itself is inherently adaptive?
We wanted to build an agent that remembers what worked before, adapts to each baby’s patterns, and improves its behavior over time — just like a real caregiver does.
What We Built
CryFlow is an autonomous, continually learning care agent.
Instead of reacting with fixed rules, the agent:
- Observes contextual signals (e.g. time of day, recent care events)
- Infers the likely cause of distress using a belief state
- Takes action autonomously
- Learns from outcomes to improve future decisions
For example, the agent may infer that night-time crying is due to hunger with 60% probability, select feeding as the best action, and increase its confidence in that belief if the outcome is successful.
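The belief-update idea above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual CryFlow code: the cause names, action mapping, and the learning-rate value are assumptions chosen to mirror the 60%-hunger example.

```python
# Hypothetical sketch of the belief state described above.
# Prior beliefs about why the baby is crying, given night-time context.
belief = {"hunger": 0.6, "discomfort": 0.25, "emotional_comfort": 0.15}

# Assumed mapping from inferred cause to care action.
ACTIONS = {"hunger": "feed", "discomfort": "change_diaper",
           "emotional_comfort": "soothe"}

def best_action(belief):
    """Pick the most probable cause and its matching action."""
    cause = max(belief, key=belief.get)
    return cause, ACTIONS[cause]

def update_belief(belief, cause, success, lr=0.1):
    """Nudge confidence in `cause` up on success, down on failure,
    then renormalise so the probabilities still sum to 1."""
    belief = dict(belief)
    belief[cause] = max(belief[cause] + (lr if success else -lr), 0.01)
    total = sum(belief.values())
    return {k: v / total for k, v in belief.items()}

cause, action = best_action(belief)          # → ("hunger", "feed")
belief = update_belief(belief, cause, success=True)
# After a successful feed, the hunger belief rises above its prior of 0.6.
```

A successful outcome raises the hunger probability slightly while keeping the distribution normalised, which matches the "increase its confidence" behaviour described above.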
The core focus of this project is agent behavior and learning, not UI.
How We Built It
CryFlow is implemented as an agent-first backend system with an explicit learning loop:
Observe → Interpret → Decide → Act → Observe Outcome → Learn
Key design choices:
- A belief state that represents probabilistic causes of crying (e.g. hunger, emotional comfort, discomfort)
- A persistent memory that tracks action attempts and success rates
- Decision-making that combines what is likely (belief) with what worked before (experience)
- Outcome-driven updates that allow the agent to improve over repeated runs without retraining models or modifying prompts
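The decide-and-learn steps in the list above can be sketched as follows. Function names, the success-rate smoothing, and the scoring rule (belief probability multiplied by historical success rate) are assumptions for illustration, not the project's actual implementation.

```python
# Illustrative sketch of the Decide → Act → Learn steps.
# Persistent per-action statistics, e.g. loaded from a JSON memory file.
memory = {
    "feed":   {"attempts": 10, "successes": 7},
    "soothe": {"attempts": 5,  "successes": 2},
}

def success_rate(stats, prior=0.5):
    """Smoothed historical success rate (Laplace-style prior)."""
    return (stats["successes"] + prior) / (stats["attempts"] + 1)

def decide(belief, action_for_cause, memory):
    """Score each candidate action as: what is likely (belief)
    × what worked before (experience)."""
    scores = {
        action_for_cause[cause]: p * success_rate(memory[action_for_cause[cause]])
        for cause, p in belief.items()
    }
    return max(scores, key=scores.get)

def learn(memory, action, success):
    """Outcome-driven update: record the attempt; no model retraining."""
    memory[action]["attempts"] += 1
    memory[action]["successes"] += int(success)

belief = {"hunger": 0.6, "emotional_comfort": 0.4}
action_for_cause = {"hunger": "feed", "emotional_comfort": "soothe"}
chosen = decide(belief, action_for_cause, memory)   # → "feed"
learn(memory, chosen, success=True)
```

Because `decide` reads only the belief state and the statistics in memory, every choice is auditable: the score of each candidate action can be printed and explained.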
The system is intentionally lightweight and explainable, making the agent’s internal reasoning visible and auditable.
Challenges We Faced
Defining “learning” without model retraining
We focused on continual learning through memory, statistics, and belief updates rather than online fine-tuning.

Separating persona from intelligence
Early prototypes emphasized voice and UI. We learned that true intelligence comes from behavior change over time, not presentation.

Designing memory that reflects experience, not logs
Instead of storing raw data, we structured memory around beliefs and outcomes to make learning explicit.
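One hypothetical shape for such a memory file, sketched here with Python's standard `json` module (the field names are assumptions, not the project's actual schema): it stores beliefs and outcome statistics rather than raw event logs, and survives across runs.

```python
import json, os, tempfile

# Assumed structure: beliefs and outcomes, not raw event logs.
memory = {
    "beliefs": {"hunger": 0.6, "discomfort": 0.25, "emotional_comfort": 0.15},
    "outcomes": {"feed": {"attempts": 11, "successes": 8}},
}

path = os.path.join(tempfile.gettempdir(), "cryflow_memory.json")

# Persist at the end of a run...
with open(path, "w") as f:
    json.dump(memory, f, indent=2)

# ...and restore at the start of the next, so learning accumulates
# across runs without retraining any model.
with open(path) as f:
    restored = json.load(f)
```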
What We Learned
- Continual learning does not require complex models — it requires clear feedback loops
- Agents feel “alive” when their behavior changes across interactions
- Owning the agent architecture is more important than relying on black-box tools
CryFlow reinforced our belief that the future of AI assistants lies in adaptive systems that learn from experience, not static responses.
Built With
- Python — core agent implementation
- JSON — persistent memory and belief state
- Composio — agent action execution layer (stubbed for hackathon)
- You.com API — external grounding for contextual reasoning (stubbed)
- Akash Network — target deployment platform for long-running autonomous agents