Inspiration

Building multi-agent systems is still slow and brittle. We wanted a visual, MCP-native way to go from idea → agent in minutes.

What it does

Hiveflow is a visual orchestrator for AI agents. On a drag-and-drop canvas you connect models, tools, and APIs with live previews. We added MCP connectors for Redis (KV/cache/pub-sub) and Minimax (LLM/voice).
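Under the hood, a canvas flow boils down to a node/edge graph. A minimal sketch of what such a definition could look like (the node types, op names, and schema here are illustrative assumptions, not Hiveflow's actual format):

```python
# Hypothetical flow definition: a cached Q&A pipeline with voice output.
flow = {
    "nodes": {
        "cache": {"type": "mcp.redis",   "op": "get",  "key": "answer:{question}"},
        "llm":   {"type": "mcp.minimax", "op": "chat"},
        "voice": {"type": "mcp.minimax", "op": "tts"},
    },
    # Edges define the DAG the scheduler walks: cache -> llm -> voice.
    "edges": [("cache", "llm"), ("llm", "voice")],
}
```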

How we built it

• Node-based canvas backed by a DAG scheduler; execution streams results node by node.
• Hiveflow MCP server so assistants (e.g., Claude/Cursor) can list, run, and monitor flows.
• Observability: traces, latency, token usage, retries.
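The scheduler above can be sketched as a topological walk over the flow graph that yields each node's output as soon as it is ready (function and node names are ours for illustration; real node handlers would call models and tools):

```python
from graphlib import TopologicalSorter

def run_flow(nodes, edges, inputs):
    """Execute a DAG of node callables, streaming (name, output) pairs."""
    deps = {name: set() for name in nodes}
    for src, dst in edges:
        deps[dst].add(src)
    outputs = dict(inputs)  # pre-seeded input values count as already done
    for name in TopologicalSorter(deps).static_order():
        if name in outputs:
            continue  # input node, already satisfied
        upstream = {d: outputs[d] for d in deps[name]}
        outputs[name] = nodes[name](upstream)
        yield name, outputs[name]  # stream node-by-node, like the live preview
```

Yielding per node rather than returning the whole result is what lets the canvas light up nodes as they finish.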

Challenges we ran into

• Tool/schema drift across function definitions.
• Rate limits & backpressure in parallel branches.
• Secret management and environment hygiene.
• Voice streaming jitter (barge-in, partials).
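For the rate-limit and backpressure problem, one common fix is to cap in-flight requests per provider with a semaphore, so parallel branches queue instead of bursting. A minimal asyncio sketch (names are ours, not Hiveflow internals):

```python
import asyncio

async def run_branches(fn, items, limit=2):
    """Fan out over parallel branches, but allow at most `limit` in flight."""
    sem = asyncio.Semaphore(limit)

    async def guarded(item):
        async with sem:  # branches beyond `limit` wait here (backpressure)
            return await fn(item)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(guarded(i) for i in items))
```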

Accomplishments that we’re proud of

• End-to-end flow triggered from an assistant via MCP.
• New MCP tools (`redis`, `minimax`) working in real demos.
• Demo-ready multi-agent workflows built in minutes.

What we learned

• MCP turns assistants into operators of flows, not just users.
• UX + observability speed iteration more than “one more model.”
• Short-lived, scoped credentials are non-negotiable.

What’s next for Hiveflow

• More MCP tools (Notion, Slack, DBs) and verticalized templates.
• Team collaboration, versioning, and hosted SaaS + SDK.
• Deeper memory via Redis patterns and built-in eval harnesses.
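The "deeper memory via Redis patterns" item maps to well-known list idioms, e.g. a sliding-window conversation memory built from LPUSH + LTRIM. A sketch against a redis-py-style client; the `FakeRedis` stand-in exists only so the example runs without a server, and the key naming is our assumption:

```python
def remember(r, session, message, window=20):
    """Append a turn and keep only the newest `window` turns (LPUSH + LTRIM)."""
    key = f"memory:{session}"
    r.lpush(key, message)
    r.ltrim(key, 0, window - 1)

def recall(r, session, window=20):
    """Newest-first slice of the session's recent turns."""
    key = f"memory:{session}"
    return list(r.lrange(key, 0, window - 1))

class FakeRedis:
    """In-memory stand-in implementing the three list ops the sketch uses."""
    def __init__(self):
        self.data = {}
    def lpush(self, key, value):
        self.data.setdefault(key, []).insert(0, value)
    def ltrim(self, key, start, stop):
        self.data[key] = self.data.get(key, [])[start:stop + 1]
    def lrange(self, key, start, stop):
        return self.data.get(key, [])[start:stop + 1]
```

Trimming on every write keeps the per-session footprint bounded, which matters once many flows share one Redis instance.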
