Inspiration
Most AI agent demos stop at a browser tab or a polished landing page. Our target users, however, live in QQ groups, private chats, and channels every day. We wanted Gemini-powered OpenClaw agents to work where Chinese communities actually coordinate, share files, and make decisions — not just in a sandbox.
At the same time, shipping a QQ integration that survives real traffic is harder than it looks. A simple bot is easy; a channel that can deal with flaky gateways, long replies, retries, nested context, and real operators is not. So we built OpenClaw QQ as the production bridge between OpenClaw agents and QQ.
What it does
OpenClaw QQ is a production-ready QQ channel plugin for OpenClaw built on the OneBot v11 ecosystem. It connects OpenClaw to NapCat/Lagrange-compatible QQ gateways and supports:
- private chats, group chats, and QQ guild/channel messaging
- mention-based triggering plus allow/deny controls
- recursive reply and forward parsing so the agent sees more of the real conversation
- self-healing connection management with exponential backoff
- retry queues, fast-fail handling, and model failover to reduce dropped replies
- interruption and debounce controls so new messages can preempt stale responses
- long-reply forwarding and merging for better delivery in real QQ conversations
In short: it helps an OpenClaw agent feel less like a toy demo and more like an operator that can actually survive in the wild.
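The "self-healing connection management with exponential backoff" item above can be sketched in a few lines of TypeScript. This is an illustrative sketch, not the plugin's actual implementation; the names `backoffDelay` and `connectWithBackoff` are hypothetical:

```typescript
// Delay before the nth reconnect attempt: doubles each time, capped at maxMs.
// (A production version would typically add random jitter to avoid
// reconnect storms when many clients drop at once.)
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Keep trying to open a gateway connection until it succeeds or we give up.
async function connectWithBackoff(
  open: () => Promise<void>,
  maxAttempts = 10,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await open();
      return; // connected
    } catch {
      // Wait before retrying so a flaky gateway isn't hammered.
      await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
    }
  }
  throw new Error("gateway unreachable after repeated attempts");
}
```

The cap matters as much as the doubling: without it, a gateway that stays down for a few minutes would push retry intervals out to hours.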
How we built it
We built the plugin in TypeScript as a native OpenClaw channel extension. On the transport side it speaks OneBot v11 over WebSocket, validates and normalizes events, and maps QQ conversations into OpenClaw sessions. On the interaction side, we added context injection, queue management, retry logic, and configurable safety and ops controls so developers can start with a minimal setup and then progressively turn on more advanced capabilities.
We also invested in docs because infra only matters if people can actually deploy it. The project ships with a 3-minute quickstart, a config reference, advanced docs, and a dedicated NapCat deployment guide.
Challenges we ran into
The hardest part was not just connecting QQ — it was making the channel trustworthy:
- avoiding reconnect storms and duplicate starts
- recovering cleanly from send failures and stale sockets
- preserving meaningful context from nested replies and forwarded messages
- reducing cases where logs say a reply was sent but nothing lands in QQ
- balancing simple defaults for beginners with knobs advanced operators actually need
Messaging platforms are full of edge cases, and QQ is no exception.
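One edge case from the list above, preempting stale in-flight replies when a newer message arrives, can be sketched with an `AbortController` per session. This is a hypothetical illustration of the interruption idea, not the plugin's actual code:

```typescript
// One in-flight reply per session; a newer message aborts the older one.
const inFlight = new Map<string, AbortController>();

async function handleMessage(
  sessionKey: string,
  generateReply: (signal: AbortSignal) => Promise<string>,
): Promise<string | null> {
  // Preempt any stale reply still being generated for this session.
  inFlight.get(sessionKey)?.abort();
  const controller = new AbortController();
  inFlight.set(sessionKey, controller);
  try {
    const reply = await generateReply(controller.signal);
    // If we were preempted while generating, suppress the stale reply.
    return controller.signal.aborted ? null : reply;
  } finally {
    if (inFlight.get(sessionKey) === controller) inFlight.delete(sessionKey);
  }
}
```

The payoff in a busy group chat: the agent never posts an answer to a question that the conversation has already moved past.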
Accomplishments that we're proud of
We're proud that OpenClaw QQ goes beyond a thin adapter layer. It already offers production-oriented reliability features, structured docs, progressive configuration from beginner-friendly to high-control, and an actively maintained open-source codebase with real iteration and feedback.
Most importantly, it makes Gemini and OpenClaw agents usable inside a platform that a huge number of Chinese users already rely on daily.
What we learned
We learned that agent infrastructure wins or loses on reliability, not on flashy prompts. Good defaults matter. Observability matters. Retry behavior matters. And when an agent enters a busy group chat, context handling matters a lot more than you'd expect.
What's next for OpenClaw QQ
Next, we're continuing to improve:
- multi-account and more complex deployment scenarios
- observability and debugging ergonomics
- media and browser-output delivery reliability
- support for larger, faster-moving conversations
- even smoother setup for self-hosted AI agents powered by Gemini and OpenClaw
Our bet is simple: the future of AI agents is not just smart models — it's smart models that can actually reach the chats people already use.
Project repo: GitHub
Documentation: GitHub Pages docs
Built With
- napcat
- node.js
- onebot-v11
- openclaw
- typescript
- websocket
- zod