Inspiration
We've all been there. Sitting in a 300-person lecture, confused about something the professor just said, but too anxious to raise a hand. Or in a company all-hands where the CEO asks "any questions?" and the room goes silent — not because there are no questions, but because no one wants to be the one to ask.
This silence has real consequences. Students fall behind and don't recover until exam results make it obvious. Employees disengage because their concerns feel invisible. Community members leave townhalls feeling unheard.
We built AskSafe because we believe the best questions are the ones people are afraid to ask — and technology should make it safe to ask them.
What it does
AskSafe is a real-time anonymous Q&A platform for any one-to-many setting:
For participants:
- Join with a 6-character code, verify you're human via World ID (no personal data stored)
- Ask questions anonymously by typing or whispering (ElevenLabs voice-to-text)
- Rate your confusion with a single tap when the host checks in
- Upvote question clusters that matter to you
- Get a post-session email summary of what was covered
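As a sketch of how the 6-character join code could be generated (the alphabet, collision check, and function name are assumptions for illustration, not taken from the AskSafe source):

```python
import secrets
import string

# Exclude ambiguous characters (0/O, 1/I/L) so codes are easy to read
# aloud in a lecture hall. This alphabet is an assumption, not AskSafe's.
CODE_ALPHABET = "".join(c for c in string.ascii_uppercase + string.digits
                        if c not in "0O1IL")

def generate_join_code(existing: set[str], length: int = 6) -> str:
    """Generate a session join code, retrying on (rare) collisions."""
    while True:
        code = "".join(secrets.choice(CODE_ALPHABET) for _ in range(length))
        if code not in existing:
            return code
```

`secrets` (rather than `random`) keeps codes unguessable, so outsiders can't brute-force their way into a session.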
For hosts:
- See a live dashboard with animated confusion gauge, participant count, and question count
- View a per-slide confusion timeline that shows exactly when the room got lost
- AI clusters similar questions into topics with rich summaries — address confusion, not chaos
- Four response options: AI-drafted answer (editable), share a link/resource, flag for later, or custom response
- End-of-session report with AI insights, confusion spikes, resolution rate, and student feedback
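The per-slide confusion timeline boils down to bucketing one-tap ratings by slide interval. A minimal sketch, assuming a 1-5 rating scale and session-relative timestamps (the data model is an illustration, not AskSafe's actual schema):

```python
from bisect import bisect_right

def confusion_by_slide(slide_starts: list[float],
                       taps: list[tuple[float, int]]) -> dict[int, float]:
    """Average confusion rating per slide.

    slide_starts: timestamps (seconds into the session) at which each
    slide was shown; slide i runs from slide_starts[i] to the next entry.
    taps: (timestamp, rating) pairs from participants' one-tap check-ins.
    """
    buckets: dict[int, list[int]] = {}
    for ts, rating in taps:
        # Find the slide whose interval contains this tap.
        slide = bisect_right(slide_starts, ts) - 1
        if slide >= 0:
            buckets.setdefault(slide, []).append(rating)
    return {s: sum(r) / len(r) for s, r in buckets.items()}
```

A spike in the per-slide averages is exactly the "when the room got lost" signal the dashboard visualizes.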
For organizations:
- Works for university lectures, company townhalls, board meetings, community forums, training sessions
- Proof-of-human prevents bot spam and ensures 1-person-1-vote integrity
- Post-session analytics help improve future sessions
- AI agents available via ASI:One chat and OmegaClaw for on-the-go access
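World ID proofs expose a `nullifier_hash` that is stable for a given user and action without revealing who the user is, which is what makes 1-person-1-vote possible with no personal data on file. A sketch of a duplicate-vote guard built on that idea (the in-memory set and class name are assumptions; a real deployment would persist this):

```python
class VoteRegistry:
    """One-person-one-vote guard keyed on World ID nullifier hashes."""

    def __init__(self) -> None:
        # question_id -> set of nullifier hashes that already voted
        self._seen: dict[str, set[str]] = {}

    def record_vote(self, question_id: str, nullifier_hash: str) -> bool:
        """Return True if the vote counted, False if it is a duplicate."""
        voters = self._seen.setdefault(question_id, set())
        if nullifier_hash in voters:
            return False
        voters.add(nullifier_hash)
        return True
```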
How we built it
- Frontend: Next.js 16, Tailwind CSS, shadcn/ui, Recharts, Socket.IO client
- Backend: Python FastAPI, Socket.IO server, Motor (async MongoDB)
- AI Agents: 3 Fetch.ai agents on Agentverse with Chat Protocol — Confusion Monitor, Question Clustering, Insight Report. Same logic powers both ASI:One chat and the app backend.
- AI: Google Gemini for semantic clustering, explanations, reports, email summaries, and feedback filtering
- Identity: World ID IDKit v4 with server-side RP signing for proof-of-human verification
- Data: MongoDB Atlas with 7+ collections and aggregation pipelines
- Voice: ElevenLabs Scribe API with Web Speech API fallback
- Email: Resend for post-session student summaries
- Real-time: Socket.IO for live confusion updates, question counts, cluster broadcasts, upvotes
- OmegaClaw: Integrated via Telegram — professors can query session analytics from their phone
Challenges we ran into
- World ID v4 requires RP signatures generated server-side — built a Next.js API route using the official signing utility
- IDKit WASM binary needed manual copying to the Next.js public directory
- OmegaClaw's Docker setup required WSL, which kept crashing on Windows; we had to reinstall WSL from scratch
- Agentverse agents use the uAgents protocol (not REST) — integrated agent logic directly into the backend so the same intelligence serves both interfaces
- Making AI clustering useful required richer prompts — moved from "short labels" to detailed summaries that capture distinct sub-topics within each cluster
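To illustrate the "short labels to rich summaries" shift, here is a sketch of a clustering prompt builder. The wording and response schema are assumptions for illustration, not AskSafe's actual prompt; the string would be sent to Gemini, and the call itself is omitted:

```python
def build_clustering_prompt(questions: list[str]) -> str:
    """Build a prompt asking for rich cluster summaries, not bare labels."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions))
    return (
        "Group the student questions below into topic clusters.\n"
        "For each cluster return JSON with:\n"
        '  "title": a specific topic name (not a one-word label),\n'
        '  "summary": 2-3 sentences covering the distinct sub-questions\n'
        "             so the host can address all of them at once,\n"
        '  "question_ids": the numbers of the questions in the cluster.\n'
        f"Questions:\n{numbered}\n"
        "Respond with a JSON array only."
    )
```

Asking explicitly for sub-questions inside each cluster is what keeps distinct concerns from being flattened into one vague label.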
Accomplishments we're proud of
- A complete, working product that solves a real problem people experience every day
- Three AI agents that serve a dual purpose: ASI:One chat for discovery AND the app backend for production
- World ID Orb verification working end-to-end with real QR code scanning
- OmegaClaw successfully discovering AskSafe, querying our API, and providing its own analysis of confusion patterns
- The "Address Cluster" flow where AI suggests but the human decides — technology augmenting, not replacing, the host
- Post-session feedback loop that gives hosts actionable insights immediately, not weeks later
What we learned
- Multi-agent architectures where the same logic serves multiple interfaces (app + ASI:One + OmegaClaw)
- World ID v4 RP signing flow for proof-of-human in web applications
- Real-time Socket.IO patterns for live classroom-scale interaction
- How to make AI clustering genuinely useful (rich summaries with context, not just keyword labels)
- The importance of the "AI suggests, human decides" pattern — professors trust the tool more when they have final say
What's next for AskSafe
- Deploy frontend to production with custom domain
- Native mobile app for hosts to monitor sessions on the go
- Integration with LMS platforms (Canvas, Blackboard, Google Classroom)
- Multi-language support for international classrooms and global townhalls
- Analytics dashboard for recurring sessions — track improvement over time
- Enterprise tier for companies running regular all-hands and training sessions
Use Cases
- University lectures — Students ask questions they'd never raise their hand for
- Company all-hands — Employees surface concerns anonymously to leadership
- Board meetings — Junior members contribute without hierarchy pressure
- Community townhalls — Residents ask hard questions about local issues
- Training sessions — Trainees flag confusion without slowing the group
- Conference Q&A — Audience members submit questions during talks
Built With
- agentverse
- elevenlabs
- fetch-ai
- google-gemini
- mongodb-atlas
- next-js
- omegaclaw
- python
- resend
- socket-io
- tailwind-css
- typescript
- world-id