Gorilla Survey
Inspiration
Traditional surveys are rigid, impersonal, and increasingly ignored. We were inspired by how people actually talk about products online through posts, comments, memes, and complaints shared organically on social media. Instead of forcing users to fill out static forms, we wanted to meet them where they already are and collect feedback in a way that feels natural, conversational, and human. The idea was to combine guerrilla-style marketing tactics with conversational AI to fundamentally rethink how surveys are conducted.
What it does
Gorilla Survey replaces traditional surveys with AI-driven conversations conducted through social media platforms.
First, we identify users who have already interacted with a product online. This includes tweets, posts, hashtags, mentions, comments, tags, pins, and other public signals that indicate real product exposure or usage.
We then collect this data using large-scale social media extraction tools and filter it to find high-intent leads. Once identified, our system reaches out to these users via direct messages or platform-specific messaging channels, simulating genuine interest in their experience with the product.
Rather than presenting users with a list of questions, the chatbot engages them in a natural conversation. The qualitative insights gathered from these conversations are then analyzed and converted into structured, quantitative survey results that businesses can act on.
How we built it
The workflow begins with the client creating a survey using an existing platform such as SurveyMonkey. We extract the survey questions through their developer API and use them as the backbone for our conversational prompts.
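As an illustration, the extraction step looks roughly like the sketch below. It assumes SurveyMonkey's v3 `/surveys/{id}/details` endpoint and simply flattens question text across pages; the survey ID and access token are placeholders.

```python
import requests

API_BASE = "https://api.surveymonkey.com/v3"

def fetch_survey_questions(survey_id: str, access_token: str) -> list[str]:
    """Pull a survey's question texts, flattened across its pages."""
    resp = requests.get(
        f"{API_BASE}/surveys/{survey_id}/details",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    questions = []
    for page in resp.json().get("pages", []):
        for question in page.get("questions", []):
            # Each question object carries its display text in "headings".
            for heading in question.get("headings", []):
                questions.append(heading["heading"])
    return questions
```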
Next, we use Yellowcake to identify relevant social media content that references the product being surveyed. This includes posts tagging the company, Reddit comments discussing the product, or Instagram images where the product is visibly worn or used.
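In code, the lead-gathering call is shaped roughly like the sketch below. The endpoint URL, request parameters, and response shape here are illustrative placeholders rather than Yellowcake's documented API.

```python
import requests

def find_product_mentions(product: str, api_key: str) -> list[dict]:
    """Collect public posts that reference the product across platforms."""
    resp = requests.post(
        "https://api.yellowcake.example/search",  # placeholder endpoint, not the real API
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "query": product,
            "platforms": ["reddit", "twitter", "instagram"],
            "signals": ["mentions", "hashtags", "comments", "image_tags"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])
```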
Once potential leads are identified, our first agent processes this data through SAM (Solace Agent Mesh) to qualify users and add contextual understanding. This step ensures we are contacting people who are genuinely relevant and likely to provide meaningful feedback.
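At its core, qualification reduces to a scoring prompt. The snippet below is a simplified stand-in for what the agent does inside SAM; the model name, prompt wording, and output schema are illustrative.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def qualify_lead(post_text: str, product: str) -> dict:
    """Ask the model to rate a scraped post's relevance and intent."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You qualify social media posts as survey leads. Respond in "
                'JSON: {"relevance": <0-1>, "used_product": <bool>, "summary": <str>}.'
            )},
            {"role": "user", "content": f"Product: {product}\nPost: {post_text}"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

A lead only moves on to outreach when its relevance clears a threshold and the model believes the author actually used the product.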
After qualification, our chat agent initiates conversations directly on the platform where the user was discovered. The agent is provided with both the survey questions and the contextual signals we collected earlier. It starts with a natural conversational opener and gradually probes the user’s opinions based on the survey goals.
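Concretely, each turn folds the survey goals and the scraped context into the system prompt, along the lines of this sketch (model choice and prompt wording are illustrative):

```python
from openai import OpenAI

client = OpenAI()

def next_message(questions: list[str], context: str, history: list[dict]) -> str:
    """Generate the agent's next turn given the chat history so far."""
    system = (
        "You are chatting casually with someone who mentioned a product online. "
        f"Context about them: {context}\n"
        "Over the conversation, naturally surface answers to these survey goals, "
        "one at a time, without ever reading them out as a list:\n- "
        + "\n- ".join(questions)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system}, *history],
    )
    return response.choices[0].message.content
```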
To guide the conversation, we use three internal scoring tools, sketched in code after this list:
- Net Promoter Score (NPS): Estimates how likely the user is to recommend or criticize the product in real time.
- Insight Depth Score (IDS): Measures how detailed and actionable the user’s feedback is, distinguishing between shallow reactions and specific critiques.
- Engagement Friction Index (EFI): Tracks how engaged the user is throughout the conversation. A low EFI signals when the agent should either offer compensation or gracefully exit the interaction.
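Here is a minimal sketch of how these scores drive the agent's next move; the update rules and thresholds are illustrative rather than our tuned values.

```python
from dataclasses import dataclass

@dataclass
class ConversationScores:
    nps: float = 5.0   # 0-10 running estimate of recommend-vs-criticize sentiment
    ids: float = 0.0   # 0-1 depth/actionability of the feedback so far
    efi: float = 1.0   # 0-1 engagement; decays on terse or slow replies

def update_efi(scores: ConversationScores, reply: str, seconds_to_reply: float) -> None:
    """Decay engagement on short or slow replies; recover it on rich ones."""
    if len(reply.split()) < 4 or seconds_to_reply > 600:
        scores.efi *= 0.8
    else:
        scores.efi = min(1.0, scores.efi + 0.1)

def next_action(scores: ConversationScores) -> str:
    """Keep probing, offer an incentive, or exit, based on engagement."""
    if scores.efi < 0.3:
        return "exit_gracefully"
    if scores.efi < 0.5:
        return "offer_compensation"
    return "continue_probing"
```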
Once conversations conclude, a third agent converts the qualitative dialogue into structured responses that map directly to the original survey questions. The system also generates summaries segmented by low, medium, and high NPS users.
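The structuring step is another constrained LLM call; this sketch assumes a JSON-mode completion, with the prompt and output schema being illustrative.

```python
import json
from openai import OpenAI

client = OpenAI()

def structure_responses(transcript: str, questions: list[str]) -> dict:
    """Map a finished conversation back onto the original survey questions."""
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(questions))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Fill in survey answers from the conversation transcript. "
                'Respond in JSON: {"answers": {"1": <str>, ...}, "nps": <0-10>}. '
                "Use null for any question the transcript never covered."
            )},
            {"role": "user", "content": f"Questions:\n{numbered}\n\nTranscript:\n{transcript}"},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

Segmentation is then a simple banding of the extracted score (e.g., 0-6 low, 7-8 medium, 9-10 high, mirroring the usual NPS detractor/passive/promoter cut).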
All insights are presented through a clean, intuitive UI that emphasizes readability using charts, graphs, and concise summaries.
Challenges we ran into
One of our biggest challenges was balancing runtime with output quality. Our system relies on multiple specialized agents, which significantly improves insight quality but increases computational cost and latency.
We also faced reliability issues during development. At the time, we primarily used Yellowcake to scrape Reddit threads and analyze comments. While the API was easy to work with, periodic outages forced us to redesign parts of our pipeline to be more fault-tolerant and modular.
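The fault-tolerance fix mostly amounted to wrapping scraper calls in retries with exponential backoff, roughly like this (attempt count and delays are illustrative):

```python
import time
from typing import Callable, TypeVar

import requests

T = TypeVar("T")

def with_retries(fn: Callable[[], T], attempts: int = 4, base_delay: float = 2.0) -> T:
    """Call fn(), retrying requests-level failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except requests.RequestException:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# e.g. mentions = with_retries(lambda: find_product_mentions("AcmeWatch", API_KEY))
```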
Accomplishments that we're proud of
We’re especially proud of tackling a space that hasn’t meaningfully changed in years. Instead of improving existing survey tools incrementally, we reimagined surveys from a human-first perspective. By focusing on psychology, conversational flow, and user comfort, we built a system that encourages participation rather than demanding it.
What we learned
We learned that high-quality feedback isn’t about asking better questions, but about asking questions at the right moment and in the right context. We also gained experience designing multi-agent systems, handling noisy real-world social data, and translating unstructured human conversation into structured business insights.
What’s next for Gorilla Survey
Next, we plan to expand platform support, improve agent efficiency, and introduce adaptive incentives based on real-time engagement signals. We also want to give clients more control over conversational tone and targeting criteria, allowing Gorilla Survey to scale across industries while maintaining authentic human interaction.
Built With
- html
- javascript
- openai
- python
- solace
- surveymonkeyapi
- yellowcake
