Inspiration
Like many, I often turn to Reddit for authentic, community-driven opinions before making a decision: choosing a new gadget, planning a trip, or just gauging public sentiment on a topic. However, this usually means hours of sifting through countless threads, comments, and often conflicting viewpoints. I was inspired to build "Consensus" to automate this "digital archaeology," leveraging powerful AI to distil the collective wisdom of Reddit into clear, actionable insights, effectively solving the "too much information" problem for a platform rich with genuine user experiences. The idea of creating a "Perplexity for Reddit" especially excited me.
What it does
Consensus is a Perplexity-powered research assistant that synthesizes discussions exclusively from Reddit.com. Users can input any query (e.g., "best noise-cancelling headphones for travel," "opinions on remote work trends," "hidden gems in Kyoto") and optionally specify key priorities (like "battery life" or "affordability") and publication date ranges for the discussions analyzed.
Consensus then uses Perplexity's Sonar API to:
- Search Reddit for relevant discussions within the specified parameters.
- Analyze these discussions to understand the overall sentiment.
- Extract key positive aspects (Pros) and negative aspects (Cons/Issues).
- Identify noteworthy mentions (specific products, services, or key themes).
- Summarize actionable insights and key takeaways.
- Highlight any significant contrasting opinions found.
The application presents this information in a structured, easy-to-digest JSON format (displayed cleanly in the UI), helping users quickly grasp the community's true voice without manual effort. It also features:
- Sonar Meta-Mind (Evidence Cloud): Visualizes key terms and themes Sonar focused on during its analysis by processing the API's "think block," offering transparency into the AI's reasoning.
- Query Navigator: Suggests relevant follow-up questions based on the initial query and results, guiding users on their research journey.
How we built it
"Consensus" is built as a web application using Next.js and the Perplexity Sonar API, featuring:
- Frontend: Next.js (React) with TypeScript, styled with Shadcn UI and Tailwind CSS for a modern, responsive user interface.
- State Management: React's `useState` and `useEffect` hooks for managing user inputs, API call states (loading, error, results), and dynamic UI updates such as the multi-state loading indicator and conditional filter display.
- Perplexity Sonar API Integration:
  - Primary Analysis Call:
    - Model: `sonar-reasoning-pro`, for its advanced synthesis capabilities.
    - Prompts: Dynamically constructed System and User prompts. The System prompt sets the AI's persona and overall JSON output guidelines; the User prompt includes the user's natural-language query, specific instructions to analyze only `reddit.com` content, and any user-defined priorities and date-filter settings.
    - `search_domain_filter: ["reddit.com"]`: crucial for restricting the search exclusively to Reddit.
    - `response_format: { type: "json_schema", json_schema: { schema: OUR_DEFINED_SCHEMA } }`: ensures the API returns a predictable, structured JSON output containing sections like sentiment, pros, cons, noteworthy mentions, takeaways, and contrasting opinions.
    - `search_after_date_filter` / `search_before_date_filter`: set from user input to filter Reddit discussions by publication date.
    - `web_search_options: { search_context_size: "high" }`: chosen after experimentation to allow deeper context retrieval from Reddit discussions.
    - `max_tokens`: set to a generous value (e.g., 8000) to accommodate verbose `<think>` blocks and detailed JSON output.
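The parameters above can be sketched as a single request-body builder. This is an illustrative reconstruction, not the actual Consensus source: the helper name, prompt wording, and the stand-in schema object are assumptions, while the parameter names follow the Sonar API fields listed above.

```typescript
// Illustrative sketch of the primary Sonar analysis request body.
// The helper name, prompt text, and stand-in schema are assumptions;
// the parameter names mirror the Sonar API fields described above.
type DateFilter = { after?: string; before?: string }; // e.g. "1/1/2024"

function buildPrimaryRequest(query: string, priorities: string[], dates: DateFilter) {
  return {
    model: "sonar-reasoning-pro",
    messages: [
      { role: "system", content: "You are Consensus. Respond only with JSON matching the schema." },
      {
        role: "user",
        content:
          `Analyze only reddit.com discussions for: "${query}". ` +
          (priorities.length ? `Prioritize: ${priorities.join(", ")}.` : ""),
      },
    ],
    search_domain_filter: ["reddit.com"], // restrict search exclusively to Reddit
    ...(dates.after ? { search_after_date_filter: dates.after } : {}),
    ...(dates.before ? { search_before_date_filter: dates.before } : {}),
    web_search_options: { search_context_size: "high" },
    response_format: { type: "json_schema", json_schema: { schema: { type: "object" } } }, // stand-in schema
    max_tokens: 8000, // room for verbose <think> blocks plus the JSON body
  };
}
```

The conditional spreads keep the date filters out of the payload entirely when the user leaves them blank.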
  - Secondary API Call (for Sonar Meta-Mind - Evidence Cloud):
    - Model: `sonar`, for text extraction from the `<think>` block of the primary response.
    - Prompt: instructs the model to extract key entities, attributes, and themes from the provided `<think>` block content.
    - `response_format: { type: "json_schema", ... }`: ensures this secondary call also returns structured JSON.
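Feeding the Meta-Mind call requires pulling the reasoning text out of the primary response first. A minimal extraction helper might look like the following; the function name is illustrative, not taken from the project's source:

```typescript
// Pull the reasoning text out of a response that wraps it in <think>...</think>.
// Returns null when no think block is present. Illustrative helper, not the
// project's actual implementation.
function extractThinkBlock(raw: string): string | null {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  return match ? match[1].trim() : null;
}
```

The non-greedy `[\s\S]*?` match tolerates newlines inside the block, which `.` alone would not.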
  - Tertiary API Call (for Query Navigator - Related Questions):
    - Model: `sonar`
    - Prompt: takes the original user query and a condensed summary of the main "Consensus" results to generate relevant follow-up questions.
    - `response_format: { type: "json_schema", ... }`: returns a JSON array of question strings.
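A plausible shape for that Query Navigator `response_format` is a JSON Schema requesting an array of question strings. The exact schema Consensus uses isn't shown in this write-up, so the property names and limits below are assumptions:

```typescript
// Assumed response_format for the Query Navigator call: a JSON Schema asking
// for an object holding an array of follow-up question strings. Property
// names and maxItems are illustrative, not the project's actual schema.
const relatedQuestionsFormat = {
  type: "json_schema",
  json_schema: {
    schema: {
      type: "object",
      properties: {
        questions: { type: "array", items: { type: "string" }, maxItems: 5 },
      },
      required: ["questions"],
    },
  },
} as const;
```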
- UI for Results & Features:
  - The main results are displayed using Shadcn UI Cards, Accordions, and Badges for a clean presentation of the structured JSON.
  - The "Evidence Cloud" uses the `react-tagcloud` package to visualize data from the Meta-Mind API call.
  - The "Query Navigator" displays suggested questions as clickable buttons.
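The terms returned by the Meta-Mind call need to be shaped into the `{ value, count }` objects that `react-tagcloud`'s `TagCloud` component consumes. A sketch of that transformation, assuming the extraction simply yields a flat list of term strings:

```typescript
// Shape extracted theme terms into the { value, count } tag objects that
// react-tagcloud expects, weighting terms that appear repeatedly.
// Sketch only; the real Meta-Mind output schema may differ.
function toTagData(terms: string[]): { value: string; count: number }[] {
  const counts = new Map<string, number>();
  for (const term of terms) {
    const key = term.toLowerCase(); // fold case so "Battery" and "battery" merge
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()].map(([value, count]) => ({ value, count }));
}
```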
Challenges we ran into
- Reliable URL Citations: Initially, we aimed to include direct Reddit URL citations for each point. However, we found that even with specific prompting, the LLM would often hallucinate URLs or provide links that weren't directly relevant to the user's query.
  - Adaptation: To prioritize user experience and avoid frustration from bad links, we pivoted to a URL-less output, focusing on the quality of the synthesized content itself while still ensuring the analysis was grounded in Reddit via `search_domain_filter`.
- Ensuring Consistent Structured JSON Output: Early iterations without a strict `json_schema` in `response_format` sometimes led to variations in output or the inclusion of textual artefacts.
  - Adaptation: Implementing `response_format` with a detailed `json_schema` was key to achieving reliable, parsable JSON, making our UI rendering more robust. We also built resilient parsing logic to handle stray `<think>` blocks or markdown fences.
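That resilient parsing can be sketched as: strip any `<think>` block and markdown fences, then parse what remains. This is an illustration of the approach under those assumptions, not the exact Consensus implementation:

```typescript
// Defensive parse of a Sonar response: drop <think>...</think> reasoning
// blocks and markdown code fences, then JSON.parse the remainder.
// Sketch of the approach, not the project's actual code.
function parseSonarJson<T>(raw: string): T {
  const stripped = raw
    .replace(/<think>[\s\S]*?<\/think>/g, "") // remove reasoning block(s)
    .replace(/```(?:json)?/g, "")             // remove ```json / ``` fences
    .trim();
  return JSON.parse(stripped) as T;
}
```

A malformed remainder still throws from `JSON.parse`, which the caller can surface as an error state in the UI.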
- API Response Truncation (`finish_reason: "length"`): For complex queries or very detailed JSON schema requests, we occasionally hit `max_tokens` limits, leading to truncated, unparsable responses.
  - Adaptation: This necessitated careful error handling for this specific `finish_reason` and iteratively increasing `max_tokens` to a sufficient level (e.g., 8000).
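That error handling amounts to checking `finish_reason` before attempting to parse. A minimal guard, with the `Choice` type reduced to just the fields this sketch needs:

```typescript
// Guard against truncated responses: a finish_reason of "length" means
// max_tokens was exhausted and the JSON body is likely unparsable.
// Minimal illustrative shape; the full API response has more fields.
type Choice = { finish_reason: string; message: { content: string } };

function contentOrTruncationError(choice: Choice): string {
  if (choice.finish_reason === "length") {
    throw new Error("Response truncated: increase max_tokens and retry.");
  }
  return choice.message.content;
}
```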
- User Experience for API Latency: The ~30-40-second response time for a deep analysis is significant.
- Adaptation: We implemented a multi-stage loading indicator with dynamic messages to make the wait feel more transparent and engaging, reassuring the user that meaningful work is happening.
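The multi-stage indicator boils down to mapping elapsed time to a message. The thresholds and copy below are illustrative stand-ins, not the app's actual strings; in the UI, a `useEffect` interval would feed the elapsed seconds into a helper like this:

```typescript
// Pick a loading message from elapsed seconds so the ~30-40 s wait shows
// visible progress. Thresholds and message copy are illustrative, not the
// actual strings used in Consensus.
const STAGES: [number, string][] = [
  [0, "Searching Reddit discussions..."],
  [10, "Reading threads and comments..."],
  [20, "Weighing pros, cons, and sentiment..."],
  [30, "Assembling your Consensus report..."],
];

function loadingMessage(elapsedSeconds: number): string {
  let message = STAGES[0][1];
  for (const [threshold, text] of STAGES) {
    if (elapsedSeconds >= threshold) message = text; // keep the latest stage reached
  }
  return message;
}
```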
Accomplishments that we're proud of
- Successfully forcing the `sonar-reasoning-pro` model to focus exclusively on `reddit.com` using `search_domain_filter`.
- Integrating and showcasing multiple Perplexity Sonar API parameters (`search_domain_filter`, date filters, `response_format`, `search_context_size`).
- Achieving reliably structured JSON output via the `response_format: json_schema` feature, which is crucial for a predictable UI.
- Implementing the "Sonar Meta-Mind (Evidence Cloud)" feature, which makes a secondary API call to process the primary call's `<think>` block, offering users a unique glimpse into the AI's focus areas.
- Designing and implementing the "Query Navigator," which uses another API call to generate contextually relevant follow-up questions based on the initial results, creating a more interactive research experience.
- Crafting a multi-state loading indicator that significantly improves the user experience during the ~30-40 second API processing time.
- Building a clean, intuitive UI with Next.js and Shadcn UI that effectively presents complex, synthesized information.
What we learned
- The power of `search_domain_filter` is immense for creating specialized, source-specific AI applications.
- Direct URL citation from LLMs for dynamic, broad content sources like Reddit is still a hard problem; focusing on the quality of synthesized content rather than potentially unreliable links can lead to a better UX.
- Secondary LLM calls can effectively post-process or enhance the output of a primary LLM call (e.g., summarizing `<think>` blocks or generating related questions).
What's next for Consensus
- Server-Side API Calls: Move all Perplexity API calls to a secure backend (Next.js API routes) to protect the API key and enable potential caching.
- User Accounts & History: Allow users to save their "Consensus" reports and view their search history.
- Deeper "Sonar Meta-Mind" Visualization: Explore more sophisticated ways to visualize the "Research Trail" beyond the word cloud, such as a simplified flowchart if the `<think>` block summarization can yield structured steps.
- Export/Share Options: Allow users to export their structured summary (e.g., as Markdown or a simple PDF).
Built With
- next.js