# SlackBot - AI-Powered Slack Agent for Team Communication

## Inspiration

In high school, I was part of a newspaper with over 150 students across multiple sections—news, features, sports, opinion, photography, design. We had set roles, strict deadlines, and relied entirely on Slack to coordinate everything.

It was chaos.

With dozens of channels spread across the sections, critical decisions got buried in threads. "Did we approve that story?" "Who's covering the game Friday?" "What did the editor-in-chief say about the layout?" Important tasks were mentioned once in a thread and forgotten. New members joining mid-semester had no way to catch up on what had already been decided.

I watched talented writers miss assignments because they didn't see a message in #news-assignments. I saw editors waste hours re-explaining decisions that were already discussed three weeks ago in a thread nobody could find. Communication broke down not because people didn't care—but because Slack wasn't designed to preserve institutional knowledge or surface actionable items from conversations.

SlackBot is the tool I wish we had back then.

## What It Does

SlackBot is an intelligent Slack agent powered by LangGraph that transforms chaotic Slack conversations into organized, actionable insights:

  • "What did I miss in #news-team since yesterday?" → AI-generated summaries of channel activity
  • "Show me all my open tasks" → Tracks obligations and action items extracted from threads
  • "Have we discussed the photo policy before?" → Searches past conversations and surfaces relevant decisions
  • "Extract decisions from this thread" → Automatically parses discussions and saves decisions to a structured database

Instead of losing critical information in an endless stream of messages, SlackBot maintains a persistent agenda database of tasks, decisions, obligations, and action items—all extracted automatically from your team's natural conversations.

## How I Built It

### Architecture

The system uses a multi-agent graph architecture powered by LangGraph, combining LLM reasoning with structured data storage:

User Input → Intent Router → Specialized Nodes → LLM Responder → Tool Execution → Output
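The flow above can be sketched as a minimal dispatch loop. This is an illustrative stand-in, not the actual modules: the names (`AgentState`, `NODES`, `run`) are hypothetical, and a keyword match substitutes for the LLM router.

```python
# Minimal sketch of the routing pipeline (illustrative names, not the real modules).
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """State threaded through the graph from node to node."""
    user_input: str
    intent: str = "general_query"
    context: dict = field(default_factory=dict)
    output: str = ""

def router(state: AgentState) -> AgentState:
    # In the real system an LLM classifies the intent; a keyword match stands in here.
    if "miss" in state.user_input.lower():
        state.intent = "summarize_missed"
    return state

def summarizer(state: AgentState) -> AgentState:
    # Specialized node: prepares context for the responder.
    state.context["task"] = "summarize recent channel activity"
    return state

NODES = {"summarize_missed": summarizer}

def run(user_input: str) -> str:
    state = router(AgentState(user_input))
    if state.intent in NODES:
        state = NODES[state.intent](state)
    # Responder: synthesizes context into an answer (stubbed as a string).
    state.output = f"[{state.intent}] {state.context.get('task', 'answering directly')}"
    return state.output
```

The real graph adds the tool-execution loop between the responder and the output; the sketch only shows the router → node → responder spine.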

### Tech Stack

  • LangGraph: Orchestrates the agent's state machine with 6 specialized nodes (router, summarizer, searcher, tracker, extractor, responder)
  • OpenAI GPT-4o: Provides natural language understanding and tool-calling capabilities
  • Model Context Protocol (MCP): Integrates with Slack via the slack-mcp-server for conversations, search, and messaging
  • PostgreSQL/Supabase: Stores extracted tasks, decisions, obligations, and team knowledge
  • Redis: Caches Slack API responses (5-min cache for messages, 24hr for users/channels) to avoid rate limits
  • LangChain: Binds tools to the LLM and manages the conversation flow
  • SQLAlchemy: ORM for agenda database with models for AgendaItem, Decision, UserProfile, ThreadTitle

### Key Components

  1. Intent Classification Router

    • Uses LLM to classify user requests into 6 intent types: summarize_missed, search_previous, track_obligations, extract_decisions, send_message, general_query
    • Extracts parameters: target channel, time range, search query, thread URL, user mentions
  2. Specialized Processing Nodes

    • Each intent type has a dedicated node that prepares context for the responder
    • Example: summarizer.py prepares context for "what did I miss" queries
    • Example: extractor.py sets up thread analysis for decision extraction
  3. Responder with Tool Binding

    • Core reasoning engine that receives context and decides which tools to invoke
    • Has access to 15+ tools across Slack MCP and agenda database
    • Can call conversations_history, search_messages, agenda_db_upsert_item, etc.
    • Loops back through tool executor until task is complete
  4. Intelligent Caching Layer

    • CachedTool wrapper around MCP tools reduces redundant Slack API calls
    • User/channel data cached for 24 hours (rarely changes)
    • Message data cached for 5 minutes (balances freshness and efficiency)
    • Graceful fallback if Redis is unavailable
  5. Agenda Database Schema

     ```
     AgendaItem
     ├── type: task | decision | obligation | question | action_item
     ├── status: open | in_progress | completed | deferred
     ├── source: workspace_id, channel_id, thread_ts (Slack provenance)
     ├── assigned_to: user_id, user_name
     ├── priority: normal | high | urgent
     └── history: complete audit trail of changes
     ```
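The schema maps to a SQLAlchemy model in the project; a plain-dataclass sketch of the same shape (field and method names here are illustrative) shows how the status history doubles as an audit trail:

```python
# Plain-dataclass sketch of the AgendaItem schema (the project uses SQLAlchemy).
from dataclasses import dataclass, field
from typing import Optional

ITEM_TYPES = {"task", "decision", "obligation", "question", "action_item"}
STATUSES = {"open", "in_progress", "completed", "deferred"}

@dataclass
class AgendaItem:
    type: str
    status: str = "open"
    priority: str = "normal"
    # Slack provenance: lets users jump back to the original conversation.
    workspace_id: Optional[str] = None
    channel_id: Optional[str] = None
    thread_ts: Optional[str] = None
    assigned_to: Optional[str] = None
    history: list = field(default_factory=list)

    def __post_init__(self):
        if self.type not in ITEM_TYPES:
            raise ValueError(f"unknown item type: {self.type}")

    def set_status(self, status: str) -> None:
        if status not in STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.history.append((self.status, status))  # audit trail of changes
        self.status = status
```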

### Workflow Example

When a user asks: "What decisions were made about the holiday issue in #editors?"

  1. Router classifies intent as search_previous and extracts query parameters
  2. Searcher node prepares search context
  3. Responder decides to call:
     • conversations_search_messages("holiday issue", channel_id)
     • search_decisions_about("holiday issue") from the agenda DB
  4. Tool executor checks the Redis cache → calls Slack MCP if needed → queries PostgreSQL
  5. Responder synthesizes results into a coherent answer with Slack thread links
  6. Returns: "Here are the 3 decisions made about the holiday issue..." with decision items and links to the original messages

## Challenges I Faced

### 1. MCP Server Initialization Race Condition

The Slack MCP server needs time to initialize and populate its internal cache of users and channels. Initially, tool calls would fail with "channel not found" errors because the server hadn't finished loading workspace data.

Solution: Implemented a 20-second initialization delay with cache priming. After connecting to the MCP server, the agent proactively calls channels_list and search_users to warm the cache, then waits before accepting user requests.

```python
# From client.py
await self.prime_cache()   # Preload users/channels
await asyncio.sleep(20)    # Wait for MCP initialization
```

### 2. Redis Cache Invalidation Strategy

Determining the right TTL for different tool types was tricky. Too short = wasted API calls. Too long = stale data.

Solution: Implemented tiered caching based on data volatility:

- Persistent cache (24 hr): search_users, channels_list (rarely change)
- Short TTL (5 min): conversations_history, conversations_replies (balances freshness and efficiency)
- No cache: conversations_add_message (write operations should never be cached)
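The tiering reduces to a small lookup policy. A sketch with the TTL values described above (the `ttl_for` helper is illustrative, not the project's actual API):

```python
# Sketch of the tiered-TTL policy; tool names and durations are from the writeup.
PERSISTENT_TTL = 24 * 60 * 60   # 24 hours: data that rarely changes
SHORT_TTL = 5 * 60              # 5 minutes: message data

TTL_BY_TOOL = {
    "search_users": PERSISTENT_TTL,
    "channels_list": PERSISTENT_TTL,
    "conversations_history": SHORT_TTL,
    "conversations_replies": SHORT_TTL,
    "conversations_add_message": 0,  # write operation: never cached
}

def ttl_for(tool_name: str) -> int:
    """Return the cache TTL in seconds for a tool; 0 means 'do not cache'."""
    return TTL_BY_TOOL.get(tool_name, 0)  # unknown tools default to uncached
```

Defaulting unknown tools to uncached is the safe choice: a stale read is worse than an extra API call when a new tool hasn't been classified yet.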

### 3. Intent Classification Ambiguity

Users often phrase requests in ambiguous ways. "Check #engineering" could mean summarize, search, or extract decisions.

Solution: Enhanced the router prompt with few-shot examples and explicit parameter extraction. The LLM now returns structured JSON with confidence scores, allowing a fallback to the general_query intent when uncertain.
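The fallback logic can be sketched as a small parser. The exact JSON shape (`intent`/`confidence`/`params` keys) and the 0.6 threshold are assumptions for illustration, based on the description above:

```python
# Sketch of parsing the router's structured output with a confidence fallback.
# The JSON field names and the threshold value are illustrative assumptions.
import json

VALID_INTENTS = {
    "summarize_missed", "search_previous", "track_obligations",
    "extract_decisions", "send_message", "general_query",
}

def parse_intent(raw: str, threshold: float = 0.6) -> dict:
    """Fall back to general_query on malformed output or low confidence."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"intent": "general_query", "params": {}}
    if data.get("intent") not in VALID_INTENTS or data.get("confidence", 0.0) < threshold:
        return {"intent": "general_query", "params": data.get("params", {})}
    return {"intent": data["intent"], "params": data.get("params", {})}
```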

### 4. Tool-Call Loops and Infinite Cycles

The responder node can call tools, which return results, which trigger more tool calls. Without proper termination conditions, the agent would loop infinitely.

Solution: Implemented a tools_condition function that checks whether the last message contains tool calls:

- If yes → route to the tool executor → back to the responder
- If no → END (task complete)

I also added maximum-iteration limits and state tracking to prevent runaway loops.
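The termination check combines both guards. A simplified sketch (the message shape is reduced to a dict; LangGraph's prebuilt tools_condition inspects real message objects):

```python
# Sketch of the loop-termination check: tool calls continue the loop,
# no tool calls (or too many iterations) end it. Message shape is simplified.
MAX_ITERATIONS = 10  # illustrative limit; a hard stop against runaway loops

def tools_condition(messages: list, iterations: int) -> str:
    """Return 'tools' to route to the tool executor, or 'end' when done."""
    if iterations >= MAX_ITERATIONS:
        return "end"
    last = messages[-1] if messages else {}
    return "tools" if last.get("tool_calls") else "end"
```

In the graph, the "tools" branch routes to the tool executor and loops back to the responder; the "end" branch terminates the run.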

### 5. Database Schema Evolution

As I added more features (decisions, FAQ answers, thread titles), the schema grew complex and migrations became error-prone.

Solution: Used Alembic for version-controlled database migrations. Each schema change gets a migration file with upgrade() and downgrade() functions, making it safe to evolve the database structure.

### 6. Asynchronous Everything

Mixing sync and async code caused blocking issues: the Slack MCP tools are async, SQLAlchemy needed async sessions, and Redis needed async operations.

Solution: Went fully async from top to bottom:

- asyncio.run() at the main entry point
- async def for all service methods
- AsyncSession for SQLAlchemy
- aioredis for Redis operations
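The async cache layer also has to fail gracefully when Redis is down. A self-contained sketch of the get-or-fetch pattern (a plain dict stands in for the Redis client, and the class name is illustrative):

```python
# Async sketch of the graceful-fallback cache: backend errors degrade to a
# direct fetch instead of failing the request. A dict stands in for Redis.
import asyncio

class FallbackCache:
    def __init__(self, backend=None):
        self.backend = backend or {}  # real system: an async Redis client

    async def get_or_fetch(self, key, fetch, ttl):
        try:
            if ttl > 0 and key in self.backend:
                return self.backend[key]   # cache hit
        except Exception:
            pass  # cache unavailable: fall through to the fetch
        value = await fetch()
        if ttl > 0:
            try:
                self.backend[key] = value  # caching is best-effort
            except Exception:
                pass
        return value

async def demo():
    cache = FallbackCache()
    calls = 0

    async def fetch():
        nonlocal calls
        calls += 1
        return ["msg1", "msg2"]

    await cache.get_or_fetch("conversations_history:C1", fetch, ttl=300)
    await cache.get_or_fetch("conversations_history:C1", fetch, ttl=300)
    return calls  # the second request is served from cache
```

Wrapping every cache touch in try/except is what makes the caching transparent: a Redis outage costs latency, not correctness.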

## What I Learned

### Technical Skills

- LangGraph state machines: How to design multi-node graphs with conditional edges and loops
- Tool binding with LLMs: Exposing Python functions as tools via LangChain's @tool decorator
- Model Context Protocol: Connecting to MCP servers and wrapping tools for LLM use
- Async Python architecture: Building fully async systems with proper error handling
- Caching strategies: Tiered caching based on data volatility and access patterns
- Structured logging: Using structlog for context-aware debugging
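The tool-binding idea boils down to a registry keyed by function name. A toy version of the principle behind LangChain's @tool decorator (the stub tool and dispatch helper here are illustrative, not the library's API):

```python
# Toy version of tool binding: a decorator registers functions in a registry
# the responder can dispatch on. Names below are illustrative stubs.
TOOL_REGISTRY = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def agenda_db_upsert_item(item_type: str, title: str) -> str:
    """Create or update an agenda item (stubbed)."""
    return f"saved {item_type}: {title}"

def execute_tool_call(name: str, **kwargs) -> str:
    """Dispatch an LLM-requested tool call to the registered function."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name](**kwargs)
```

The real decorator additionally derives a JSON schema from the signature and docstring so the LLM knows how to call each tool; the registry-and-dispatch core is the same.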

### Product Insights

- Context is everything: The same query ("check #engineering") means different things depending on user intent. Good classification is critical.
- Caching makes or breaks UX: Without Redis caching, response times were 3-5 seconds. With caching, cache hits return in under 500 ms.
- Provenance matters: Storing source_channel_id and thread_ts for every agenda item lets users jump back to the original conversation—essential for trust.
- Simplicity wins: Initially I tried to implement complex RAG (Retrieval-Augmented Generation) for search. Simple Slack search + LLM summarization worked better.

### Design Lessons

- Fail gracefully: Redis being down shouldn't break the entire system—caching should be transparent
- Log everything: Structured logging with context variables saved hours of debugging MCP tool failures
- Test with real data: Synthetic test messages don't capture the messiness of real Slack conversations with emojis, threads, reactions, and formatting

## What's Next

If I continue building SlackBot, here are the features I'd add:

1. Proactive notifications: "You have 3 overdue tasks" sent to Slack DMs every morning
2. Smart @mentions: Automatically tag people when tasks are assigned from thread extraction
3. Decision conflict detection: "This contradicts a decision made 2 weeks ago—are you sure?"
4. Visual dashboard: Web UI showing team obligations, decision timeline, and activity heatmaps
5. Multi-workspace support: Scale beyond a single Slack workspace for organizations with multiple teams

---

SlackBot turns Slack from a chaotic message stream into an intelligent system that remembers decisions, tracks obligations, and surfaces the right information at the right time. It's the tool every high-velocity team deserves.
